More stories

  •

    Advancing dynamic brain imaging with AI

    MRI, electroencephalography (EEG) and magnetoencephalography have long served as the tools to study brain activity, but new research from Carnegie Mellon University introduces a novel, AI-based dynamic brain imaging technology that could map rapidly changing electrical activity in the brain with high speed, high resolution, and low cost. The advance comes on the heels of more than thirty years of research by Bin He focused on ways to improve non-invasive dynamic brain imaging technology.
    Brain electrical activity is distributed over the three-dimensional brain and changes rapidly over time. Many efforts have been made to image brain function and dysfunction, and each method has its pros and cons. For example, MRI has commonly been used to study brain activity, but it is not fast enough to capture brain dynamics. EEG is a favorable alternative to MRI; however, its less-than-optimal spatial resolution has been a major hindrance to its wide use for imaging.
    Electrophysiological source imaging has also been pursued, in which scalp EEG recordings are translated back to the brain using signal processing and machine learning to reconstruct dynamic pictures of brain activity over time. While EEG source imaging is generally cheaper and faster, specific training and expertise are needed for users to select and tune parameters for every recording. In newly published work, He and his group introduce a first-of-its-kind AI-based dynamic brain imaging methodology that has the potential to image the dynamics of neural circuits with precision and speed.
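    The PNAS paper's exact network architecture is not reproduced here, but the core idea can be sketched in a few lines: simulate scalp EEG from source activity with a forward (lead-field) model, then train a network to invert that mapping so that recorded EEG can be translated back to source space without per-recording parameter tuning. Everything below (dimensions, architecture, training setup) is an illustrative stand-in, not the study's actual model.

```python
# Illustrative sketch of a "learned inverse" for EEG source imaging.
# A toy forward model y = L @ s + noise maps simulated source activity s
# to scalp EEG y; a small network is then trained to recover s from y.
# Dimensions, architecture and training details are hypothetical.
import torch
import torch.nn as nn

n_sources, n_channels = 500, 64
L = torch.randn(n_channels, n_sources) / n_sources**0.5   # toy lead-field matrix

def simulate_batch(batch_size=128, noise=0.05):
    # Sparse, focal source activity as a crude stand-in for neural circuits
    s = torch.zeros(batch_size, n_sources)
    idx = torch.randint(0, n_sources, (batch_size,))
    s[torch.arange(batch_size), idx] = 1.0
    y = s @ L.T + noise * torch.randn(batch_size, n_channels)
    return y, s

model = nn.Sequential(
    nn.Linear(n_channels, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_sources),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):                      # train the learned inverse
    y, s = simulate_batch()
    loss = nn.functional.mse_loss(model(y), s)
    opt.zero_grad(); loss.backward(); opt.step()

# At inference time, new scalp EEG is mapped back to source space in one
# forward pass, with no per-recording parameter tuning by the user.
y_new, _ = simulate_batch(1)
s_hat = model(y_new)
```

    In the study itself, the network is trained with the help of multi-scale, biophysically inspired brain models rather than the toy random lead field used above.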
    “As part of a decades-long effort to develop innovative, non-invasive functional neuroimaging solutions, I have been working on a dynamic brain imaging technology that can provide precision, be effective and easy to use, to better serve clinicians and researchers,” said Bin He, professor of biomedical engineering at Carnegie Mellon University.
    He continues, “Our group is the first to reach the goal by introducing AI and multi-scale brain models. Using biophysically inspired neural networks, we are innovating this deep learning approach to train a neural network that can precisely translate scalp EEG signals back to neural circuit activity in the brain without human intervention.”
    In He’s study, which was recently published in Proceedings of the National Academy of Sciences (PNAS), the performance of this new approach was evaluated by imaging sensory and cognitive brain responses in 20 healthy human subjects. It was also rigorously validated in identifying epileptogenic tissue in a cohort of 20 drug-resistant epilepsy patients by comparing AI-based noninvasive imaging results with invasive measurements and surgical resection outcomes.
    In terms of results, the novel AI approach outperformed conventional source imaging methods in both precision and computational efficiency.
    “With this new approach, you only need a centralized location to perform brain modeling and train the deep neural network,” explained He. “After collecting data in a clinical or research setting, clinicians and researchers could remotely submit the data to the centralized, well-trained deep neural networks and quickly receive accurate analysis results. This technology could speed up diagnosis and assist neurologists and neurosurgeons in better and faster surgical planning.”
    As a next step, the group plans to conduct larger clinical trials in efforts to bring the research closer to clinical implementation.
    “The goal is for efficient and effective dynamic brain imaging with simple operation and low cost,” explained He. “This AI-based brain source imaging technology makes it possible.”
    Story Source:
    Materials provided by College of Engineering, Carnegie Mellon University. Original written by Sara Vaccar.

  •

    Engineers repurpose 19th-century photography technique to make stretchy, color-changing films

    Imagine stretching a piece of film to reveal a hidden message. Or checking an arm band’s color to gauge muscle mass. Or sporting a swimsuit that changes hue as you do laps. Such chameleon-like, color-shifting materials could be on the horizon, thanks to a photographic technique that’s been resurrected and repurposed by MIT engineers.
    By applying a 19th-century color photography technique to modern holographic materials, an MIT team has printed large-scale images onto elastic materials that, when stretched, can transform their color, reflecting different wavelengths as the material is strained.
    The researchers produced stretchy films printed with detailed flower bouquets that morph from warm to cooler shades when the films are stretched. They also printed films that reveal the imprint of objects such as a strawberry, a coin, and a fingerprint.
    The team’s results provide the first scalable manufacturing technique for producing detailed, large-scale materials with “structural color” — color that arises as a consequence of a material’s microscopic structure, rather than from chemical additives or dyes.
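    The physics can be illustrated with the simplest model of such a structure: a periodic multilayer (Lippmann/Bragg-type) reflector whose peak reflected wavelength at normal incidence is roughly twice the effective refractive index times the layer period. Stretching the film thins the layers and therefore blue-shifts the reflection, which is why the printed images drift from warm toward cooler colors. The numbers below are purely illustrative, not values from the paper.

```python
# Illustrative only: peak reflected wavelength of a periodic multilayer
# (Lippmann/Bragg-type structure) at normal incidence, lambda = 2 * n * d,
# where n is the effective refractive index and d the layer period.
n_eff = 1.5          # assumed effective refractive index of the film
d0_nm = 200.0        # assumed unstretched layer period (nm)

def peak_wavelength_nm(stretch_ratio):
    # Stretching in-plane thins the layers; assume the period scales
    # inversely with the stretch ratio (a crude simplification -- the real
    # thinning depends on the film's Poisson ratio).
    d = d0_nm / stretch_ratio
    return 2 * n_eff * d

for stretch in (1.0, 1.2, 1.5):
    print(f"stretch x{stretch:.1f}: peak ~ {peak_wavelength_nm(stretch):.0f} nm")
# ~600 nm (orange-red) unstretched, shifting toward ~400 nm (violet-blue)
# as the film is stretched, i.e. from warmer to cooler hues.
```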
    “Scaling these materials is not trivial, because you need to control these structures at the nanoscale,” says Benjamin Miller, a graduate student in MIT’s Department of Mechanical Engineering. “Now that we’ve cleared this scaling hurdle, we can explore questions like: Can we use this material to make robotic skin that has a human-like sense of touch? And can we create touch-sensing devices for things like virtual and augmented reality or medical training? It’s a big space we’re looking at now.”
    The team’s results appear today in Nature Materials. Miller’s co-authors are MIT undergraduate Helen Liu and Mathias Kolle, associate professor of mechanical engineering at MIT.

  •

    Quantum control for advanced technology: Past and present

    One of the cornerstones of the implementation of quantum technology is the creation and manipulation of the shape of external fields that can optimise the performance of quantum devices. Known as quantum optimal control, this set of methods comprises a field that has rapidly evolved and expanded over recent years.
    A new review paper published in EPJ Quantum Technology, authored by Christiane P. Koch of the Dahlem Center for Complex Quantum Systems and Fachbereich Physik, Freie Universität Berlin, along with colleagues from across Europe, assesses recent progress in understanding the controllability of quantum systems as well as in applying quantum control to quantum technologies. As such, it lays out a potential roadmap for future technology.
    While quantum optimal control builds on conventional control theory encompassing the interface of applied mathematics, engineering, and physics, it must also factor in the quirks and counter-intuitive nature of quantum physics.
    This includes superposition, the concept that a quantum system can exist in multiple states at one time, one of the keys to the advanced computing power of machines that rely on quantum bits — or qubits.
    Ultimately, the main goal of quantum optimal control is to make emerging quantum technologies operate at optimal performance and reach their physical limits.
    “Each device architecture comes with specific limits but these limits are often not attained by more traditional ways to operate the device,” Koch says. “Using pulse shaping may push the devices to the limits in terms of accuracy or operation speed that is fundamentally possible.”
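    As a toy illustration of what pulse shaping means in practice (and emphatically not a method taken from the review), the sketch below optimizes a piecewise-constant control amplitude so that a single qubit is driven from |0⟩ to |1⟩ with high fidelity. Real quantum optimal control works with far richer system models and constraints, and with dedicated algorithms such as GRAPE or Krotov’s method; all parameters here are arbitrary.

```python
# Toy pulse-shaping sketch (illustrative, not from the review): optimize a
# piecewise-constant control amplitude u(t) so that H(t) = H0 + u(t) * Hc
# steers a single qubit from |0> to |1> with high fidelity.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H0, Hc = 0.5 * sz, 0.5 * sx               # drift and control Hamiltonians
psi0 = np.array([1, 0], dtype=complex)    # initial state |0>
target = np.array([0, 1], dtype=complex)  # target state |1>
n_slices, T = 20, 5.0                     # pulse segments and total duration
dt = T / n_slices

def infidelity(u):
    """1 - fidelity of the final state with the target, for pulse u."""
    psi = psi0
    for uk in u:                          # piecewise-constant time evolution
        psi = expm(-1j * (H0 + uk * Hc) * dt) @ psi
    return 1.0 - abs(target.conj() @ psi) ** 2

u0 = 0.1 * np.random.default_rng(0).normal(size=n_slices)  # small seed pulse
result = minimize(infidelity, u0, method="L-BFGS-B")
print("optimized infidelity:", result.fun)  # typically very close to 0
```

    The device limits Koch refers to would enter such an optimization as constraints, for example bounds on the control amplitude or on the total pulse duration T.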
    The authors of this review consider factors in the discipline, including the extent to which a quantum system can be established, controlled and observed without causing this superposition to collapse, a failure that seriously impedes the stability of quantum computers.
    The review also suggests that just as conventional engineers have a control theoretical framework to rely on, the training of future “quantum engineers” may require a similar framework which is yet to be developed.
    A quantum system that unifies theory and experiment is one of the current research goals of the field, with the authors pointing out that such a system will also form the basis for the development of optimal control strategies.
    As well as assessing recent progress towards this goal, the team lays out some of the roadblocks that may lie ahead for the field, roadblocks that will need to be overcome if a quantum technological future is to be realised.
    Story Source:
    Materials provided by Springer.

  •

    Fiddler crab eye view inspires researchers to develop novel artificial vision

    Artificial vision systems find a wide range of applications, including self-driving cars, object detection, crop monitoring, and smart cameras. Such systems are often inspired by the vision of biological organisms. For instance, human and insect vision have inspired terrestrial artificial vision, while fish eyes have led to aquatic artificial vision. While the progress is remarkable, current artificial vision systems suffer from some limitations: they are not suitable for imaging both terrestrial and underwater environments, and they are limited to a hemispherical (180°) field of view (FOV).
    To overcome these issues, a group of researchers from Korea and the USA, including Professor Young Min Song from Gwangju Institute of Science and Technology in Korea, has now designed a novel artificial vision system with an omnidirectional imaging ability, which can work in both aquatic and terrestrial environments. Their study was made available online on 12 July 2022 and published in Nature Electronics on 11 July 2022.
    “Research in bio-inspired vision often results in a novel development that did not exist before. This, in turn, enables a deeper understanding of nature and ensures that the developed imaging device is both structurally and functionally effective,” says Prof. Song, explaining his motivation behind the study.
    The inspiration for the system came from the fiddler crab (Uca arcuata), a semiterrestrial crab species with amphibious imaging ability and a 360° FOV. These remarkable features result from the ellipsoidal eye stalk of the fiddler crab’s compound eyes, enabling panoramic imaging, and flat corneas with a graded refractive index profile, allowing for amphibious imaging.
    Accordingly, the researchers developed a vision system consisting of an array of flat micro-lenses with a graded refractive index profile that was integrated into a flexible comb-shaped silicon photodiode array and then mounted onto a spherical structure. The graded refractive index and the flat surface of the micro-lens were optimized to offset the defocusing effects due to changes in the external environment. Put simply, light rays traveling in different mediums (corresponding to different refractive indices) were made to focus at the same spot.
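    A rough way to see the defocusing problem that the flat, graded-index micro-lenses address is the thin-lens (lensmaker’s) relation in a surrounding medium: the focusing power of a curved lens depends on the index contrast between the lens and its environment, so the same curved lens focuses at very different depths in air and in water. The values below are illustrative only.

```python
# Illustrative only: why a conventional curved lens defocuses when moved
# between air and water. Thin-lens relation in a surrounding medium of
# index n_med:  1/f = (n_lens/n_med - 1) * (1/R1 - 1/R2).
n_lens = 1.52                      # assumed glass-like lens material
R1, R2 = 10.0, -10.0               # assumed surface radii (mm), biconvex

def focal_length_mm(n_medium):
    power = (n_lens / n_medium - 1.0) * (1.0 / R1 - 1.0 / R2)
    return 1.0 / power

print(f"focal length in air   (n=1.00): {focal_length_mm(1.00):6.1f} mm")
print(f"focal length in water (n=1.33): {focal_length_mm(1.33):6.1f} mm")
# The focus shifts by tens of millimetres between the two media -- the kind
# of defocusing the graded-index flat micro-lenses were optimized to offset.
```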
    To test the capabilities of their system, the team performed optical simulations and imaging demonstrations in air and water. Amphibious imaging was performed by immersing the device halfway in water. To their delight, the images produced by the system were clear and free of distortions. The team further showed that the system had a panoramic visual field, 300° horizontally and 160° vertically, in both air and water. Additionally, the spherical mount was only 2 cm in diameter, making the system compact and portable.
    “Our vision system could pave the way for 360° omnidirectional cameras with applications in virtual or augmented reality or an all-weather vision for autonomous vehicles,” speculates Prof. Song excitedly.
    Story Source:
    Materials provided by GIST (Gwangju Institute of Science and Technology).

  •

    A roadmap for the future of quantum simulation

    A roadmap for the future direction of quantum simulation has been set out in a paper co-authored at the University of Strathclyde.
    Quantum computers are hugely powerful devices with a capacity for speed and calculation well beyond the reach of classical, or binary, computing. Instead of a binary system of zeroes and ones, they operate through superpositions, which may be zeroes, ones or both at the same time.
    The continuously evolving development of quantum computing has reached the point of demonstrating an advantage over classical computers for an artificial problem. It could have future applications in a wide range of areas. One promising class of problems involves the simulation of quantum systems, with potential applications such as developing materials for batteries, industrial catalysis and nitrogen fixation.
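    To make “the simulation of quantum systems” concrete, the snippet below evolves a tiny two-qubit spin model exactly on a classical computer. The state vector needs 2^n complex amplitudes for n qubits, so this brute-force approach is quickly overwhelmed as systems grow, which is the gap analogue and digital quantum simulators aim to fill. The model and parameters are arbitrary illustrations.

```python
# Illustrative only: exact classical simulation of a tiny quantum system,
# here a two-qubit transverse-field Ising model. The state vector has 2**n
# complex amplitudes, so this brute-force approach stops scaling long before
# system sizes of practical interest.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

J, h = 1.0, 0.5                                        # arbitrary couplings
H = -J * np.kron(sz, sz) - h * (np.kron(sx, I2) + np.kron(I2, sx))

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                           # start in |00>

for t in np.linspace(0.0, 2.0, 5):
    psi_t = expm(-1j * H * t) @ psi                    # exact time evolution
    probs = np.abs(psi_t) ** 2                         # measurement probabilities
    print(f"t={t:.1f}  P(|00>,|01>,|10>,|11>) = {np.round(probs, 3)}")
```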
    The paper, published in Nature, explores near- and medium-term possibilities for quantum simulation on analogue and digital platforms to help evaluate the potential of this area. It has been co-written by researchers from Strathclyde, the Max Planck Institute of Quantum Optics, Ludwig Maximilians University in Munich, Munich Center for Quantum Science and Technology, the University of Innsbruck, the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, and Microsoft Corporation.
    Professor Andrew Daley, of Strathclyde’s Department of Physics, is lead author of the paper. He said: “There has been a great deal of exciting progress in analogue and digital quantum simulation in recent years, and quantum simulation is one of the most promising fields of quantum information processing. It is already quite mature, both in terms of algorithm development, and in the availability of significantly advanced analogue quantum simulation experiments internationally.
    “In computing history, classical analogue and digital computing co-existed for more than half a century, with a gradual transition towards digital computing, and we expect the same thing to happen with the emergence of quantum simulation.”

  •

    How to help assembly-line robots shift gears and pick up almost anything

    At the beginning of the COVID-19 pandemic, car manufacturing companies such as Ford quickly shifted their production focus from automobiles to masks and ventilators.
    To make this switch possible, these companies relied on people working on an assembly line. It would have been too challenging for a robot to make this transition because robots are tied to their usual tasks.
    Theoretically, a robot could pick up almost anything if its grippers could be swapped out for each task. To keep costs down, these grippers could be passive, meaning the grippers pick up objects without changing shape, similar to how the tongs on a forklift work.
    A University of Washington team created a new tool that can design a 3D-printable passive gripper and calculate the best path to pick up an object. The team tested this system on a suite of 22 objects — including a 3D-printed bunny, a doorstop-shaped wedge, a tennis ball and a drill. The designed grippers and paths were successful for 20 of the objects. Two of these were the wedge and a pyramid shape with a curved keyhole. Both shapes are challenging for multiple types of grippers to pick up.
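    The UW tool co-designs the gripper geometry together with the insertion path, and that algorithm is not reproduced here. Purely as a loose illustration of the kind of geometric test such pipelines build on, the function below checks whether two candidate contact points on an object form an antipodal grasp, i.e. whether a rigid, non-actuated pair of contacts squeezing along the line between them could hold the part for a given friction coefficient. All points, normals and values are hypothetical.

```python
# Illustrative only -- a standard antipodal-grasp test, NOT the UW co-design
# algorithm. Given two candidate contact points on an object and their
# outward unit surface normals, check whether squeezing along the line
# between them keeps each contact force inside its friction cone.
import numpy as np

def is_antipodal(p1, n1, p2, n2, friction_coeff=0.5):
    """p1, p2: contact points; n1, n2: outward unit surface normals."""
    d = p2 - p1
    d = d / np.linalg.norm(d)               # line connecting the contacts
    half_angle = np.arctan(friction_coeff)  # friction cone half-angle
    # The contact at p1 is pushed along +d, the contact at p2 along -d;
    # each pushing direction must lie within the friction cone centred on
    # the inward normal (-n) at its contact.
    ok1 = np.arccos(np.clip(np.dot(d, -n1), -1.0, 1.0)) <= half_angle
    ok2 = np.arccos(np.clip(np.dot(-d, -n2), -1.0, 1.0)) <= half_angle
    return bool(ok1 and ok2)

# Two opposite faces of a block can be grasped...
print(is_antipodal(np.array([0.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
                   np.array([0.1, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))   # True
# ...while two points on the same face cannot.
print(is_antipodal(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                   np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))   # False
```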
    The team will present these findings Aug. 11 at SIGGRAPH 2022.
    “We still produce most of our items with assembly lines, which are really great but also very rigid. The pandemic showed us that we need to have a way to easily repurpose these production lines,” said senior author Adriana Schulz, a UW assistant professor in the Paul G. Allen School of Computer Science & Engineering. “Our idea is to create custom tooling for these manufacturing lines. That gives us a very simple robot that can do one task with a specific gripper. And then when I change the task, I just replace the gripper.”
    Passive grippers can’t adjust to fit the object they’re picking up, so traditionally, objects have been designed to match a specific gripper.

  •

    Teaching computers to predict efficient catalysis

    Researchers from Aarhus and Berlin have developed a new algorithm that can teach computers to predict how complex molecules will bind to the surface of catalysts. This is important when you have to produce synthetic fuels, for example. And it’s almost like playing extreme Tetris.
    Imagine a game of Tetris where you not only have to stack the pieces in three dimensions, but the pieces are also much more complicated than the seven geometric shapes you normally use in the game.
    In this case, the pieces are large and complex molecules that are to bind to another material in a chemical reaction.
    To make things even harder, both the molecules and the other material have several places on the surface where they can bind to each other — and it is crucial that the binding is neither too weak nor too strong.
    The binding has to be exactly right; otherwise, the other material cannot function as a catalyst (see fact box at the end of the text).
    Such an extreme game of Tetris perfectly illustrates the challenges that researchers all over the world encounter when working on developing new and better catalysts for a wide range of technical-chemical processes.
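    The algorithm developed by the Aarhus and Berlin researchers is not shown here. Purely as a generic sketch of the underlying idea, learning binding (adsorption) energies from known molecule-surface examples and then screening candidates for bindings that are neither too weak nor too strong, the snippet below fits an off-the-shelf regressor to hypothetical descriptor data; every descriptor, value and threshold is made up for illustration.

```python
# Generic sketch only -- not the algorithm from the study. Learn adsorption
# (binding) energies from example molecule/surface configurations, then
# screen new candidates for a "not too weak, not too strong" binding window.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical descriptors for molecule/binding-site pairs (e.g. coordination
# number, an electronic-structure feature, molecular size) and hypothetical
# adsorption energies in eV.
X_train = rng.normal(size=(200, 3))
y_train = 0.8 * X_train[:, 0] - 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Screen new candidate configurations and keep those predicted to bind
# within an (arbitrary) target window.
X_candidates = rng.normal(size=(1000, 3))
E_pred = model.predict(X_candidates)
window = (-0.6, -0.2)                       # illustrative target range (eV)
hits = np.where((E_pred > window[0]) & (E_pred < window[1]))[0]
print(f"{len(hits)} of {len(E_pred)} candidates fall in the target binding window")
```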

  •

    Engineers develop stickers that can see inside the body

    Ultrasound imaging is a safe and noninvasive window into the body’s workings, providing clinicians with live images of a patient’s internal organs. To capture these images, trained technicians manipulate ultrasound wands and probes to direct sound waves into the body. These waves reflect back out to produce high-resolution images of a patient’s heart, lungs, and other deep organs.
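    The image itself is reconstructed from the timing and strength of those reflections: sound travels through soft tissue at roughly 1,540 m/s, so the delay between emitting a pulse and receiving its echo fixes the depth of the reflecting boundary. A toy pulse-echo calculation, with illustrative echo times:

```python
# Toy pulse-echo calculation: depth of a reflecting tissue boundary from the
# round-trip time of an ultrasound pulse, using the conventional average
# speed of sound in soft tissue (~1540 m/s). Echo times are illustrative.
SPEED_OF_SOUND_TISSUE = 1540.0      # m/s, typical soft-tissue value

def depth_cm(echo_time_us):
    """Depth for a round-trip echo delay given in microseconds."""
    t = echo_time_us * 1e-6
    return 100.0 * SPEED_OF_SOUND_TISSUE * t / 2.0   # divide by 2: out and back

for t_us in (13, 65, 130):
    print(f"echo after {t_us:3d} us -> reflector at ~{depth_cm(t_us):.1f} cm")
# Roughly 1 cm, 5 cm and 10 cm; sweeping many such pulses across the
# transducer array builds up a cross-sectional image.
```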
    Currently, ultrasound imaging requires bulky and specialized equipment available only in hospitals and doctor’s offices. But a new design by MIT engineers might make the technology as wearable and accessible as buying Band-Aids at the pharmacy.
    In a paper appearing today in Science, the engineers present the design for a new ultrasound sticker — a stamp-sized device that sticks to skin and can provide continuous ultrasound imaging of internal organs for 48 hours.
    The researchers applied the stickers to volunteers and showed the devices produced live, high-resolution images of major blood vessels and deeper organs such as the heart, lungs, and stomach. The stickers maintained a strong adhesion and captured changes in underlying organs as volunteers performed various activities, including sitting, standing, jogging, and biking.
    The current design requires connecting the stickers to instruments that translate the reflected sound waves into images. The researchers point out that even in their current form, the stickers could have immediate applications: For instance, the devices could be applied to patients in the hospital, similar to heart-monitoring EKG stickers, and could continuously image internal organs without requiring a technician to hold a probe in place for long periods of time.
    If the devices can be made to operate wirelessly — a goal the team is currently working toward — the ultrasound stickers could be made into wearable imaging products that patients could take home from a doctor’s office or even buy at a pharmacy.