More stories

  • In DNA, scientists find solution to engineering transformative electronics

    Scientists at the University of Virginia School of Medicine and their collaborators have used DNA to overcome a nearly insurmountable obstacle in engineering materials that could revolutionize electronics.
    One possible outcome of such engineered materials could be superconductors, which have zero electrical resistance, allowing electrons to flow unimpeded. That means that they don’t lose energy and don’t create heat, unlike current means of electrical transmission. Development of a superconductor that could be used widely at room temperature — instead of only at extremely high pressures or extremely low temperatures, as is currently required — could lead to hyper-fast computers, shrink the size of electronic devices, allow high-speed trains to float on magnets and slash energy use, among other benefits.
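    The practical stakes of zero resistance are easy to quantify, since resistive losses grow with the square of the current. A minimal sketch (the numbers are hypothetical, not figures from the study):

```python
# Illustrative only: Joule heating in a conventional transmission line
# versus an ideal superconductor (R = 0). All figures are hypothetical.

def transmission_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat by a line carrying a given current: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# A copper line with 0.5 ohm total resistance carrying 100 A
copper_loss = transmission_loss_watts(100.0, 0.5)          # 5000 W lost as heat

# The same current through an ideal superconductor (zero resistance)
superconductor_loss = transmission_loss_watts(100.0, 0.0)  # 0 W

print(copper_loss, superconductor_loss)
```

    The quadratic dependence on current is why even modest resistance matters at grid scale, and why zero resistance would "slash energy use."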
    One such superconductor was first proposed more than 50 years ago by Stanford physicist William A. Little. Scientists have spent decades trying to make it work, but even after validating the feasibility of his idea, they were left with a challenge that appeared impossible to overcome. Until now.
    Edward H. Egelman, PhD, of UVA’s Department of Biochemistry and Molecular Genetics, has been a leader in the field of cryo-electron microscopy (cryo-EM), and he and Leticia Beltran, a graduate student in his lab, used cryo-EM imaging for this seemingly impossible project. “It demonstrates,” he said, “that the cryo-EM technique has great potential in materials research.”
    Engineering at the Atomic Level
    One possible way to realize Little’s idea for a superconductor is to modify lattices of carbon nanotubes, hollow cylinders of carbon so tiny they must be measured in nanometers — billionths of a meter. But there was a huge challenge: controlling chemical reactions along the nanotubes so that the lattice could be assembled as precisely as needed and function as intended.
    Egelman and his collaborators found an answer in the very building blocks of life. They took DNA, the genetic material that tells living cells how to operate, and used it to guide a chemical reaction that would overcome the great barrier to Little’s superconductor. In short, they used chemistry to perform astonishingly precise structural engineering — construction at the level of individual molecules. The result was a lattice of carbon nanotubes assembled as needed for Little’s room-temperature superconductor.
    “This work demonstrates that ordered carbon nanotube modification can be achieved by taking advantage of DNA-sequence control over the spacing between adjacent reaction sites,” Egelman said.
    The lattice they built has not yet been tested for superconductivity, but it offers proof of principle and has great potential for the future, the researchers say. “While cryo-EM has emerged as the main technique in biology for determining the atomic structures of protein assemblies, it has had much less impact thus far in materials science,” said Egelman, whose prior work led to his induction into the National Academy of Sciences, one of the highest honors a scientist can receive.
    Egelman and his colleagues say their DNA-guided approach to lattice construction could have a wide variety of useful research applications, especially in physics. But it also validates the possibility of building Little’s room-temperature superconductor. The scientists’ work, combined with other breakthroughs in superconductors in recent years, could ultimately transform technology as we know it and lead to a much more “Star Trek” future.
    “While we often think of biology using tools and techniques from physics, our work shows that the approaches being developed in biology can actually be applied to problems in physics and engineering,” Egelman said. “This is what is so exciting about science: not being able to predict where our work will lead.”
    The work was supported by the Department of Commerce’s National Institute of Standards and Technology and by National Institutes of Health grant GM122510, as well as by an NRC postdoctoral fellowship.

  • Bioscientists use mixed-reality headset, custom software to measure vegetation in the field

    Ecologists won’t always need expensive and bulky equipment to measure vegetation in the wild. Rice University scientists have discovered that a modern heads-up display works pretty well.
    Rice researchers set up a Microsoft HoloLens as a mixed-reality sensor to feed VegSense, their application for measuring understory vegetation, the plant life that grows between the forest canopy and floor.
    A proof-of-concept study by graduate student Daniel Gorczynski and bioscientist Lydia Beaudrot shows VegSense could be a suitable, low-cost alternative to traditional field measurements.
    Their study in Methods in Ecology and Evolution shows the hardware-software combination excels at quantifying relatively mature trees in the wild, which is one measure of a forest’s overall health.
    Gorczynski came up with the idea to try HoloLens, commonly marketed as a productivity tool for manufacturing, health care and education. He developed the open-source software for the device and noted that while the combination is less effective at picking up saplings and small branches, there’s ample room for improvement.
    Gorczynski said he was introduced to mixed-reality sensing while an undergraduate at Vanderbilt University and recognized its potential for biological studies. “It seemed sort of like a natural fit,” he said. Gorczynski brought the idea to Beaudrot in 2019 shortly after his arrival at Rice.

  • Using smartphones could help improve memory skills

    Using digital devices, such as smartphones, could help improve memory skills rather than causing people to become lazy or forgetful, finds a new study led by UCL researchers.
    The research, published in the Journal of Experimental Psychology: General, showed that digital devices help people to store and remember very important information. This, in turn, frees up their memory to recall additional, less important information.
    Neuroscientists have previously expressed concerns that the overuse of technology could result in the breakdown of cognitive abilities and cause “digital dementia.”
    However, the findings show that using a digital device as external memory not only helps people to remember the information saved into the device, but it also helps them to remember unsaved information too.
    To demonstrate this, researchers developed a memory task to be played on a touchscreen digital tablet or computer. The test was undertaken by 158 volunteers aged between 18 and 71.
    Participants were shown up to 12 numbered circles on the screen, and had to remember to drag some of these to the left and some to the right. The number of circles that they remembered to drag to the correct side determined their pay at the end of the experiment. One side was designated ‘high value’, meaning that remembering to drag a circle to this side was worth 10 times as much money as remembering to drag a circle to the other ‘low value’ side.
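    The scoring rule above can be sketched in a few lines; the point values here are hypothetical, chosen only to preserve the study's 10-to-1 ratio between the two sides:

```python
# Illustrative sketch of the reward structure described above. The payout
# units are hypothetical; only the 10x ratio comes from the study design.

LOW_VALUE = 1                # hypothetical payout per correct low-value circle
HIGH_VALUE = 10 * LOW_VALUE  # the 'high value' side is worth 10x as much

def trial_score(correct_high: int, correct_low: int) -> int:
    """Total reward for one trial, given counts of correctly dragged circles."""
    return correct_high * HIGH_VALUE + correct_low * LOW_VALUE

# A participant who remembers 3 high-value and 5 low-value circles
print(trial_score(3, 5))  # 35
```

    The asymmetric payoff is what makes offloading interesting: saving the high-value side to a device frees capacity for the low-value side.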

  • Advancing dynamic brain imaging with AI

    MRI, electroencephalography (EEG) and magnetoencephalography have long served as the tools to study brain activity, but new research from Carnegie Mellon University introduces a novel, AI-based dynamic brain imaging technology which could map out rapidly changing electrical activity in the brain with high speed, high resolution, and low cost. The advancement comes on the heels of more than thirty years of research that Bin He has undertaken, focused on ways to improve non-invasive dynamic brain imaging technology.
    Brain electrical activity is distributed over the three-dimensional brain and changes rapidly over time. Many efforts have been made to image brain function and dysfunction, and each method has its pros and cons. For example, MRI has commonly been used to study brain activity, but it is not fast enough to capture brain dynamics. EEG is a favorable alternative to MRI; however, its less-than-optimal spatial resolution has been a major hindrance to its wide utility in imaging.
    Electrophysiological source imaging has also been pursued, in which scalp EEG recordings are translated back to the brain using signal processing and machine learning to reconstruct dynamic pictures of brain activity over time. While EEG source imaging is generally cheaper and faster, specific training and expertise are needed for users to select and tune parameters for every recording. In newly published work, He and his group introduce a first-of-its-kind AI-based dynamic brain imaging methodology that has the potential to image the dynamics of neural circuits with precision and speed.
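    The underlying idea of source imaging can be illustrated with a toy linear model. This is not the authors' method (which uses trained neural networks and multi-scale brain models), and the "leadfield" numbers below are invented; it only shows the forward/inverse structure of the problem:

```python
# Toy source-imaging sketch (illustrative only): scalp EEG is modeled as a
# linear mixture of underlying brain sources, x = L @ s, and imaging means
# inverting that mapping. Here we solve a 2-sensor, 2-source system exactly;
# the paper's approach replaces this inversion with a trained neural network.

def solve_2x2(L, x):
    """Invert x = L s for a 2x2 toy model using Cramer's rule."""
    (a, b), (c, d) = L
    det = a * d - b * c
    s1 = (x[0] * d - b * x[1]) / det
    s2 = (a * x[1] - x[0] * c) / det
    return [s1, s2]

leadfield = [[0.8, 0.3],   # how strongly each source projects to sensor 1
             [0.2, 0.9]]   # ... and to sensor 2 (hypothetical values)
true_sources = [1.0, -0.5]

# Forward model: what the scalp sensors would record
eeg = [leadfield[0][0] * true_sources[0] + leadfield[0][1] * true_sources[1],
       leadfield[1][0] * true_sources[0] + leadfield[1][1] * true_sources[1]]

recovered = solve_2x2(leadfield, eeg)
print(recovered)  # ≈ [1.0, -0.5], the original source activity
```

    Real EEG has far more sources than sensors, so the inverse problem is ill-posed; that is why parameter tuning, or here a learned model, is needed.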
    “As part of a decades-long effort to develop innovative, non-invasive functional neuroimaging solutions, I have been working on a dynamic brain imaging technology that can provide precision, be effective and easy to use, to better serve clinicians and researchers,” said Bin He, professor of biomedical engineering at Carnegie Mellon University.
    He continues, “Our group is the first to reach the goal by introducing AI and multi-scale brain models. Using biophysically inspired neural networks, we are innovating this deep learning approach to train a neural network that can precisely translate scalp EEG signals back to neural circuit activity in the brain without human intervention.”
    In He’s study, which was recently published in Proceedings of the National Academy of Sciences (PNAS), the performance of this new approach was evaluated by imaging sensory and cognitive brain responses in 20 healthy human subjects. It was also rigorously validated in identifying epileptogenic tissue in a cohort of 20 drug-resistant epilepsy patients by comparing AI-based noninvasive imaging results with invasive measurements and surgical resection outcomes.
    In terms of both precision and computational efficiency, the novel AI approach outperformed conventional source imaging methods.
    “With this new approach, you only need a centralized location to perform brain modeling and train the deep neural network,” explained He. “After collecting data in a clinical or research setting, clinicians and researchers could remotely submit the data to the centralized, well-trained deep neural networks and quickly receive accurate analysis results. This technology could speed up diagnosis and assist neurologists and neurosurgeons in better and faster surgical planning.”
    As a next step, the group plans to conduct larger clinical trials in efforts to bring the research closer to clinical implementation.
    “The goal is for efficient and effective dynamic brain imaging with simple operation and low cost,” explained He. “This AI-based brain source imaging technology makes it possible.”
    Story Source:
    Materials provided by College of Engineering, Carnegie Mellon University. Original written by Sara Vaccar. Note: Content may be edited for style and length.

  • Engineers repurpose 19th-century photography technique to make stretchy, color-changing films

    Imagine stretching a piece of film to reveal a hidden message. Or checking an arm band’s color to gauge muscle mass. Or sporting a swimsuit that changes hue as you do laps. Such chameleon-like, color-shifting materials could be on the horizon, thanks to a photographic technique that’s been resurrected and repurposed by MIT engineers.
    By applying a 19th-century color photography technique to modern holographic materials, an MIT team has printed large-scale images onto elastic materials that when stretched can transform their color, reflecting different wavelengths as the material is strained.
    The researchers produced stretchy films printed with detailed flower bouquets that morph from warm to cooler shades when the films are stretched. They also printed films that reveal the imprint of objects such as a strawberry, a coin, and a fingerprint.
    The team’s results provide the first scalable manufacturing technique for producing detailed, large-scale materials with “structural color” — color that arises as a consequence of a material’s microscopic structure, rather than from chemical additives or dyes.
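    The physics behind structural color can be approximated with a simple Bragg-reflection model. This is an illustrative sketch with hypothetical layer spacings and refractive index, not the team's actual material parameters:

```python
# Illustrative Bragg-reflection model of structural color (numbers are
# hypothetical): a periodic multilayer reflects a wavelength proportional to
# its layer spacing, lambda ≈ 2 * n * d at normal incidence. Stretching the
# film thins the layers, shifting the reflected color toward shorter
# (cooler) wavelengths, as described above.

def reflected_wavelength_nm(spacing_nm: float, refractive_index: float = 1.5) -> float:
    """First-order Bragg reflection wavelength at normal incidence."""
    return 2 * refractive_index * spacing_nm

relaxed = reflected_wavelength_nm(210)    # 630 nm: red
stretched = reflected_wavelength_nm(160)  # 480 nm: blue
print(relaxed, stretched)
```

    The color change requires no dye at all; it is purely a consequence of the nanoscale geometry, which is why controlling that geometry at scale was the hard part.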
    “Scaling these materials is not trivial, because you need to control these structures at the nanoscale,” says Benjamin Miller, a graduate student in MIT’s Department of Mechanical Engineering. “Now that we’ve cleared this scaling hurdle, we can explore questions like: Can we use this material to make robotic skin that has a human-like sense of touch? And can we create touch-sensing devices for things like virtual and augmented reality or medical training? It’s a big space we’re looking at now.”
    The team’s results appear today in Nature Materials. Miller’s co-authors are MIT undergraduate Helen Liu, and Mathias Kolle, associate professor of mechanical engineering at MIT.

  • Quantum control for advanced technology: Past and present

    One of the cornerstones of the implementation of quantum technology is the creation and manipulation of the shape of external fields that can optimise the performance of quantum devices. Known as quantum optimal control, this set of methods comprises a field that has rapidly evolved and expanded over recent years.
    A new review paper published in EPJ Quantum Technology, authored by Christiane P. Koch of the Dahlem Center for Complex Quantum Systems and Fachbereich Physik, Freie Universität Berlin, along with colleagues from across Europe, assesses recent progress in the understanding of the controllability of quantum systems as well as the application of quantum control to quantum technologies. As such, it lays out a potential roadmap for future technology.
    While quantum optimal control builds on conventional control theory encompassing the interface of applied mathematics, engineering, and physics, it must also factor in the quirks and counter-intuitive nature of quantum physics.
    This includes superposition, the concept that a quantum system can exist in multiple states at one time, one of the keys to the advanced computing power of machines that rely on quantum bits — or qubits.
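    Superposition is easy to illustrate with a single simulated qubit. This sketch assumes nothing about the devices discussed in the review; it just shows the textbook behavior of the Hadamard gate:

```python
import math

# A minimal illustration of superposition: a single-qubit state is a pair of
# amplitudes [a, b] over the basis states |0> and |1>, and the Hadamard gate
# turns a definite |0> into an equal superposition of both.

def hadamard(state):
    """Apply the Hadamard gate H to a single-qubit state [a, b]."""
    a, b = state
    inv_sqrt2 = 1 / math.sqrt(2)
    return [inv_sqrt2 * (a + b), inv_sqrt2 * (a - b)]

ket0 = [1.0, 0.0]            # the qubit definitely in state |0>
superposed = hadamard(ket0)  # amplitudes (1/sqrt 2, 1/sqrt 2)

# Measurement probabilities are the squared magnitudes of the amplitudes
probs = [abs(amp) ** 2 for amp in superposed]
print(probs)  # ≈ [0.5, 0.5]: equally likely to be measured as 0 or 1
```

    Optimal control, in these terms, is about shaping the external fields that implement such gates so the intended state is reached as accurately and quickly as the hardware allows.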
    Ultimately, the main goal of quantum optimal control is to make emerging quantum technologies operate at their optimal performance and reach their physical limits.
    “Each device architecture comes with specific limits but these limits are often not attained by more traditional ways to operate the device,” Koch says. “Using pulse shaping may push the devices to the limits in terms of accuracy or operation speed that is fundamentally possible.”
    The authors of this review consider factors in the discipline, including the extent to which a quantum system can be established, controlled and observed without causing its superposition to collapse, a collapse that seriously impedes the stability of quantum computers.
    The review also suggests that just as conventional engineers have a control theoretical framework to rely on, the training of future “quantum engineers” may require a similar framework which is yet to be developed.
    A quantum system that unifies theory and experiment is one of the current research goals of the field with the authors pointing out that this will also form the basis for the development of optimal control strategies.
    As well as assessing the recent progress towards this goal, the team lays out some of the roadblocks that may lie ahead for the field, roadblocks that will need to be overcome if a quantum technological future is to be realized.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • Fiddler crab eye view inspires researchers to develop novel artificial vision

    Artificial vision systems find a wide range of applications, including self-driving cars, object detection, crop monitoring, and smart cameras. Such systems are often inspired by the vision of biological organisms. For instance, human and insect vision have inspired terrestrial artificial vision, while fish eyes have led to aquatic artificial vision. While the progress is remarkable, current artificial vision systems suffer from some limitations: they are not suitable for imaging both land and underwater environments, and they are limited to a hemispherical (180°) field-of-view (FOV).
    To overcome these issues, a group of researchers from Korea and the USA, including Professor Young Min Song from the Gwangju Institute of Science and Technology in Korea, has now designed a novel artificial vision system with an omnidirectional imaging ability, which can work in both aquatic and terrestrial environments. Their study was made available online on 12 July 2022 and published in Nature Electronics on 11 July 2022.
    “Research in bio-inspired vision often results in a novel development that did not exist before. This, in turn, enables a deeper understanding of nature and ensures that the developed imaging device is both structurally and functionally effective,” says Prof. Song, explaining his motivation behind the study.
    The inspiration for the system came from the fiddler crab (Uca arcuata), a semiterrestrial crab species with amphibious imaging ability and a 360° FOV. These remarkable features result from the ellipsoidal eye stalk of the fiddler crab’s compound eyes, enabling panoramic imaging, and flat corneas with a graded refractive index profile, allowing for amphibious imaging.
    Accordingly, the researchers developed a vision system consisting of an array of flat micro-lenses with a graded refractive index profile that was integrated into a flexible comb-shaped silicon photodiode array and then mounted onto a spherical structure. The graded refractive index and the flat surface of the micro-lens were optimized to offset the defocusing effects due to changes in the external environment. Put simply, light rays traveling in different mediums (corresponding to different refractive indices) were made to focus at the same spot.
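    Why a fixed lens defocuses between air and water follows directly from Snell's law. The sketch below uses hypothetical refractive indices and angles, not the device's actual design parameters:

```python
import math

# Illustrative Snell's-law calculation (numbers hypothetical): the same lens
# surface bends an incoming ray differently depending on whether the outside
# medium is air (n = 1.00) or water (n = 1.33). That difference is why an
# ordinary fixed-focus lens defocuses when submerged, and what the graded-
# index flat micro-lenses described above are designed to cancel out.

def refracted_angle_deg(incident_deg, n_outside, n_lens=1.5):
    """Ray angle inside the lens for a given incidence angle (Snell's law)."""
    s = n_outside * math.sin(math.radians(incident_deg)) / n_lens
    return math.degrees(math.asin(s))

in_air = refracted_angle_deg(30.0, 1.00)    # ray bends strongly at the surface
in_water = refracted_angle_deg(30.0, 1.33)  # ray bends less: a different focus
print(in_air, in_water)
```

    Because the two media give different refraction angles for the same ray, focusing "at the same spot" in both requires grading the index inside the lens itself.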
    To test the capabilities of their system, the team performed optical simulations and imaging demonstrations in air and water. Amphibious imaging was performed by immersing the device halfway in water. To their delight, the images produced by the system were clear and free of distortions. The team further showed that the system had a panoramic visual field, 300° horizontally and 160° vertically, in both air and water. Additionally, the spherical mount was only 2 cm in diameter, making the system compact and portable.
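    As a rough geometric check (assuming the reported field of view is an azimuthal band spanning 300° around and ±80° in elevation, which is an assumption for illustration, not a detail from the paper), the fraction of the full sphere covered works out to roughly 82 percent:

```python
import math

# Back-of-the-envelope solid-angle estimate: treat the field of view as an
# azimuthal band centered on the horizon. The band's solid angle is
# (azimuth in radians) * 2 * sin(half the elevation span), and the full
# sphere is 4*pi steradians. Band geometry is an illustrative assumption.

def band_solid_angle_sr(azimuth_deg: float, elevation_deg: float) -> float:
    """Solid angle of an azimuthal band symmetric about the horizon."""
    half_el = math.radians(elevation_deg / 2)
    return math.radians(azimuth_deg) * 2 * math.sin(half_el)

coverage = band_solid_angle_sr(300, 160) / (4 * math.pi)
print(f"{coverage:.0%}")  # roughly 82% of the full sphere
```

    A hemispherical (180°) camera, by contrast, covers exactly half the sphere, which puts the "omnidirectional" claim in perspective.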
    “Our vision system could pave the way for 360° omnidirectional cameras with applications in virtual or augmented reality or an all-weather vision for autonomous vehicles,” speculates Prof. Song excitedly.
    Story Source:
    Materials provided by GIST (Gwangju Institute of Science and Technology). Note: Content may be edited for style and length.

  • A roadmap for the future of quantum simulation

    A roadmap for the future direction of quantum simulation has been set out in a paper co-authored at the University of Strathclyde.
    Quantum computers are hugely powerful devices with a capacity for speed and calculation well beyond the reach of classical, or binary, computing. Instead of a binary system of zeroes and ones, they operate through superpositions, which may be zeroes, ones or both at the same time.
    The continuously evolving development of quantum computing has reached the point of having an advantage over classical computers, albeit for an artificial problem. It could have future applications in a wide range of areas. One promising class of problems involves the simulation of quantum systems, with potential applications such as developing materials for batteries, industrial catalysis and nitrogen fixation.
    The paper, published in Nature, explores near- and medium-term possibilities for quantum simulation on analogue and digital platforms to help evaluate the potential of this area. It has been co-written by researchers from Strathclyde, the Max Planck Institute of Quantum Optics, Ludwig Maximilians University in Munich, Munich Center for Quantum Science and Technology, the University of Innsbruck, the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, and Microsoft Corporation.
    Professor Andrew Daley, of Strathclyde’s Department of Physics, is lead author of the paper. He said: “There has been a great deal of exciting progress in analogue and digital quantum simulation in recent years, and quantum simulation is one of the most promising fields of quantum information processing. It is already quite mature, both in terms of algorithm development, and in the availability of significantly advanced analogue quantum simulation experiments internationally.
    “In computing history, classical analogue and digital computing co-existed for more than half a century, with a gradual transition towards digital computing, and we expect the same thing to happen with the emergence of quantum simulation.”