More stories

  • Scientists solve chemical mystery at the interface of biology and technology

    Researchers who want to bridge the divide between biology and technology spend a lot of time thinking about translating between the two different “languages” of those realms.
    “Our digital technology operates through a series of electronic on-off switches that control the flow of current and voltage,” said Rajiv Giridharagopal, a research scientist at the University of Washington. “But our bodies operate on chemistry. In our brains, neurons propagate signals electrochemically, by moving ions — charged atoms or molecules — not electrons.”
    Implantable devices from pacemakers to glucose monitors rely on components that can speak both languages and bridge that gap. Among those components are OECTs — or organic electrochemical transistors — which allow current to flow in devices like implantable biosensors. But scientists have long known about a quirk of OECTs that no one could explain: When an OECT is switched on, there is a lag before current reaches the desired operational level. When switched off, there is no lag. Current drops almost immediately.
    A UW-led study has solved this lagging mystery, and in the process paved the way to custom-tailored OECTs for a growing list of applications in biosensing, brain-inspired computation and beyond.
    “How fast you can switch a transistor is important for almost any application,” said project leader David Ginger, a UW professor of chemistry, chief scientist at the UW Clean Energy Institute and faculty member in the UW Molecular Engineering and Sciences Institute. “Scientists have recognized the unusual switching behavior of OECTs, but we never knew its cause — until now.”
    In a paper published April 17 in Nature Materials, Ginger’s team at the UW — along with Professor Christine Luscombe at the Okinawa Institute of Science and Technology in Japan and Professor Chang-Zhi Li at Zhejiang University in China — reports that OECTs turn on via a two-step process, which causes the lag. The devices appear to turn off through a simpler one-step process.
    In principle, OECTs operate like the transistors in conventional electronics: When switched on, they allow electrical current to flow; when switched off, they block it. But OECTs work by coupling the flow of ions with the flow of electrons, which makes them an attractive route for interfacing electronics with chemistry and biology.

    The new study illuminates the two steps OECTs go through when switched on. First, a wavefront of ions races across the transistor. Then, more charge-bearing particles invade the transistor’s flexible structure, causing it to swell slightly and bringing current up to operational levels. In contrast, the team discovered that deactivation is a one-step process: Levels of charged chemicals simply drop uniformly across the transistor, quickly interrupting the flow of current.
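    As a rough illustration of this asymmetry, the switching can be caricatured as two overlapping processes on turn-on and a single decay on turn-off. The sketch below is a toy model with invented time constants, not the paper's analysis:

    ```python
    # Toy model (not from the paper) of the asymmetric OECT switching described
    # above: turn-on combines a fast ion wavefront with slower swelling-assisted
    # ion uptake; turn-off is a single, near-uniform de-doping step.
    # All time constants and the 50/50 split are illustrative assumptions.
    import numpy as np

    def switching_transients(t, i_max=1.0, tau_front=0.01, tau_swell=0.2, tau_off=0.02):
        """Normalized drain current vs. time t (seconds) after switching."""
        fast = 1.0 - np.exp(-t / tau_front)   # step 1: ion wavefront races across
        slow = 1.0 - np.exp(-t / tau_swell)   # step 2: swelling lets more ions in
        i_on = i_max * (0.5 * fast + 0.5 * slow)  # the slow term causes the lag
        i_off = i_max * np.exp(-t / tau_off)      # one-step turn-off: fast drop
        return i_on, i_off

    t = np.linspace(0.0, 1.0, 1000)
    i_on, i_off = switching_transients(t)
    # i_off collapses within ~0.1 s, while i_on is still climbing toward i_max.
    ```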
    Knowing the lag’s cause should help scientists design new generations of OECTs for a wider set of applications.
    “There’s always been this drive in technology development to make components faster, more reliable and more efficient,” Ginger said. “Yet, the ‘rules’ for how OECTs behave haven’t been well understood. A driving force in this work is to learn them and apply them to future research and development efforts.”
    Whether they reside within devices to measure blood glucose or brain activity, OECTs are largely made up of flexible, organic semiconducting polymers — repeating units of complex, carbon-rich compounds — and operate immersed in liquids containing salts and other chemicals. For this project, the team studied OECTs that change color in response to electrical charge. The polymer materials were synthesized by Luscombe’s team at the Okinawa Institute of Science and Technology and Li’s at Zhejiang University, and then fabricated into transistors by UW doctoral students Jiajie Guo and Shinya “Emerson” Chen, who are co-lead authors on the paper.
    “A challenge in the materials design for OECTs lies in creating a substance that facilitates effective ion transport and retains electronic conductivity,” said Luscombe, who is also a UW affiliate professor of chemistry and of materials science and engineering. “The ion transport requires a flexible material, whereas ensuring high electronic conductivity typically necessitates a more rigid structure, posing a dilemma in the development of such materials.”
    Guo and Chen observed under a microscope — and recorded with a smartphone camera — precisely what happens when the custom-built OECTs are switched on and off. The recordings showed clearly that a two-step chemical process lies at the heart of the OECT activation lag.

    Past research, including by Ginger’s group at the UW, demonstrated that polymer structure, especially its flexibility, is important to how OECTs function. These devices operate in fluid-filled environments containing chemical salts and other biological compounds — charge carriers that are bulky compared to the electrons underpinning our digital devices.
    The new study goes further by more directly linking OECT structure and performance. The team found that the degree of activation lag should vary based on what material the OECT is made of, such as whether its polymers are more ordered or more randomly arranged, according to Giridharagopal. Future research could explore how to reduce or lengthen the lag times, which for OECTs in the current study were fractions of a second.
    “Depending on the type of device you’re trying to build, you could tailor composition, fluid, salts, charge carriers and other parameters to suit your needs,” said Giridharagopal.
    OECTs aren’t just used in biosensing. They are also used to study nerve impulses in muscles, and in brain-inspired computing to create artificial neural networks and to understand how our brains store and retrieve information. These widely divergent applications call for new generations of OECTs with specialized features, including tailored ramp-up and ramp-down times, according to Ginger.
    “Now that we’re learning the steps needed to realize those applications, development can really accelerate,” said Ginger.
    Guo is now a postdoctoral researcher at the Lawrence Berkeley National Laboratory and Chen is now a scientist at Analog Devices. Other co-authors on the paper are Connor Bischak, a former UW postdoctoral researcher in chemistry who is now an assistant professor at the University of Utah; Jonathan Onorato, a UW doctoral alum and scientist at Exponent; and Kangrong Yan and Ziqui Shen of Zhejiang University. The research was funded by the U.S. National Science Foundation, and the polymers developed at Zhejiang University were funded by the National Science Foundation of China.

  • Machine listening: Making speech recognition systems more inclusive

    Interactions with voice technology, such as Amazon’s Alexa, Apple’s Siri, and Google Assistant, can make life easier by increasing efficiency and productivity. However, errors in generating and understanding speech during interactions are common. When using these devices, speakers often style-shift their speech from their normal patterns into a louder and slower register, called technology-directed speech.
    Research on technology-directed speech typically focuses on mainstream varieties of U.S. English without considering speaker groups that are more consistently misunderstood by technology. In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from Google Research, the University of California, Davis, and Stanford University wanted to address this gap.
    One group commonly misunderstood by voice technology is individuals who speak African American English, or AAE. Because automatic speech recognition error rates can be higher for AAE speakers, the technology risks downstream effects of linguistic discrimination.
    “Across all automatic speech recognition systems, four out of every ten words spoken by Black men were being transcribed incorrectly,” said co-author Zion Mengesha. “This affects fairness for African American English speakers in every institution using voice technology, including health care and employment.”
    “We saw an opportunity to better understand this problem by talking to Black users and understanding their emotional, behavioral, and linguistic responses when engaging with voice technology,” said co-author Courtney Heldreth.
    The team designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study tested familiar human, unfamiliar human, and voice assistant-directed speech conditions by comparing speech rate and pitch variation. Study participants included 19 adults identifying as Black or African American who had experienced issues with voice technology. Each participant asked a series of questions to a voice assistant. The same questions were repeated as if speaking to a familiar person and, again, to a stranger. Each question was recorded for a total of 153 recordings.
    Analysis of the recordings showed that the speakers exhibited two consistent adjustments when they were talking to voice technology compared to talking to another person: a slower rate of speech and less pitch variation (more monotone speech).
    “These findings suggest that people have mental models of how to talk to technology,” said co-author Michelle Cohn. “A set ‘mode’ that they engage to be better understood, in light of disparities in speech recognition systems.”
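    For readers curious how such measures are computed, the sketch below estimates pitch variation and a rough speaking-rate proxy from a single recording with the librosa library. It is an illustrative stand-in, not the authors' analysis pipeline, and the peak-picking parameters are assumptions:

    ```python
    # Illustrative sketch (not the study's pipeline): estimate pitch variation
    # and a crude speaking-rate proxy from a mono speech recording.
    import numpy as np
    import librosa

    def describe_speech(path, fmin=65.0, fmax=300.0):
        y, sr = librosa.load(path, sr=16000)  # mono audio at 16 kHz

        # F0 track via probabilistic YIN; unvoiced frames come back as NaN.
        f0, _, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
        f0 = f0[~np.isnan(f0)]
        # Pitch variation: std of F0 over voiced frames (lower = more monotone).
        pitch_sd = float(np.std(f0)) if f0.size else 0.0

        # Speaking-rate proxy: syllable-like energy peaks per second.
        # (Published work typically uses forced alignment or manual counts.)
        env = librosa.onset.onset_strength(y=y, sr=sr)
        peaks = librosa.util.peak_pick(env, pre_max=3, post_max=3, pre_avg=3,
                                       post_avg=5, delta=0.5, wait=10)
        rate = len(peaks) / (len(y) / sr)
        return {"pitch_sd_hz": pitch_sd, "peaks_per_second": rate}
    ```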
    There are other groups misunderstood by voice technology, such as second-language speakers. The researchers hope to expand the language varieties explored in human-computer interaction experiments and address barriers in technology so that it can support everyone who wants to use it.

  • New technology makes 3D microscopes easier to use, less expensive to manufacture

    Researchers in Purdue University’s College of Engineering are developing patented and patent-pending innovations that make 3D microscopes faster to operate and less expensive to manufacture.
    Traditional, large depth-of-field 3D microscopes are used across academia and industry, with applications ranging from the life sciences to quality control in semiconductor manufacturing. Song Zhang, professor in Purdue’s School of Mechanical Engineering, said such microscopes are too slow at capturing 3D images and too expensive to build because they require a high-precision translation stage.
    “Such drawbacks in a microscope slow the measurement process, making it difficult to use for applications that require high speeds, such as in situ quality control,” Zhang said.
    Research about the Purdue 3D microscope and its innovations has been published in the peer-reviewed journal Optics Letters and in the August 2023 and March 2024 issues of the peer-reviewed journal Optics and Lasers in Engineering. The research was funded by a National Science Foundation grant.
    The Purdue innovation
    Zhang said the Purdue 3D microscope automatically completes three steps: focusing in on an object, determining the optimal capture process and creating a high-quality 3D image for the end user.
    “In contrast, a traditional microscope requires users to carefully follow instructions provided by the manufacturer to perform a high-quality capture,” Zhang said.

    Zhang and his colleagues use an electronically tunable lens, or ETL, that changes the focal plane of the imaging system without moving parts. He said using the lens makes the 3D microscope easier to use and less expensive to build.
    “Our suite of patents covers methods on how to calibrate the ETL, how to create all-in-focus 3D images quickly and how to speed up the data acquisition process by leveraging the ETL hardware information,” Zhang said. “The end result is the same as a traditional microscope: 3D surface images of a scene. Ours is different because of its high speed and relatively low cost.”
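    The patented methods themselves are not spelled out in the article, but the all-in-focus step can be illustrated with a standard technique called focus stacking: capture frames at several focal settings (here, ETL states) and keep each pixel from its sharpest frame. A minimal sketch, assuming an already-aligned grayscale stack:

    ```python
    # Generic focus-stacking sketch (a textbook technique, not the patented
    # Purdue method): fuse a stack of frames taken at different focal planes
    # into one all-in-focus image, using per-pixel sharpness to pick sources.
    import cv2
    import numpy as np

    def all_in_focus(stack):
        """stack: list of aligned HxW uint8 frames, one per ETL focal setting."""
        # Per-pixel sharpness: |Laplacian|, blurred so lone noisy pixels don't win.
        sharpness = np.stack([
            cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_64F)), (9, 9), 0)
            for img in stack
        ])
        best = np.argmax(sharpness, axis=0)        # sharpest frame index per pixel
        frames = np.stack(stack)
        rows, cols = np.indices(best.shape)
        fused = frames[best, rows, cols]           # composite all-in-focus image
        # 'best' doubles as a coarse depth map, since each frame index maps to a
        # known focal distance of the tunable lens.
        return fused, best
    ```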
    The next developmental steps
    Zhang and his team have developed algorithms and created a prototype system in their lab. They are looking to translate their research into a commercial product.
    “This will require an industrial partner,” Zhang said. “We are certainly interested in helping this process, including sharing our know-how and research results to make the transition smooth.”
    Zhang disclosed the innovations to the Purdue Innovates Office of Technology Commercialization, which has applied for and received patents to protect the multiple pieces of intellectual property.

  • Trotting robots reveal emergence of animal gait transitions

    With the help of a form of machine learning called deep reinforcement learning (DRL), a quadruped robot at EPFL learned to transition from trotting to pronking — a leaping, arch-backed gait used by animals like springbok and gazelles — to navigate challenging terrain with gaps ranging from 14 to 30 cm. The study, led by the BioRobotics Laboratory in EPFL’s School of Engineering, offers new insights into why and how such gait transitions occur in animals.
    “Previous research has introduced energy efficiency and musculoskeletal injury avoidance as the two main explanations for gait transitions. More recently, biologists have argued that stability on flat terrain could be more important. But animal and robotic experiments have shown that these hypotheses are not always valid, especially on uneven ground,” says PhD student Milad Shafiee, first author on a paper published in Nature Communications.
    Shafiee and co-authors Guillaume Bellegarda and BioRobotics Lab head Auke Ijspeert were therefore interested in a new hypothesis for why gait transitions occur: viability, or fall avoidance. To test this hypothesis, they used DRL to train a quadruped robot to cross various terrains. On flat terrain, they found that different gaits showed different levels of robustness against random pushes, and that the robot switched from a walk to a trot to maintain viability, just as quadruped animals do when they accelerate. And when confronted with successive gaps in the experimental surface, the robot spontaneously switched from trotting to pronking to avoid falls. Moreover, viability was the only factor that was improved by such gait transitions.
    “We showed that on flat terrain and challenging discrete terrain, viability leads to the emergence of gait transitions, but that energy efficiency is not necessarily improved,” Shafiee explains. “It seems that energy efficiency, which was previously thought to be a driver of such transitions, may be more of a consequence. When an animal is navigating challenging terrain, it’s likely that its first priority is not falling, followed by energy efficiency.”
    A bio-inspired learning architecture
    To model locomotion control in their robot, the researchers considered the three interacting elements that drive animal movement: the brain, the spinal cord, and sensory feedback from the body. They used DRL to train a neural network to imitate the spinal cord’s transmission of brain signals to the body as the robot crossed an experimental terrain. Then, the team assigned different weights to three possible learning goals: energy efficiency, force reduction, and viability. A series of computer simulations revealed that of these three goals, viability was the only one that prompted the robot to automatically — without instruction from the scientists — change its gait.
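    As a flavor of how such weighted goals can be encoded, the sketch below shows one plausible per-step reward mixing the three objectives. The state and action fields, weights, and penalty values are illustrative assumptions, not taken from the paper:

    ```python
    # Hypothetical DRL reward combining the paper's three learning goals:
    # energy efficiency, force reduction, and viability (not falling).
    import numpy as np

    def step_reward(state, action, w_energy=0.1, w_force=0.1, w_viability=1.0):
        # Energy efficiency: penalize mechanical power |torque * joint velocity|.
        energy_cost = float(np.sum(np.abs(action.torques * state.joint_velocities)))
        # Force reduction: penalize peak foot-contact force (an injury proxy).
        force_cost = float(np.max(state.contact_forces))
        # Viability: large penalty for falling, small bonus for staying upright.
        viability = -100.0 if state.has_fallen else 1.0
        return w_viability * viability - w_energy * energy_cost - w_force * force_cost
    ```

    Re-weighting terms like these is what let the team test which objective actually triggers gait changes; per the study, only the viability term did.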
    The team emphasizes that these observations represent the first learning-based locomotion framework in which gait transitions emerge spontaneously during the learning process, as well as the most dynamic crossing of such large consecutive gaps for a quadrupedal robot.
    “Our bio-inspired learning architecture demonstrated state-of-the-art quadruped robot agility on the challenging terrain,” Shafiee says.
    The researchers aim to expand on their work with additional experiments that place different types of robots in a wider variety of challenging environments. In addition to further elucidating animal locomotion, they hope that their work will ultimately enable the more widespread use of robots in biological research, reducing reliance on animal models and the associated ethical concerns.

  • Scientists harness the wind as a tool to move objects

    Researchers have developed a technique to move objects around with a jet of wind. The new approach makes it possible to manipulate objects at a distance and could be integrated into robots to give machines ethereal fingers.
    ‘Airflow or wind is everywhere in our living environment, moving around objects like pollen, pathogens, droplets, seeds and leaves. Wind has also been actively used in industry and in our everyday lives — for example, in leaf blowers to clean leaves. But so far, we can’t control the direction the leaves move — we can only blow them together into a pile,’ says Professor Quan Zhou from Aalto University, who led the study.
    The first step in manipulating objects with wind is understanding how objects move in the airflow. To that end, a research team at Aalto University recorded thousands of sample movements in an artificially generated airflow and used these to build templates of how objects move on a surface in a jet of air.
    The team’s analysis showed that even though the airflow is generally chaotic, it’s still regular enough to move objects in a controlled way in different directions — even back towards the nozzle blowing out the air.
    ‘We designed an algorithm that controls the direction of the air nozzle with two motors. The jet of air is blown onto the surface from several meters away and to the side of the object, so the generated airflow field moves the object in the desired direction. The control algorithm repeatedly adjusts the direction of the air nozzle so that the airflow moves the objects along the desired trajectory,’ explains Zhou.
    ‘Our observations allowed us to use airflow to move objects along different paths, like circles or even complex letter-like paths. Our method is versatile in terms of the object’s shape and material — we can control the movement of objects of almost any shape,’ he continues.
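    A minimal sketch of the kind of feedback loop Zhou describes, with assumed geometry (the nozzle aimed from the side, a camera tracking the object) rather than the Aalto implementation:

    ```python
    # Hypothetical closed-loop controller: re-aim a pan/tilt air nozzle each
    # cycle so the jet pushes the tracked object toward the next waypoint.
    # All geometry and the standoff offset are illustrative assumptions.
    import numpy as np

    def aim_nozzle(obj_pos, target_pos, nozzle_pos, standoff=0.2):
        """obj_pos, target_pos: 2D points on the surface (m);
        nozzle_pos: (x, y, height) of the nozzle; returns (pan, tilt) or None."""
        direction = target_pos - obj_pos
        dist = np.linalg.norm(direction)
        if dist < 0.01:
            return None                       # waypoint reached
        direction /= dist
        aim = obj_pos - standoff * direction  # aim behind the object to push it on
        rel = aim - nozzle_pos[:2]
        pan = np.arctan2(rel[1], rel[0])                       # horizontal angle
        tilt = np.arctan2(np.linalg.norm(rel), nozzle_pos[2])  # down-angle to surface
        return pan, tilt

    # Loop: track the object with a camera, call aim_nozzle, drive the two
    # motors, and advance to the next waypoint once the object is close enough.
    ```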
    The technology still needs to be refined, but the researchers are optimistic about the untapped potential of their nature-inspired approach. It could be used to collect items that are scattered on a surface, such as pushing debris and waste to collection points. It could also be useful in complex processing tasks where physical contact is impossible, such as handling electrical circuits.
    ‘We believe that this technique could get even better with a deeper understanding of the characteristics of the airflow field, which is what we’re working on next,’ says Zhou.

  • Researchers develop a new way to instruct dance in Virtual Reality

    Researchers at Aalto University were looking for better ways to instruct dance choreography in virtual reality. The new WAVE technique they developed will be presented in May at the CHI conference, a major venue for human-computer interaction research.
    Previous techniques have largely relied on pre-rehearsal and simplification.
    ‘In virtual reality, it is difficult to visualise and communicate how a dancer should move. The human body is so multi-dimensional, and it is difficult to take in rich data in real time,’ says Professor Perttu Hämäläinen.
    The researchers started by experimenting with visualisation techniques familiar from previous dance games. But after several prototypes and stages, they decided to try out the audience wave, familiar from sporting events, to guide the dance.
    ‘The wave-like movement of the model dancers allows you to see in advance what kind of movement is coming next. And you don’t have to rehearse the movement beforehand,’ says PhD researcher Markus Laattala.
    In general, one cannot follow a new choreography in real time because of the delay in human perceptual-motor control. The WAVE technique developed by the researchers, by contrast, is based on anticipating future movement, such as a turn.
    ‘No one had figured out how to guide a continuous, fluid movement like contemporary dance. In the choreography we implemented, making a wave is communication, a kind of micro-canon in which the model dancers follow the same choreography with a split-second delay,’ says Hämäläinen.
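    Conceptually, the micro-canon amounts to playing one choreography with a small per-dancer time offset. A minimal sketch of the idea (the project's actual open-source implementation is linked at the end of this story):

    ```python
    # Micro-canon sketch: all model dancers perform the same choreography,
    # each offset by a split-second lead, so upcoming movement ripples toward
    # the user like a stadium wave. The 0.2 s lead is an assumed value.

    def dancer_pose(choreography, t, dancer_index, lead=0.2):
        """Pose of one model dancer at time t.

        choreography: function mapping time (s) -> full-body pose
        dancer_index: 0 dances 'in the present'; higher indices preview
                      the future, showing the user what comes next.
        """
        return choreography(t + dancer_index * lead)

    # Each rendered frame: draw dancer i at dancer_pose(choreo, now, i).
    ```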

    From tai chi to exaggerated movements
    A total of 36 people took part in the one-minute dance test, comparing the new WAVE visualisation to a traditional virtual version in which there was only one model dancer to follow. The differences between the techniques were clear.
    ‘This implementation is at least suitable for slow-paced dance styles. The dancer can just jump in and start dancing without having to learn anything beforehand. However, in faster movements the visuals can get confusing, and further research and development is needed to adapt and test the approach with more dance styles,’ says Hämäläinen.
    In addition to virtual dance games, the new technique may be applicable to music videos, karaoke, and tai chi.
    ‘It would be optimal for the user if they could decide how to position the model dancers in a way that suits them. And if the idea were taken further, several dancers could send each other moves in social virtual reality. It could become a whole new way of dancing together’, says Laattala.
    ‘Current mainstream VR devices only track the movement of the headset and handheld controllers. On the other hand, machine learning can sometimes be used to infer from that data how the legs move,’ says Hämäläinen.

    ‘But in dance, inference is more difficult because the movements are stranger than, for example, walking,’ adds Laattala.
    Alternatively, if there is a mirror in the real dance space, machine vision can be used to follow the movement of the feet, and the dancer’s view could be modified using a virtual mirror.
    ‘A dancer’s virtual performance can be improved by exaggeration, for example by increasing flexibility, height of the jumps, or hip movement. This can make them feel that they are more skilled than they are, which research shows has a positive impact on physical activity motivation,’ says Hämäläinen.
    The virtual dance game was developed using the Magics infrastructure’s motion-capture kit, in which the model dancer wears a sensor-equipped suit; these sensors were used to record the dance animation.
    The WAVE dance game can be downloaded for Meta Quest 2 and 3 VR devices here: https://github.com/CarouselDancing/WAVE. The GitHub repository also includes the open-source code, which anyone can use to develop the game further.
    Reference:
    Laattala, M., Piitulainen, R., Ady, N., Tamariz, M., & Hämäläinen, P. (2024). Anticipatory Movement Visualization for VR Dancing. ACM SIGCHI Annual Conference on Human Factors in Computing Systems.

  • ‘Seeing the invisible’: New tech enables deep tissue imaging during surgery

    Hyperspectral imaging (HSI) is a state-of-the-art technique that captures and processes information across a given electromagnetic spectrum. Unlike traditional imaging techniques that capture light intensity at specific wavelengths, HSI collects a full spectrum at each pixel in an image. This rich spectral data enables the distinction between different materials and substances based on their unique spectral signatures. Near-infrared hyperspectral imaging (NIR-HSI) has attracted significant attention in the food and industrial fields as a non-destructive technique for analyzing the composition of objects. A notable aspect of NIR-HSI is over-thousand-nanometer (OTN) spectroscopy, which can be used to identify organic substances, estimate their concentrations, and create 2D maps of their distribution. Additionally, NIR-HSI can acquire information from deep within the body, making it useful for visualizing lesions hidden in normal tissue.
    Various types of HSI devices have been developed to suit different imaging targets and situations, such as imaging under a microscope, portable imaging, and imaging in confined spaces. At OTN wavelengths, however, ordinary visible-light cameras lose sensitivity, and only a few commercially available lenses can correct chromatic aberration. Moreover, portable NIR-HSI devices require purpose-built cameras, optical systems, and illumination systems, and no device that can acquire NIR-HSI through a rigid scope — crucial for portability — had been reported.
    Now, in a new study, a team of researchers, led by Professor Hiroshi Takemura from Tokyo University of Science (TUS) and including Toshihiro Takamatsu, Ryodai Fukushima, Kounosuke Sato, Masakazu Umezawa, and Kohei Soga, all from TUS, Hideo Yokota from RIKEN, and Abian Hernandez Guedes and Gustavo M. Calico, both from the University of Las Palmas de Gran Canaria, has recently developed the world’s first rigid endoscope system capable of HSI from visible to OTN wavelengths. Their findings were published in Volume 32, Issue 9 of Optics Express on April 17, 2024.
    At the core of this innovative system lies a supercontinuum (SC) light source and an acousto-optic tunable filter (AOTF) that can emit specific wavelengths. Prof. Takemura explains, “An SC light source can output intense coherent white light, whereas an AOTF can extract light containing a specific wavelength. This combination offers easy light transmission to the light guide and the ability to electrically switch between a broad range of wavelengths within a millisecond.”
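    In a design like this, a hyperspectral cube is acquired by retuning the filter and grabbing one camera frame per wavelength. The loop below is a conceptual sketch; set_wavelength and grab_frame are placeholder device calls, not a real SDK:

    ```python
    # Hypothetical wavelength-sweep acquisition loop for an SC + AOTF system.
    import numpy as np

    def acquire_cube(aotf, camera, wavelengths_nm):
        frames = []
        for wl in wavelengths_nm:
            aotf.set_wavelength(wl)            # AOTF retunes within ~1 ms
            frames.append(camera.grab_frame()) # one (H, W) intensity image
        return np.stack(frames, axis=-1)       # HSI cube, shape (H, W, bands)

    # e.g. cube = acquire_cube(aotf, cam, np.arange(490, 1601, 5))
    ```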
    The team verified the optical performance and classification ability of the system, demonstrating its capability to perform HSI in the range of 490-1600 nm, enabling visible as well as NIR-HSI. The results also highlighted several advantages, such as the low optical power of the extracted wavelengths, which enables non-destructive imaging, and the system’s potential for downsizing. Moreover, it acquires a more continuous NIR spectrum than conventional rigid-scope-type devices.
    To demonstrate their system’s capability, the researchers used it to acquire the spectra of six types of resins and employed a neural network to classify the spectra pixel-by-pixel in multiple wavelengths. The results revealed that when the OTN wavelength range was extracted from the HSI data for training, the neural network could classify seven different targets, including the six resins and a white reference, with an accuracy of 99.6%, reproducibility of 93.7%, and specificity of 99.1%. This means that the system can successfully extract molecular vibration information of each resin at each pixel.
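    As an illustration of what pixel-by-pixel spectral classification involves, the sketch below trains a small neural network on individual pixel spectra with scikit-learn. It is a generic stand-in, not the study's network, data, or wavelength selection:

    ```python
    # Generic pixel-wise spectral classifier: each pixel's spectrum is one sample.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_pixel_classifier(cube, labels, band_slice):
        """cube: (H, W, B) reflectance cube; labels: (H, W) integer class map;
        band_slice: bands to use, e.g. only the OTN range as in the study."""
        h, w, _ = cube.shape
        X = cube[:, :, band_slice].reshape(h * w, -1)  # one spectrum per row
        y = labels.reshape(h * w)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
        clf.fit(X, y)
        return clf

    def classify_image(clf, cube, band_slice):
        h, w, _ = cube.shape
        X = cube[:, :, band_slice].reshape(h * w, -1)
        return clf.predict(X).reshape(h, w)            # per-pixel class map
    ```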
    Prof. Takemura and his team also identified several future research directions for improving this method, including enhancing image quality and recall in the visible region and refining the design of the rigid endoscope to correct chromatic aberrations over a wide area. With these further advancements, in the coming years, the proposed HSI technology is expected to facilitate new applications in industrial inspection and quality control, working as a “superhuman vision” tool that unlocks new ways of perceiving and understanding the world around us.
    “This breakthrough, which combines expertise from different fields through a collaborative, cross-disciplinary approach, enables the identification of invaded cancer areas and the visualization of deep tissues such as blood vessels, nerves, and ureters during medical procedures, leading to improved surgical navigation. Additionally, it enables measurement using light previously unseen in industrial applications, potentially creating new areas of non-contact and non-destructive testing,” remarks Prof. Takemura. “By visualizing the invisible, we aim to accelerate the development of medicine and improve the quality of life of physicians as well as patients.”

  • When does a conductor not conduct?

    An Australian-led study has found unusual insulating behaviour in a new atomically thin material — and the ability to switch it on and off.
    Materials that feature strong interactions between electrons can display unusual properties, such as the ability to act as insulators even when they are expected to conduct electricity. These insulators, known as Mott insulators, occur when electrons become frozen by the strong repulsion they feel from other electrons nearby, preventing them from carrying a current.
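    A standard way to formalise this competition between electron motion and repulsion — a textbook reference point, not a model the article itself writes out — is the single-band Hubbard Hamiltonian:

    ```latex
    % Single-band Hubbard model: t is the hopping amplitude between neighbouring
    % lattice sites, U the on-site Coulomb repulsion between opposite spins.
    H = -t \sum_{\langle i,j \rangle, \sigma}
            \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
        + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    % When U dominates t at one electron per site (half filling), hopping is
    % suppressed and the system is a Mott insulator; changing the electron
    % population, as in this study, can restore metallic conduction.
    ```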
    Led by FLEET at Monash University, a new study (published in Nature Communications) has demonstrated a Mott insulating phase within an atomically-thin metal-organic framework (MOF), and the ability to controllably switch this material from an insulator to a conductor. This material’s ability to act as an efficient ‘switch’ makes it a promising candidate for application in new electronic devices such as transistors.
    Electron interactions written in the stars
    The atomically thin (or ‘2D’) material at the heart of the study is a type of MOF, a class of materials composed of organic molecules and metal atoms.
    “Thanks to the versatility of supramolecular chemistry approaches — in particular applied on surfaces as substrates — we have an almost infinite number of combinations to construct materials from the bottom-up, with atomic scale precision,” explains corresponding author A/Prof Schiffrin. “In these approaches, organic molecules are used as building blocks. By carefully choosing the right ingredients, we can tune the properties of MOFs.”
    The important tailor-made property of the MOF in this study is its star-shaped geometry, known as a kagome structure. This geometry enhances the influence of electron-electron interactions, directly leading to the realisation of a Mott insulator.

    The on-off switch: electron population
    The authors constructed the star-shaped kagome MOF from a combination of copper atoms and 9,10-dicyanoanthracene (DCA) molecules. They grew the material upon another atomically thin insulating material, hexagonal boron nitride (hBN), on an atomically flat copper surface, Cu(111).
    “We measured the structural and electronic properties of the MOF at the atomic scale using scanning tunnelling microscopy and spectroscopy,” explains lead author Dr. Benjamin Lowe, who recently completed his PhD with FLEET. “This allowed us to measure an unexpected energy gap — the hallmark of an insulator.”
    The authors’ suspicion that the experimentally measured energy gap was a signature of a Mott insulating phase was confirmed by comparing experimental results with dynamical mean-field theory calculations.
    “The electronic signature in our calculations showed remarkable agreement with experimental measurements and provided conclusive evidence of a Mott insulating phase,” explains FLEET alum Dr. Bernard Field, who performed the theoretical calculations in collaboration with researchers from the University of Queensland and the Okinawa Institute of Science and Technology Graduate University in Japan.
    The authors were also able to change the electron population in the MOF by using variations in the chemical environment of the hBN substrate and the electric field underneath the scanning tunnelling microscope tip.

    When some electrons are removed from the MOF, the repulsion that the remaining electrons feel is reduced and they become unfrozen — allowing the material to behave like a metal. The authors were able to observe this metallic phase from a vanishing of the measured energy gap when they removed some electrons from the MOF. Electron population is the on-off switch for controllable Mott insulator to metal phase transitions.
    What’s next?
    The ability of this MOF to switch between Mott insulator and metal phases by modifying the electron population is a promising result that could be exploited in new types of electronic devices (for example, transistors). A promising next step towards such applications would be to reproduce these findings within a device structure in which an electric field is applied uniformly across the whole material.
    The observation of a Mott insulator in a MOF which is easy to synthesise and contains abundant elements also makes these materials attractive candidates for further studies of strongly correlated phenomena — potentially including superconductivity, magnetism, or spin liquids.