More stories


    Engineers tap into good vibrations to power the Internet of Things

    In a world hungry for clean energy, engineers have created a new material that converts the simple mechanical vibrations all around us into electricity to power sensors in everything from pacemakers to spacecraft.
    The first of its kind and the product of a decade of work by researchers at the University of Waterloo and the University of Toronto, the novel generating system is compact, reliable, low-cost and very, very green.
    “Our breakthrough will have a significant social and economic impact by reducing our reliance on non-renewable power sources,” said Asif Khan, a Waterloo researcher and co-author of a new study on the project. “We need these energy-generating materials more critically at this moment than at any other time in history.”
    The system Khan and his colleagues developed is based on the piezoelectric effect, which generates an electrical current by applying pressure — mechanical vibrations are one example — to an appropriate substance.
    The effect was discovered in 1880, and since then, a limited number of piezoelectric materials, such as quartz and Rochelle salts, have been used in technologies ranging from sonar and ultrasonic imaging to microwave devices.
    The problem is that until now, traditional piezoelectric materials used in commercial devices have had limited capacity for generating electricity. They also often use lead, which Khan describes as “detrimental to the environment and human health.”
    The researchers solved both problems.
    They started by growing a large single crystal of a molecular metal-halide compound called edabco copper chloride using the Jahn-Teller effect, a well-known chemistry concept related to spontaneous geometrical distortion of a crystal field.
    Khan said that highly piezoelectric material was then used to fabricate nanogenerators “with a record power density that can harvest tiny mechanical vibrations in any dynamic circumstances, from human motion to automotive vehicles” in a process requiring neither lead nor non-renewable energy.
    The nanogenerator is tiny — 2.5 centimetres square and about the thickness of a business card — and could be conveniently used in countless situations. It has the potential to power sensors in a vast array of electronic devices, including billions needed for the Internet of Things — the burgeoning global network of objects embedded with sensors and software that connect and exchange data with other devices.
    Dr. Dayan Ban, a researcher at the Waterloo Institute for Nanotechnology, said that in future, an aircraft’s vibrations could power its sensory monitoring systems, or a person’s heartbeat could keep their battery-free pacemaker running.
    “Our new material has shown record-breaking performance,” said Ban, a professor of electrical and computer engineering. “It represents a new path forward in this field.”


    ‘Raw’ data show AI signals mirror how the brain listens and learns

    New research from the University of California, Berkeley, shows that artificial intelligence (AI) systems can process signals in a way that is remarkably similar to how the brain interprets speech, a finding scientists say might help explain the black box of how AI systems operate.
    Using a system of electrodes placed on participants’ heads, scientists with the Berkeley Speech and Computation Lab measured brain waves as participants listened to a single syllable — “bah.” They then compared that brain activity to the signals produced by an AI system trained to learn English.
    “The shapes are remarkably similar,” said Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author on the study published recently in the journal Scientific Reports. “That tells you similar things get encoded, that processing is similar.”
    A side-by-side comparison graph of the two signals shows the similarity strikingly.
    “There are no tweaks to the data,” Begus added. “This is raw.”
    AI systems have recently advanced by leaps and bounds. Since ChatGPT ricocheted around the world last year, these tools have been forecast to upend sectors of society and revolutionize how millions of people work. But despite these impressive advances, scientists have had a limited understanding of how exactly the tools they created operate between input and output.

    A question and its answer in ChatGPT have served as a benchmark for measuring an AI system’s intelligence and biases. But what happens between those steps has been something of a black box. Knowing how and why these systems provide the information they do — how they learn — becomes essential as they become ingrained in daily life in fields spanning health care to education.
    Begus and his co-authors, Alan Zhou of Johns Hopkins University and T. Christina Zhao of the University of Washington, are among a cadre of scientists working to crack open that box.
    To do so, Begus turned to his training in linguistics.
    When we listen to spoken words, Begus said, the sound enters our ears and is converted into electrical signals. Those signals then travel through the brainstem and to the outer parts of our brain. With the electrode experiment, researchers traced that path in response to 3,000 repetitions of a single sound and found that the brain waves for speech closely followed the actual sounds of language.
    The researchers transmitted the same recording of the “bah” sound through an unsupervised neural network — an AI system — that could interpret sound. Using a technique developed in the Berkeley Speech and Computation Lab, they measured the coinciding waves and documented them as they occurred.
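The comparison described above can be sketched in a few lines. This is a minimal stand-in, not the lab's actual measurement pipeline: it uses synthetic waveforms in place of real EEG and network signals, and a plain Pearson correlation as the similarity measure.

```python
import numpy as np

def waveform_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equal-length raw waveforms."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Synthetic stand-ins for a brain response and an AI-internal signal:
t = np.linspace(0, 1, 1000)
brain = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
model = np.sin(2 * np.pi * 5 * t + 0.05)  # slightly phase-shifted copy

print(round(waveform_similarity(brain, model), 2))
```

A correlation near 1 would correspond to the "remarkably similar" shapes Begus describes; working on the raw signals, with no preprocessing, is the point of the comparison.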

    Previous research required extra steps to compare waves from the brain and machines. Studying the waves in their raw form will help researchers understand and improve how these systems learn and increasingly come to mirror human cognition, Begus said.
    “I’m really interested as a scientist in the interpretability of these models,” Begus said. “They are so powerful. Everyone is talking about them. And everyone is using them. But much less is being done to try to understand them.”
    Begus believes that what happens between input and output doesn’t have to remain a black box. Understanding how those signals compare to the brain activity of human beings is an important benchmark in the race to build increasingly powerful systems. So is knowing what’s going on under the hood.
    For example, having that understanding could help put guardrails on increasingly powerful AI models. It could also improve our understanding of how errors and bias are baked into the learning processes.
    Begus said he and his colleagues are collaborating with other researchers using brain imaging techniques to measure how these signals might compare. They’re also studying how other languages, like Mandarin, are decoded in the brain differently and what that might indicate about knowledge.
    Many models are trained on visual cues, like colors or written text — both of which have thousands of variations at the granular level. Language, however, opens the door for a more solid understanding, Begus said.
    The English language, for example, has just a few dozen sounds.
    “If you want to understand these models, you have to start with simple things. And speech is way easier to understand,” Begus said. “I am very hopeful that speech is the thing that will help us understand how these models are learning.”
    In cognitive science, one of the primary goals is to build mathematical models that resemble humans as closely as possible. The newly documented similarities in brain waves and AI waves are a benchmark on how close researchers are to meeting that goal.
    “I’m not saying that we need to build things like humans,” Begus said. “I’m not saying that we don’t. But understanding how different architectures are similar or different from humans is important.”


    Deep neural network provides robust detection of disease biomarkers in real time

    Sophisticated systems for the detection of biomarkers — molecules such as DNA or proteins that indicate the presence of a disease — are crucial for real-time diagnostic and disease-monitoring devices.
    Holger Schmidt, distinguished professor of electrical and computer engineering at UC Santa Cruz, and his group have long been focused on developing unique, highly sensitive devices called optofluidic chips to detect biomarkers.
    Schmidt’s graduate student Vahid Ganjalizadeh led an effort to use machine learning to enhance these systems, improving their ability to accurately classify biomarkers. The deep neural network he developed classifies particle signals with 99.8 percent accuracy in real time, on a system that is relatively cheap and portable for point-of-care applications, as shown in a new paper in Scientific Reports.
    When taking biomarker detectors into the field or a point-of-care setting such as a health clinic, the signals received by the sensors may not be as high quality as those in a lab or a controlled environment. This may be due to a variety of factors, such as the need to use cheaper chips to bring down costs, or environmental characteristics such as temperature and humidity.
    To address the challenges of a weak signal, Schmidt and his team developed a deep neural network that can identify the source of that weak signal with high confidence. The researchers trained the neural network with known training signals, teaching it to recognize potential variations it could see, so that it can recognize patterns and identify new signals with very high accuracy.
    First, a parallel cluster wavelet analysis (PCWA) approach designed in Schmidt’s lab detects that a signal is present. Then, the neural network processes the potentially weak or noisy signal, identifying its source. This system works in real time, so users are able to receive results in a fraction of a second.
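The two-stage flow described above — detect first, then classify — can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: a simple energy threshold plays the role of the lab's PCWA detector, and a nearest-centroid rule plays the role of the deep neural network.

```python
import numpy as np

def detect_events(signal, window=16, thresh=3.0):
    """Stage 1 (stand-in for PCWA): return start indices of windows
    whose energy exceeds `thresh` times the median window energy."""
    n = signal.size // window
    energy = (signal[:n * window].reshape(n, window) ** 2).sum(axis=1)
    return np.flatnonzero(energy > thresh * np.median(energy)) * window

def classify(segment, centroids):
    """Stage 2 (stand-in for the deep network): nearest-centroid label
    for a detected segment."""
    dists = [np.linalg.norm(segment - c) for c in centroids]
    return int(np.argmin(dists))

# A weak, noisy trace with one particle-like pulse buried in it:
rng = np.random.default_rng(1)
sig = 0.1 * rng.normal(size=256)
sig[96:112] += 2.0

starts = detect_events(sig)                      # finds the pulse window
centroids = [np.zeros(16), np.full(16, 2.0)]     # two known signal classes
label = classify(sig[starts[0]:starts[0] + 16], centroids)
```

The real system replaces both stages with far more capable components, but the division of labor — a fast detector gating a classifier — is the same.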

    “It’s all about making the most of possibly low quality signals, and doing that really fast and efficiently,” Schmidt said.
    A smaller version of the neural network model can run on portable devices. In the paper, the researchers run the system on a Google Coral Dev Board, a relatively cheap edge device for accelerated execution of artificial intelligence algorithms. This means the system also requires less power to execute the processing compared to other techniques.
    “Unlike some research that requires running on supercomputers to do high-accuracy detection, we proved that even a compact, portable, relatively cheap device can do the job for us,” Ganjalizadeh said. “It makes it available, feasible, and portable for point-of-care applications.”
    The entire system is designed to be used completely locally, meaning the data processing can happen without internet access, unlike other systems that rely on cloud computing. This also provides a data security advantage, because results can be produced without the need to share data with a cloud server provider.
    It is also designed to be able to give results on a mobile device, eliminating the need to bring a laptop into the field.
    “You can build a more robust system that you could take out to under-resourced or less-developed regions, and it still works,” Schmidt said.
    This improved system will work for any other biomarkers Schmidt’s lab’s systems have been used to detect in the past, such as COVID-19, Ebola, flu, and cancer biomarkers. Although they are currently focused on medical applications, the system could potentially be adapted for the detection of any type of signal.
    To push the technology further, Schmidt and his lab members plan to add even more dynamic signal processing capabilities to their devices. This will simplify the system and combine the processing techniques needed to detect signals at both low and high concentrations of molecules. The team is also working to bring discrete parts of the setup into the integrated design of the optofluidic chip.


    A touch-responsive fabric armband — for flexible keyboards, wearable sketchpads

    It’s time to roll up your sleeves for the next advance in wearable technology — a fabric armband that’s actually a touch pad. In ACS Nano, researchers say they have devised a way to make playing video games, sketching cartoons and signing documents easier. Their proof-of-concept silk armband turns a person’s forearm into a keyboard or sketchpad. The three-layer, touch-responsive material interprets what a user draws or types and converts it into images on a computer.
    Computer trackpads and electronic signature-capture devices seem to be everywhere, but they aren’t as widely used in wearables. Researchers have suggested making flexible touch-responsive panels from clear, electrically conductive hydrogels, but these substances are sticky, making them hard to write on and irritating to the skin. So, Xueji Zhang, Lijun Qu, Mingwei Tian and colleagues wanted to incorporate a similar hydrogel into a comfortable fabric sleeve for drawing or playing games on a computer.
    The researchers sandwiched a pressure-sensitive hydrogel between layers of knit silk. The top piece was coated in graphene nanosheets to make the fabric electrically conductive. Attaching the sensing panel to electrodes and a data collection system produced a pressure-responsive pad with real-time, rapid sensing when a finger slid over it, writing numbers and letters. The device was then incorporated into an arm-length silk sleeve with a touch-responsive area on the forearm. In experiments, a user controlled the direction of blocks in a computer game and sketched colorful cartoons in a computer drawing program from the armband. The researchers say that their proof-of-concept wearable touch panel could inspire the next generation of flexible keyboards and wearable sketchpads.


    Joyful music could be a game changer for virtual reality headaches

    Listening to music could reduce the dizziness, nausea and headaches virtual reality users might experience after using digital devices, research suggests.
    Cybersickness — a type of motion sickness from virtual reality experiences such as computer games — significantly reduces when joyful music is part of the immersive experience, the study found.
    The intensity of the nausea-related symptoms of cybersickness was also found to substantially decrease with both joyful and calming music.
    Researchers from the University of Edinburgh assessed the effects of music in a virtual reality environment among 39 people aged between 22 and 36.
    They conducted a series of tests to assess the effect cybersickness had on a participant’s memory skills, reading speed and reaction times.
    Participants were immersed in a virtual environment, where they experienced three roller coaster rides aimed at inducing cybersickness.

    Two of the three rides were accompanied by lyric-free electronic music, of the kind people might ordinarily stream, that a previous study had rated as either calming or joyful.
    One ride was completed in silence and the order of the rides was randomised across participants.
    After each ride, participants rated their cybersickness symptoms and performed some memory and reaction time tests.
    Eye-tracking tests were also conducted to measure their reading speed and pupil size.
    For comparison purposes the participants had completed the same tests before the rides.

    The study found that joyful music significantly decreased the overall cybersickness intensity. Joyful and calming music substantially decreased the intensity of nausea-related symptoms.
    Cybersickness among the participants was associated with a temporary reduction in verbal working memory test scores, and a decrease in pupil size. It also significantly slowed reaction times and reading speed.
    The researchers also found higher levels of gaming experience were associated with lower cybersickness. There was no difference in the intensity of the cybersickness between female and male participants with comparable gaming experience.
    Researchers say the findings demonstrate the potential of music to lessen cybersickness, clarify how gaming experience is linked to cybersickness levels, and document the condition’s significant effects on thinking skills, reaction times, reading ability and pupil size.
    Dr Sarah E MacPherson, of the University of Edinburgh’s School of Philosophy, Psychology & Language Sciences, said: “Our study suggests calming or joyful music as a solution for cybersickness in immersive virtual reality. Virtual reality has been used in educational and clinical settings but the experience of cybersickness can temporarily impair someone’s thinking skills as well as slowing down their reaction times. The development of music as an intervention could encourage virtual reality to be used more extensively within educational and clinical settings.”
    The study was made possible through a collaboration between Psychology at the University of Edinburgh and the Inria Centre at the University of Rennes in France.


    Self-folding origami machines powered by chemical reaction

    A Cornell-led collaboration harnessed chemical reactions to make microscale origami machines self-fold — freeing them from the liquids in which they usually function, so they can operate in dry environments and at room temperature.
    The approach could one day lead to the creation of a new fleet of tiny autonomous devices that can rapidly respond to their chemical environment.
    The group’s paper, “Gas-Phase Microactuation Using Kinetically Controlled Surface States of Ultrathin Catalytic Sheets,” published May 1 in Proceedings of the National Academy of Sciences. The paper’s co-lead authors are Nanqi Bao, Ph.D. ’22, and former postdoctoral researcher Qingkun Liu, Ph.D. ’22.
    The project was led by senior author Nicholas Abbott, a Tisch University Professor in the Robert F. Smith School of Chemical and Biomolecular Engineering in Cornell Engineering, along with Itai Cohen, professor of physics, and Paul McEuen, the John A. Newman Professor of Physical Science, both in the College of Arts and Sciences; and David Muller, the Samuel B. Eckert Professor of Engineering in Cornell Engineering.
    “There are quite good technologies for electrical to mechanical energy transduction, such as the electric motor, and the McEuen and Cohen groups have shown a strategy for doing that on the microscale, with their robots,” Abbott said. “But if you look for direct chemical to mechanical transductions, actually there are very few options.”
    Prior efforts depended on chemical reactions that could only occur in extreme conditions, such as at high temperatures of several hundred degrees Celsius, and the reactions were often tediously slow — sometimes as long as 10 minutes — making the approach impractical for everyday technological applications.

    However, Abbott’s group found a loophole of sorts while reviewing data from a catalysis experiment: a small section of the chemical reaction pathway contained both slow and fast steps.
    “If you look at the response of the chemical actuator, it’s not that it goes from one state directly to the other state. It actually goes through an excursion into a bent state, a curvature, which is more extreme than either of the two end states,” Abbott said. “If you understand the elementary reaction steps in a catalytic pathway, you can go in and sort of surgically extract out the rapid steps. You can operate your chemical actuator around those rapid steps, and just ignore the rest of it.”
    The researchers needed the right material platform to leverage that rapid kinetic moment, so they turned to McEuen and Cohen, who had worked with Muller to develop ultrathin platinum sheets capped with titanium.
    The group also collaborated with theorists, led by professor Manos Mavrikakis at the University of Wisconsin, Madison, who used electronic structure calculations to dissect the chemical reaction that occurs when hydrogen — adsorbed to the material — is exposed to oxygen.
    The researchers were then able to exploit the crucial moment that the oxygen quickly strips the hydrogen, causing the atomically thin material to deform and bend, like a hinge.

    The system completes an actuation cycle in 600 milliseconds and can operate at 20 degrees Celsius — i.e., room temperature — in dry environments.
    “The result is quite generalizable,” Abbott said. “There are a lot of catalytic reactions which have been developed based on all sorts of species. So carbon monoxide, nitrogen oxides, ammonia: they’re all candidates to use as fuels for chemically driven actuators.”
    The team anticipates applying the technique to other catalytic metals, such as palladium and palladium gold alloys. Eventually this work could lead to autonomous material systems in which the controlling circuitry and onboard computation are handled by the material’s response — for example, an autonomous chemical system that regulates flows based on chemical composition.
    “We are really excited because this work paves the way to microscale origami machines that work in gaseous environments,” Cohen said.
    Co-authors include postdoctoral researcher Michael Reynolds, M.S. ’17, Ph.D. ’21; doctoral student Wei Wang; Michael Cao ’14; and researchers at the University of Wisconsin, Madison.
    The research was supported by the Cornell Center for Materials Research, which is supported by the National Science Foundation’s MRSEC program, the Army Research Office, the NSF, the Air Force Office of Scientific Research and the Kavli Institute at Cornell for Nanoscale Science.
    The researchers made use of the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the NSF; and National Energy Research Scientific Computing Center (NERSC) resources, which is supported by the U.S. Department of Energy’s Office of Science.
    The project is part of the Nanoscale Science and Microsystems Engineering (NEXT Nano) program, which is designed to push nanoscale science and microsystems engineering to the next level of design, function and integration.


    Quantum entanglement of photons doubles microscope resolution

    Using a “spooky” phenomenon of quantum physics, Caltech researchers have discovered a way to double the resolution of light microscopes.
    In a paper appearing in the journal Nature Communications, a team led by Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering, shows the achievement of a leap forward in microscopy through what is known as quantum entanglement. Quantum entanglement is a phenomenon in which two particles are linked such that the state of one particle is tied to the state of the other particle regardless of whether the particles are anywhere near each other. Albert Einstein famously referred to quantum entanglement as “spooky action at a distance” because it could not be explained by his relativity theory.
    According to quantum theory, any type of particle can be entangled. In the case of Wang’s new microscopy technique, dubbed quantum microscopy by coincidence (QMC), the entangled particles are photons. Collectively, two entangled photons are known as a biphoton, and, importantly for Wang’s microscopy, they behave in some ways as a single particle that has double the momentum of a single photon.
    Since quantum mechanics says that all particles are also waves, and that the wavelength of a wave is inversely related to the momentum of the particle, particles with larger momenta have smaller wavelengths. So, because a biphoton has double the momentum of a photon, its wavelength is half that of the individual photons.
    This is key to how QMC works. A microscope can only image the features of an object whose minimum size is half the wavelength of light used by the microscope. Reducing the wavelength of that light means the microscope can see even smaller things, which results in increased resolution.
    Quantum entanglement is not the only way to reduce the wavelength of light being used in a microscope. Green light has a shorter wavelength than red light, for example, and purple light has a shorter wavelength than green light. But due to another quirk of quantum physics, light with shorter wavelengths carries more energy. So, once you get down to light with a wavelength small enough to image tiny things, the light carries so much energy that it will damage the items being imaged, especially living things such as cells. This is why ultraviolet (UV) light, which has a very short wavelength, gives you a sunburn.

    QMC gets around this limit by using biphotons that carry the lower energy of longer-wavelength photons while having the shorter wavelength of higher-energy photons.
    “Cells don’t like UV light,” Wang says. “But if we can use 400-nanometer light to image the cell and achieve the effect of 200-nm light, which is UV, the cells will be happy, and we’re getting the resolution of UV.”
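The arithmetic behind Wang's point is simple enough to check directly: a biphoton carries twice the momentum of a single photon, wavelength is inversely proportional to momentum, and the resolution limit is half the effective wavelength. This sketch just encodes those three statements from the article.

```python
def biphoton_wavelength(photon_wavelength_nm: float) -> float:
    """A biphoton has twice the momentum of one of its photons;
    since wavelength is inversely proportional to momentum, it
    behaves at half the single-photon wavelength."""
    return photon_wavelength_nm / 2

def resolution_limit(wavelength_nm: float) -> float:
    """Smallest resolvable feature: half the (effective) wavelength."""
    return wavelength_nm / 2

effective = biphoton_wavelength(400.0)  # 400 nm visible laser light
print(effective, resolution_limit(effective))  # 200.0 100.0
```

So 400 nm light, entangled into biphotons, images with the roughly 200 nm effective wavelength of UV — and a resolution limit near 100 nm — while depositing only visible-light energy on the cell.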
    To achieve that, Wang’s team built an optical apparatus that shines laser light into a special kind of crystal that converts some of the photons passing through it into biphotons. Even using this special crystal, the conversion is very rare and occurs in about one in a million photons. Using a series of mirrors, lenses, and prisms, each biphoton — which actually consists of two discrete photons — is split up and shuttled along two paths, so that one of the paired photons passes through the object being imaged and the other does not. The photon passing through the object is called the signal photon, and the one that does not is called the idler photon. These photons then continue along through more optics until they reach a detector connected to a computer that builds an image of the cell based on the information carried by the signal photon. Amazingly, the paired photons remain entangled as a biphoton behaving at half the wavelength despite the presence of the object and their separate pathways.
    Wang’s lab was not the first to work on this kind of biphoton imaging, but it was the first to create a viable system using the concept. “We developed what we believe is a rigorous theory as well as a faster and more accurate entanglement-measurement method. We reached microscopic resolution and imaged cells.”
    While there is no theoretical limit to the number of photons that can be entangled with each other, each additional photon would further increase the momentum of the resulting multiphoton while further decreasing its wavelength.
    Wang says future research could enable entanglement of even more photons, although he notes that each extra photon further reduces the probability of a successful entanglement, which, as mentioned above, is already as low as a one-in-a-million chance.
    The paper describing the work, “Quantum Microscopy of Cells at the Heisenberg Limit,” appears in the April 28 issue of Nature Communications. Co-authors are Zhe He and Yide Zhang, both postdoctoral scholar research associates in medical engineering; medical engineering graduate student Xin Tong (MS ’21); and Lei Li (PhD ’19), formerly a medical engineering postdoctoral scholar and now an assistant professor of electrical and computer engineering at Rice University.
    Funding for the research was provided by the Chan Zuckerberg Initiative and the National Institutes of Health.


    Could wearables capture well-being?

    Applying machine learning models, a type of artificial intelligence (AI), to data collected passively from wearable devices can identify a patient’s degree of resilience and well-being, according to investigators at the Icahn School of Medicine at Mount Sinai in New York.
    The findings, reported in the May 2nd issue of JAMIA Open, support wearable devices, such as the Apple Watch®, as a way to monitor and assess psychological states remotely without requiring the completion of mental health questionnaires.
    The paper points out that resilience, or an individual’s ability to overcome difficulty, is an important stress mitigator, reduces morbidity, and improves chronic disease management.
    “Wearables provide a means to continually collect information about an individual’s physical state. Our results provide insight into the feasibility of assessing psychological characteristics from this passively collected data,” said first author Robert P. Hirten, MD, Clinical Director, Hasso Plattner Institute for Digital Health at Mount Sinai. “To our knowledge, this is the first study to evaluate whether resilience, a key mental health feature, can be evaluated from devices such as the Apple Watch.”
    Mental health disorders are common, accounting for 13 percent of the burden of global disease, with a quarter of the population at some point experiencing psychological illness. Yet we have limited resources for their evaluation, say the researchers.
    “There are wide disparities in access across geography and socioeconomic status, and the need for in-person assessment or the completion of validated mental health surveys is further limiting,” said senior author Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute at Icahn Mount Sinai. “A better understanding of who is at psychological risk and an improved means of tracking the impact of psychological interventions is needed. The growth of digital technology presents an opportunity to improve access to mental health services for all people.”
    To determine if machine learning models could be trained to distinguish an individual’s degree of resilience and psychological well-being using the data from wearable devices, the Icahn Mount Sinai researchers analyzed data from the Warrior Watch Study. Leveraged for the current digital observational study, the data set comprised 329 health care workers enrolled at seven hospitals in New York City.

    Subjects wore an Apple Watch® Series 4 or 5 for the duration of their participation, measuring heart rate variability and resting heart rate throughout the follow-up period. Surveys were collected measuring resilience, optimism, and emotional support at baseline. The metrics collected were found to be predictive in identifying resilience or well-being states. Despite the Warrior Watch Study not being designed to evaluate this endpoint, the findings support the further assessment of psychological characteristics from passively collected wearable data.
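The idea of predicting a psychological state from passively collected physiology can be illustrated with a toy model. This is emphatically not the study's method: the features (heart-rate variability and resting heart rate), the binary "resilience" labels, and the data below are all synthetic, and the classifier is plain logistic regression fit by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for passively collected wearable features:
# heart-rate variability (ms) and resting heart rate (bpm),
# for a "high resilience" group and a "low resilience" group.
n = 200
hrv = np.concatenate([rng.normal(55, 8, n), rng.normal(40, 8, n)])
rhr = np.concatenate([rng.normal(62, 5, n), rng.normal(72, 5, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = high resilience

# Design matrix with intercept; standardize the two features.
X = np.column_stack([np.ones(2 * n), hrv, rhr])
X[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)

# Logistic regression by plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / y.size

acc = ((X @ w > 0) == (y == 1)).mean()
print(round(acc, 2))
```

With groups whose physiology genuinely differs, even this minimal model separates them well; the study's contribution is showing that real wearable data carries a comparable signal for resilience and well-being.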
    “We hope that this approach will enable us to bring psychological assessment and care to a larger population, who may not have access at this time,” said Micol Zweig, MPH, co-author of the paper and Associate Director of Clinical Research, Hasso Plattner Institute for Digital Health at Mount Sinai. “We also intend to evaluate this technique in other patient populations to further refine the algorithm and improve its applicability.”
    To that end, the research team plans to continue using wearable data to observe a range of physical and psychological disorders and diseases. The simultaneous development of sophisticated analytical tools, including artificial intelligence, say the investigators, can facilitate the analysis of data collected from these devices and apps to identify patterns associated with a given mental or physical disease condition.
    The paper is titled “A machine learning approach to determine resilience utilizing wearable device data: analysis of an observational cohort.”
    Additional co-authors are Matteo Danielleto, PhD, Maria Suprun, PhD, Eddye Golden, MPH, Sparshdeep Kaur, BBA, Drew Helmus, MPH, Anthony Biello, BA, Dennis Charney, MD, Laurie Keefer, PhD, Mayte Suarez-Farinas, PhD, and Girish N Nadkami, MD, all from the Icahn School of Medicine at Mount Sinai.
    Support for this study was provided by the Ehrenkranz Lab for Human Resilience, the BioMedical Engineering and Imaging Institute, the Hasso Plattner Institute for Digital Health, the Mount Sinai Clinical Intelligence Center, and the Dr. Henry D. Janowitz Division of Gastroenterology, all at Icahn Mount Sinai, and from the National Institutes of Health, grant number K23DK129835.