More stories

  • Researchers engineer AI path to prevent power outages

    University of Texas at Dallas researchers have developed an artificial intelligence (AI) model that could help electrical grids prevent power outages by automatically rerouting electricity in milliseconds.
    The UT Dallas researchers, who collaborated with engineers at the University at Buffalo in New York, demonstrated the automated system in a study published online June 4 in Nature Communications.
    The approach is an early example of “self-healing grid” technology, which uses AI to detect and repair problems such as outages autonomously, without human intervention, when issues such as storm-damaged power lines occur.
    The North American grid is an extensive, complex network of transmission and distribution lines, generation facilities and transformers that distributes electricity from power sources to consumers.
    Using various scenarios in a test network, the researchers demonstrated that their solution can automatically identify alternative routes to transfer electricity to users before an outage occurs. AI has the advantage of speed: The system can automatically reroute electrical flow in microseconds, while current human-controlled processes to determine alternate paths could take from minutes to hours.
    “Our goal is to find the optimal path to send power to the majority of users as quickly as possible,” said Dr. Jie Zhang, associate professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science. “But more research is needed before this system can be implemented.”
    Zhang, who is co-corresponding author of the study, and his colleagues used technology that applies machine learning to graphs in order to map the complex relationships between entities that make up a power distribution network. Graph machine learning involves describing a network’s topology, the way the various components are arranged in relation to each other and how electricity moves through the system.

    Network topology also may play a critical role in applying AI to solve problems in other complex systems, such as critical infrastructure and ecosystems, said study co-author Dr. Yulia Gel, professor of mathematical sciences in the School of Natural Sciences and Mathematics.
    “In this interdisciplinary project, by leveraging our team expertise in power systems, mathematics and machine learning, we explored how we can systematically describe various interdependencies in the distribution systems using graph abstractions,” Gel said. “We then investigated how the underlying network topology, integrated into the reinforcement learning framework, can be used for more efficient outage management in the power distribution system.”
    The researchers’ approach relies on reinforcement learning, in which the system learns from feedback which decisions achieve the best results. Led by co-corresponding author Dr. Souma Chowdhury, associate professor of mechanical and aerospace engineering, University at Buffalo researchers focused on the reinforcement learning aspect of the project.
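    To make the idea concrete, here is a minimal sketch, not the UT Dallas/Buffalo system: a toy distribution feeder is represented as a graph, and a simple tabular reinforcement-learning agent learns which normally open tie switches to close after a line fault so that the most load stays energized. The node names, load values, switch options and reward are all illustrative assumptions.

```python
# Toy sketch (assumptions throughout): a feeder graph plus a tabular
# reinforcement-learning agent that learns which tie switches to close
# after a fault so the largest amount of load stays energized.
import random
import networkx as nx

LOADS = {"n1": 1.0, "n2": 0.8, "n3": 1.2, "n4": 0.5}    # MW, illustrative
FAULTS = [("n1", "n2"), ("n2", "n3")]                    # candidate line faults
ACTIONS = [frozenset(), frozenset({"s1"}), frozenset({"s2"}),
           frozenset({"s1", "s2"})]                      # tie switches to close

def build_feeder(closed_switches):
    """Substation 'sub' feeds n1..n4; 'der' is a local backup source
    (e.g., campus solar plus batteries) reachable via tie switches s1/s2."""
    g = nx.Graph()
    g.add_edges_from([("sub", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "n4")])
    if "s1" in closed_switches:
        g.add_edge("der", "n3")
    if "s2" in closed_switches:
        g.add_edge("der", "n4")
    return g

def served_load(fault, closed_switches):
    """Reward: total load still connected to a source after the faulted line is removed."""
    g = build_feeder(closed_switches)
    if g.has_edge(*fault):
        g.remove_edge(*fault)
    return sum(mw for node, mw in LOADS.items()
               if any(src in g and nx.has_path(g, src, node) for src in ("sub", "der")))

# One-step tabular Q-learning: the "state" is which line faulted,
# the "action" is the set of switches to close in response.
q = {(f, a): 0.0 for f in FAULTS for a in ACTIONS}
alpha, epsilon = 0.3, 0.2
for _ in range(2000):
    fault = random.choice(FAULTS)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                       # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(fault, a)])    # exploit
    reward = served_load(fault, action)
    q[(fault, action)] += alpha * (reward - q[(fault, action)])

for fault in FAULTS:
    best = max(ACTIONS, key=lambda a: q[(fault, a)])
    label = ", ".join(sorted(best)) or "no switches"
    print(f"fault on {fault}: close {label} -> {served_load(fault, best):.1f} MW served")
```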
    If electricity is blocked due to line faults, the system is able to reconfigure using switches and draw power from available sources in close proximity, such as from large-scale solar panels or batteries on a university campus or business, said Roshni Anna Jacob, a UTD electrical engineering doctoral student and the paper’s co-first author.
    “You can leverage those power generators to supply electricity in a specific area,” Jacob said.
    Having focused first on preventing outages, the researchers next aim to develop similar technology to repair and restore the grid after a power disruption. More

  • Novel blood-powered chip offers real-time health monitoring

    Metabolic disorders such as diabetes and osteoporosis are burgeoning throughout the world, especially in developing countries.
    The diagnosis for these disorders is typically a blood test, but because the existing healthcare infrastructure in remote areas cannot support these tests, most individuals go undiagnosed and untreated. Conventional methods also involve labor-intensive, invasive processes that tend to be time-consuming and make real-time monitoring unfeasible, particularly in real-life settings and rural populations.
    Researchers at the University of Pittsburgh and University of Pittsburgh Medical Center are proposing a new device that uses blood to generate electricity and measure its conductivity, opening doors to medical care in any location.
    “As the fields of nanotechnology and microfluidics continue to advance, there is a growing opportunity to develop lab-on-a-chip devices capable of overcoming the constraints of modern medical care,” said Amir Alavi, assistant professor of civil and environmental engineering at Pitt’s Swanson School of Engineering. “These technologies could potentially transform healthcare by offering quick and convenient diagnostics, ultimately improving patient outcomes and the effectiveness of medical services.”
    Now, We Got Good Blood
    Blood electrical conductivity is a valuable metric for assessing various health parameters and detecting medical conditions.
    This conductivity is predominantly governed by the concentration of essential electrolytes, notably sodium and chloride ions. These electrolytes are integral to a multitude of physiological processes, helping physicians pinpoint a diagnosis.

    “Blood is basically a water-based environment that has various molecules that conduct or impede electric currents,” explained Dr. Alan Wells, the medical director of UPMC Clinical Laboratories, Executive Vice Chairman, Section of Laboratory Medicine at University of Pittsburgh and UPMC, and Thomas Gill III Professor of Pathology, Pitt School of Medicine, Department of Pathology. “Glucose, for example, is an electrical conductor. We are able to see how it affects conductivity through these measurements. Thus, allowing us to make a diagnosis on the spot.”
    Despite its importance, knowledge of human blood conductivity is limited because of measurement challenges such as electrode polarization, limited access to human blood samples, and the complexities of maintaining blood temperature. Measuring conductivity at frequencies below 100 Hz is particularly important for gaining a deeper understanding of the blood’s electrical properties and fundamental biological processes, but it is even more difficult.
    A Pocket-Sized Lab
    The research team is proposing an innovative, portable millifluidic nanogenerator lab-on-a-chip device capable of measuring blood at low frequencies. The device utilizes blood as a conductive substance within its integrated triboelectric nanogenerator, or TENG. The proposed blood-based TENG system can convert mechanical energy into electricity via triboelectrification.
    This process involves the exchange of electrons between contacting materials, resulting in a charge transfer. In a TENG system, the electron transfer and charge separation generate a voltage difference that drives electric current when the materials experience relative motion like compression or sliding. The team analyzes the voltage generated by the device under predefined loading conditions to determine the electrical conductivity of the blood. The self-powering mechanism enables miniaturization of the proposed blood-based nanogenerator. The team also used AI models to directly estimate blood electrical conductivity using the voltage patterns generated by the device.
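    As a rough illustration of that last step, the following sketch calibrates a model to estimate conductivity from voltage patterns. It is not the Pitt team’s model: the synthetic waveform, the assumed inverse relationship between peak voltage and conductivity, the feature set and the regressor are all illustrative assumptions.

```python
# Illustrative sketch (not the published model): learn a mapping from
# simulated TENG voltage traces to blood electrical conductivity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def synth_waveform(conductivity_s_per_m, n_samples=200):
    """Hypothetical TENG voltage pulse whose peak falls as conductivity rises."""
    t = np.linspace(0.0, 1.0, n_samples)
    peak = 5.0 / (1.0 + conductivity_s_per_m)              # assumed trend
    v = peak * np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 3.0 * t)
    return v + rng.normal(0.0, 0.02, n_samples)            # measurement noise

def features(v):
    """Simple summary features of one voltage trace."""
    return [v.max(), v.min(), np.abs(v).sum(), float(np.argmax(v))]

# Hypothetical calibration set: traces recorded at known conductivities.
cond = rng.uniform(0.5, 2.0, 300)                          # S/m, assumed range
X = np.array([features(synth_waveform(c)) for c in cond])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, cond)

# "Unknown" sample: estimate conductivity from its voltage pattern alone.
true_c = 1.3
est_c = model.predict(np.array([features(synth_waveform(true_c))]))[0]
print(f"true {true_c:.2f} S/m, estimated {est_c:.2f} S/m")
```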
    To test its accuracy, the team compared the device’s results with those of a traditional test, and the comparison proved successful. This opens the door to taking testing to where people live. In addition, blood-powered nanogenerators can function in the body wherever blood is present, enabling self-powered diagnostics based on the local blood chemistry. More

  • Prying open the AI black box

    Artificial intelligence continues to squirm its way into many aspects of our lives. But what about biology, the study of life itself? AI can sift through hundreds of thousands of genome data points to identify potential new therapeutic targets. While these genomic insights may appear helpful, scientists aren’t sure how today’s AI models come to their conclusions in the first place. Now, a new system named SQUID arrives on the scene armed to pry open AI’s black box of murky internal logic.
    SQUID, short for Surrogate Quantitative Interpretability for Deepnets, is a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and can lead to more accurate predictions about the effects of genetic mutations.
    How does it work so much better? The key, CSHL Assistant Professor Peter Koo says, lies in SQUID’s specialized training.
    “The tools that people use to try to understand these models have been largely coming from other fields like computer vision or natural language processing. While they can be useful, they’re not optimal for genomics. What we did with SQUID was leverage decades of quantitative genetics knowledge to help us understand what these deep neural networks are learning,” explains Koo.
    SQUID works by first generating a library of over 100,000 variant DNA sequences. It then analyzes the library of mutations and their effects using a program called MAVE-NN (Multiplex Assays of Variant Effects Neural Network). This tool allows scientists to perform thousands of virtual experiments simultaneously. In effect, they can “fish out” the algorithms behind a given AI’s most accurate predictions. Their computational “catch” could set the stage for experiments that are more grounded in reality.
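    The sketch below shows the surrogate-modeling idea in miniature; it is not the CSHL code. A toy scoring function stands in for a trained genomic deep net, the library is far smaller than SQUID’s 100,000-plus variants, and a plain additive linear surrogate is fitted in place of MAVE-NN; the surrogate’s weights then point to the sequence positions that drive the predictions.

```python
# Illustrative sketch of surrogate-based interpretation (all pieces are toy stand-ins).
import numpy as np

rng = np.random.default_rng(1)
BASES = "ACGT"
wt = "ACGTGACGTTAGCATTGACC"            # hypothetical 20-bp wild-type sequence

def black_box_score(seq):
    """Stand-in for a genomic deep net: it secretly rewards a 'GACG' motif at positions 4-7."""
    return 2.0 * (seq[4:8] == "GACG") + 0.1 * seq.count("G") + rng.normal(0.0, 0.05)

def one_hot(seq):
    return np.array([[float(b == x) for x in BASES] for b in seq]).ravel()

# 1) Generate an in-silico mutational library around the wild type
#    (SQUID itself uses more than 100,000 variant sequences).
library = []
for _ in range(5000):
    s = list(wt)
    for pos in rng.choice(len(wt), size=2, replace=False):   # two random point mutations
        s[pos] = BASES[rng.integers(4)]
    library.append("".join(s))

# 2) Score every variant with the black-box model (a virtual MAVE experiment).
X = np.array([one_hot(s) for s in library])
y = np.array([black_box_score(s) for s in library])

# 3) Fit an interpretable additive surrogate by least squares
#    (a stand-in for the MAVE-NN modeling step).
w, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
effect = w.reshape(len(wt), len(BASES))   # estimated per-position, per-base effects

top = np.argsort(np.abs(effect).max(axis=1))[::-1][:4]
print("positions the surrogate flags as most important:", sorted(top.tolist()))
```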
    “In silico [virtual] experiments are no replacement for actual laboratory experiments. Nevertheless, they can be very informative. They can help scientists form hypotheses for how a particular region of the genome works or how a mutation might have a clinically relevant effect,” explains CSHL Associate Professor Justin Kinney, a co-author of the study.
    There are tons of AI models in the sea. More enter the waters each day. Koo, Kinney, and colleagues hope that SQUID will help scientists grab hold of those that best meet their specialized needs.
    Though mapped, the human genome remains an incredibly challenging terrain. SQUID could help biologists navigate the field more effectively, bringing them closer to their findings’ true medical implications. More

  • Unifying behavioral analysis through animal foundation models

    Behavioral analysis can provide a lot of information about the health status or motivations of a living being. A new technology developed at EPFL makes it possible for a single deep learning model to detect animal motion across many species and environments. This “foundation model,” called SuperAnimal, can be used for animal conservation, biomedicine, and neuroscience research.
    Although there is the saying “straight from the horse’s mouth,” it’s impossible to get a horse to tell you if it’s in pain or experiencing joy. Yet its body will express the answer in its movements. To a trained eye, pain will manifest as a change in gait, or, in the case of joy, the animal’s facial expressions could change. But what if we could automate this with AI? And what about AI models for cows, dogs, cats, or even mice? Automating behavioral analysis not only removes observer bias, it also helps humans get to the right answer more efficiently.
    Today marks the beginning of a new chapter in posture analysis for behavioral phenotyping. Mackenzie Mathis’ laboratory at EPFL publishes a Nature Communications article describing a particularly effective new open-source tool that requires no human annotations to get the model to track animals. Named “SuperAnimal,” it can automatically recognize, without human supervision, the location of “keypoints” (typically joints) in a whole range of animals — over 45 animal species — and even in mythical ones!
    “The current pipeline allows users to tailor deep learning models, but this then relies on human effort to identify keypoints on each animal to create a training set,” explains Mackenzie Mathis. “This leads to duplicated labeling efforts across researchers and can lead to different semantic labels for the same keypoints, making merging data to train large foundation models very challenging. Our new method provides a new approach to standardize this process and train large-scale datasets. It also makes labeling 10 to 100 times more effective than current tools.”
    The “SuperAnimal method” is an evolution of a pose estimation technique that Mackenzie Mathis’ laboratory had already distributed under the name “DeepLabCut™.” You can read more about this game-changing tool and its origin in this new Nature technology feature.
    “Here, we have developed an algorithm capable of compiling a large set of annotations across databases and train the model to learn a harmonized language — we call this pre-training the foundation model,” explains Shaokai Ye, a PhD student researcher and first author of the study. “Then users can simply deploy our base model or fine-tune it on their own data, allowing for further customization if needed.”
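    The snippet below is a toy illustration of that “harmonized language” idea, not the SuperAnimal code: two hypothetical labs name the same mouse keypoints differently, and a shared vocabulary lets their annotations be merged into a single pre-training set. The dataset names, keypoints and coordinates are made up.

```python
# Illustrative sketch: re-keying pose annotations from differently named
# datasets into one shared keypoint vocabulary before pre-training.
SUPER_KEYPOINTS = ["nose", "left_ear", "right_ear", "tail_base"]

# Per-dataset mapping from local label names to the shared vocabulary.
LABEL_MAPS = {
    "lab_A_mice": {"snout": "nose", "earL": "left_ear", "earR": "right_ear",
                   "tailbase": "tail_base"},
    "lab_B_mice": {"nose_tip": "nose", "ear_left": "left_ear",
                   "ear_right": "right_ear", "tail_root": "tail_base"},
}

def harmonize(dataset, frames):
    """Re-key each frame's annotations into the shared keypoint vocabulary."""
    mapping = LABEL_MAPS[dataset]
    return [{mapping[name]: xy for name, xy in frame.items() if name in mapping}
            for frame in frames]

# Two toy annotation files expressed in each lab's own naming convention.
lab_a = [{"snout": (12, 40), "earL": (9, 35), "earR": (15, 35), "tailbase": (30, 80)}]
lab_b = [{"nose_tip": (101, 22), "ear_left": (97, 18), "tail_root": (130, 60)}]

training_set = harmonize("lab_A_mice", lab_a) + harmonize("lab_B_mice", lab_b)
print(training_set)   # all frames now share one label space for pre-training
```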
    These advances will make motion analysis much more accessible. “Veterinarians could be particularly interested, as well as those in biomedical research — especially when it comes to observing the behavior of laboratory mice. But it can go further,” says Mackenzie Mathis, mentioning neuroscience and… athletes (canine or otherwise)! Other species — birds, fish, and insects — are also within the scope of the model’s next evolution. “We will also leverage these models in natural language interfaces to build even more accessible and next-generation tools. For example, Shaokai and I, along with our co-authors at EPFL, recently developed AmadeusGPT, published at NeurIPS, which allows for querying video data with written or spoken text. Expanding this for complex behavioral analysis will be very exciting.” SuperAnimal is now available to researchers worldwide through its open-source distribution. More

  • Controlling electronics with light: The magnetite breakthrough

    “Some time ago, we showed that it is possible to induce an inverse phase transition in magnetite,” says physicist Fabrizio Carbone at EPFL. “It’s as if you took water and you could turn it into ice by putting energy into it with a laser. This is counterintuitive as normally to freeze water you cool it down, i.e. remove energy from it.”
    Now, Carbone has led a research project to elucidate and control the microscopic structural properties of magnetite during such light-induced phase transitions. The study found that photoexciting the system with specific light wavelengths can drive magnetite into distinct non-equilibrium metastable states (“metastable” means that the state can change under certain conditions) called “hidden phases,” revealing a new protocol to manipulate material properties at ultrafast timescales. The findings, which could impact the future of electronics, are published in PNAS.
    What are “non-equilibrium states”? An “equilibrium state” is basically a stable state where a material’s properties do not change over time because the forces within it are balanced. When this is disrupted, the material (the “system,” to be accurate in terms of physics) is said to enter a non-equilibrium state, exhibiting properties that can border on the exotic and unpredictable.
    The “hidden phases” of magnetite
    A phase transition is a change in a material’s state, due to changes in temperature, pressure, or other external conditions. An everyday example is water going from solid ice to liquid or from liquid to gas when it boils.
    Phase transitions in materials usually follow predictable pathways under equilibrium conditions. But when materials are driven out of equilibrium, they can start showing so-called “hidden phases” — intermediate states that are not normally accessible. Observing hidden phases requires advanced techniques that can capture rapid and minute changes in the material’s structure.
    Magnetite (Fe3O4) is a well-studied material known for its intriguing metal-to-insulator transition at low temperatures — from being able to conduct electricity to actively blocking it. This is known as the Verwey transition, and it changes magnetite’s electronic and structural properties significantly.

    With its complex interplay of crystal structure, charge, and orbital orders, magnetite can undergo this metal-insulator transition at around 125 K.
    Ultrafast lasers induce hidden transitions in magnetite
    “To understand this phenomenon better, we did this experiment where we directly looked at the atomic motions happening during such a transformation,” says Carbone. “We found out that laser excitation takes the solid into some different phases that don’t exist in equilibrium conditions.”
    The experiments used two different wavelengths of light: near-infrared (800 nm) and visible (400 nm). When excited with 800 nm light pulses, the magnetite’s structure was disrupted, creating a mix of metallic and insulating regions. In contrast, 400 nm light pulses made the magnetite a more stable insulator.
    To monitor the structural changes in magnetite induced by laser pulses, the researchers used ultrafast electron diffraction, a technique that can “see” the movements of atoms in materials on sub-picosecond timescales (a picosecond is a trillionth of a second).
    The technique allowed the scientists to observe how the different wavelengths of laser light actually affect the structure of the magnetite on an atomic scale.

    Magnetite’s crystal structure is what is referred to as a “monoclinic lattice,” in which the unit cell is shaped like a skewed box with three unequal edges; two of its angles are 90 degrees while the third is not.
    When the 800 nm light shone on the magnetite, it induced a rapid compression of the magnetite’s monoclinic lattice, transforming it towards a cubic structure. This takes place in three stages over 50 picoseconds and suggests that complex dynamic interactions are happening within the material. Conversely, the 400 nm visible light caused the lattice to expand, reinforcing the monoclinic structure and creating a more ordered phase — a stable insulator.
    Fundamental implications and technological applications
    The study reveals that the electronic properties of magnetite can be controlled by selectively using different light wavelengths. Understanding these light-induced transitions provides valuable insights into the fundamental physics of strongly correlated systems.
    “Our study breaks ground for a novel approach to control matter at ultrafast timescale using tailored photon pulses,” write the researchers. Being able to induce and control hidden phases in magnetite could have significant implications for the development of advanced materials and devices. For instance, materials that can switch between different electronic states quickly and efficiently could be used in next-generation computing and memory devices. More

  • Scientists at uOttawa develop innovative method to validate quantum photonics circuit performance

    A team of researchers from the University of Ottawa’s Nexus for Quantum Technologies Institute (NexQT), led by Dr. Francesco Di Colandrea under the supervision of Professor Ebrahim Karimi, associate professor of physics, has developed an innovative technique for evaluating the performance of quantum circuits. This significant advancement, recently published in the journal npj Quantum Information, represents a substantial leap forward in the field of quantum computing.
    In the rapidly evolving landscape of quantum technologies, ensuring the functionality and reliability of quantum devices is critical. The ability to characterize these devices with high accuracy and speed is essential for their efficient integration into quantum circuits and computers, impacting both fundamental studies and practical applications.
    Characterization helps determine if a device operates as expected, which is necessary when devices exhibit anomalies or errors. Identifying and addressing these issues is crucial for advancing the development of future quantum technologies.
    Traditionally, scientists have relied on Quantum Process Tomography (QPT), a method that requires a large number of “projective measurements” to reconstruct a device’s operations fully. However, the number of required measurements in QPT scales quadratically with the dimensionality of the operations, posing significant experimental and computational challenges, especially for high-dimensional quantum information processors.
    The University of Ottawa research team has pioneered an optimized technique named Fourier Quantum Process Tomography (FQPT). This method allows for the complete characterization of quantum operations with a minimal number of measurements. Instead of performing a large number of projective measurements, FQPT utilises a well-known map, the Fourier transform, to perform a portion of the measurements in two different mathematical spaces. The physical relation between these spaces enhances the information extracted from single measurements, significantly reducing the number of measurements needed. For instance, for processes with dimensions 2d (where d can be arbitrarily high), only seven measurements are required.
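    A back-of-the-envelope comparison of those measurement counts, with the quadratic constant for standard QPT treated as an illustrative assumption:

```python
# Illustrative scaling comparison: standard QPT grows quadratically with the
# process dimension (proportionality constant assumed to be 1 here), while
# FQPT is reported to need only seven measurements for 2d-dimensional processes.
def qpt_measurements(dim):
    return dim ** 2          # quadratic scaling, constant factors omitted

def fqpt_measurements(dim):
    return 7                 # constant, per the reported 2d-dimensional case

for d in (1, 2, 4, 8, 16):
    dim = 2 * d
    print(f"dimension {dim:3d}: QPT ~ {qpt_measurements(dim):4d} measurements, "
          f"FQPT {fqpt_measurements(dim)}")
```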
    To validate their technique, the researchers conducted a photonic experiment using optical polarisation to encode a qubit. The quantum process was realized as a complex space-dependent polarisation transformation, leveraging state-of-the-art liquid-crystal technology. This experiment demonstrated the flexibility and robustness of the method.
    “The experimental validation is a fundamental step to probe the technique’s resilience to noise, ensuring robust and high-fidelity reconstructions in realistic experimental scenarios,” said Francesco Di Colandrea, a postdoctoral fellow at the University of Ottawa.
    This novel technique represents a remarkable advancement in quantum computing. The research team is already working on extending FQPT to arbitrary quantum operations, including non-Hermitian and higher-dimensional implementations, and on applying AI techniques to increase accuracy and reduce the number of measurements required. The approach opens a promising avenue for further advances in quantum technology. More

  • Changing climate will make home feel like somewhere else

    The impacts of climate change are being felt all over the world, but how will it impact how your hometown feels? An interactive web application from the University of Maryland Center for Environmental Science allows users to search 40,581 places and 5,323 metro areas around the globe to match the expected future climate in each city with the current climate of another location, providing a relatable picture of what is likely in store.
    You may have already experienced these changes where you live and may be wondering: What will the climate of the future be like where I live? How hot will summers be? Will it still snow in winter? And, perhaps, how might things change if we act to reduce emissions? This web application helps provide answers to these questions.
    For example, if you live in Washington, D.C., you would need to travel to northern Louisiana to experience what Washington, D.C., will feel like by 2080, where summers are expected to be 11.5°F warmer in 50 years. If you live in Shanghai, China, you would need to travel to northern Pakistan to experience what Shanghai’s climate could be like in 2080.
    “In 50 years, the northern hemisphere cities to the north are going to become much more like cities to the south. Everything is moving towards the equator in terms of the climate that’s coming for you,” said Professor Matthew Fitzpatrick. “And the closer you get to the equator there are fewer and fewer good matches for climates in places like Central America, south Florida, and northern Africa. There is no place on earth representative of what those places will be like in the future.”
    A spatial ecologist, Fitzpatrick used climate-analog mapping, a statistical technique that matches the expected future climate at one location — your city of residence, for instance — with the current climate of another familiar location, to provide a place-based understanding of climate change. He used the most recent available data from the Intergovernmental Panel on Climate Change (IPCC), the United Nations body for assessing the science related to climate change, to illustrate anticipated temperature changes over 30 years under two different scenarios.
    Because the answers to these questions depend on how the climate is expected to change, and the specific nature of those changes is uncertain, the app provides results for both high and reduced emissions scenarios, as well as for several different climate forecast models. You can map the best matches for your city under these different scenarios and models, as well as map the similarity between your city’s future climate and present climates everywhere (based on the average of the five forecasts for each emissions scenario).
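    The sketch below shows the climate-analog matching idea in miniature, not the actual app: a home city’s projected 2080 climate is averaged over several hypothetical forecast models and compared, via a scaled distance, with the present-day climate of candidate locations. All city names, climate variables and temperatures are illustrative assumptions.

```python
# Illustrative climate-analog matching with made-up numbers.
import numpy as np

# Present-day climate of candidate analog locations: (summer high F, winter low F)
present = {
    "Northern Louisiana": (93.0, 38.0),
    "Central Ohio":       (84.0, 22.0),
    "South Georgia":      (92.0, 40.0),
}

# Hypothetical 2080 projections for the home city from five forecast models,
# under one emissions scenario; the app averages across models.
home_city = "Washington, D.C."
projections_2080 = np.array([
    [92.5, 36.0], [94.0, 39.0], [91.0, 37.5], [93.5, 38.5], [92.0, 36.5],
])
future_mean = projections_2080.mean(axis=0)

def climate_distance(a, b, scale=(10.0, 10.0)):
    """Scaled Euclidean distance so each climate variable contributes comparably."""
    return float(np.linalg.norm((np.asarray(a) - np.asarray(b)) / np.asarray(scale)))

best = min(present, key=lambda city: climate_distance(future_mean, present[city]))
print(f"{home_city} in 2080 feels most like present-day {best}")
```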
    The first scenario that users can search is similar to our current trajectory and assumes very high greenhouse gas emissions, under which the planet is on track to warm around 9 degrees F by the end of this century. This scenario would make the earth warmer than it likely has been in millions of years. The second scenario is similar to what the planet would feel like if nations pursue the Paris Climate Accord goals by immediately and drastically reducing human-caused greenhouse gas emissions; under that scenario, the planet warms by about 3 degrees F.
    “I hope that it continues to inform the conversation about climate change. I hope it helps people better understand the magnitude of the impacts and why scientists are so concerned,” said Fitzpatrick.
    Further information: https://fitzlab.shinyapps.io/cityapp/ More

  • Sweat health monitor measures levels of disease markers

    A wearable health monitor developed by Washington State University researchers can reliably measure levels of important biochemicals in sweat during physical exercise.
    The 3D-printed monitor could someday provide a simple and non-invasive way to track health conditions and diagnose common diseases, such as diabetes, gout, kidney disease or heart disease.
    Reporting in the journal ACS Sensors, the researchers were able to accurately monitor the levels of volunteers’ glucose, lactate and uric acid, as well as their sweat rate, during exercise.
    “Diabetes is a major problem worldwide,” said Chuchu Chen, a WSU Ph.D. student and first author on the paper. “I think 3D printing can make a difference to the healthcare fields, and I wanted to see if we can combine 3D printing with disease detection methods to create a device like this.”
    The researchers made their proof-of-concept health monitor using 3D printing in a unique, one-step manufacturing process. They used a single-atom catalyst and enzymatic reactions to enhance the signal and measure low levels of the biomarkers. Three biosensors on the monitor change color to indicate the specific biochemical levels.
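    As a rough illustration of how a colorimetric readout could be turned into a concentration, the sketch below fits a calibration curve from standards of known concentration and inverts it for an unknown sample. It is not the WSU device’s actual calibration; the glucose standards, intensities and linear model are illustrative assumptions.

```python
# Illustrative colorimetric calibration: fit intensity vs. concentration on
# known standards, then invert the fit for an unknown sample reading.
import numpy as np

# Hypothetical calibration standards for the glucose sensor:
# known concentration (mM) vs. measured color-channel intensity (0-255).
conc_mM   = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
intensity = np.array([18.0, 40.0, 61.0, 105.0, 190.0])

# Fit intensity = slope * concentration + offset, then invert for unknowns.
slope, offset = np.polyfit(conc_mM, intensity, deg=1)

def estimate_concentration(measured_intensity):
    return (measured_intensity - offset) / slope

sample_intensity = 88.0                    # reading from the sensor patch
print(f"estimated glucose: {estimate_concentration(sample_intensity):.2f} mM")
```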
    Sweat contains many important metabolites that can indicate health conditions, but, unlike blood sampling, it’s non-invasive. Levels of uric acid in sweat can indicate the risk of developing gout, kidney disease or heart disease. Glucose levels are used to monitor diabetes, and lactate levels can indicate exercise intensity.
    “Sweat rate is also an important parameter and physiological indicator for people’s health,” said Kaiyan Qiu, Berry Assistant Professor in WSU’s School of Mechanical and Materials Engineering.

    But the amount of these chemicals in sweat is tiny and can be hard to measure, the researchers noted. While other sweat sensors have been developed, they are complex and need specialized equipment and expertise to fabricate. The sensors also have to be flexible and stretchable.
    “It’s novel to use single-atom catalysts to enhance the sensitivity and accuracy of the health monitor,” said Annie Du, research professor in WSU’s School of Mechanical and Materials Engineering. Qiu and Du led the study.
    The health monitor the researchers developed uses very tiny channels to measure the sweat rate and the biomarkers’ concentrations. Because they are fabricated via 3D printing, the micro-channels don’t require any supporting structure, which can cause contamination problems when removed, said Qiu.
    “We need to measure the tiny concentrations of biomarkers, so we don’t want these supporting materials to be present or to have to remove them,” he said. “That’s why we’re using a unique method to print the self-supporting microfluidic channels.”
    When the researchers compared the monitors on volunteers’ arms to lab results, they found that their monitor was accurately and reliably measuring the concentration of the chemicals as well as the sweating rate.
    While the researchers initially chose three biomarkers to measure, they can add more, and the biomarkers can be customized. The monitors were also comfortable for volunteers to wear.
    The researchers are now working to further improve the device’s design and validation, and they hope to commercialize the technology. The WSU Office of Commercialization has filed a provisional patent application to protect the intellectual property associated with this technology.
    The work was funded by the National Science Foundation and the Centers for Disease Control and Prevention, as well as through startup funds. More