More stories

  •

    International collaboration lays the foundation for future AI for materials

    Artificial intelligence (AI) is accelerating the development of new materials. A prerequisite for AI in materials research is large-scale use and exchange of data on materials, which is facilitated by a broad international standard. A major international collaboration now presents an extended version of the OPTIMADE standard.
    New technologies in areas such as energy and sustainability, involving for example batteries, solar cells, LED lighting and biodegradable materials, require new materials. Many researchers around the world are working to create materials that have never existed before. But it is a major challenge to create materials with exactly the required properties, such as being free of environmentally hazardous substances while at the same time being durable enough not to break down.
    “We’re now seeing an explosive development where researchers in materials science are adopting AI methods from other fields and also developing their own models to use in materials research. Using AI to predict properties of different materials opens up completely new possibilities,” says Rickard Armiento, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University in Sweden.
    Today, many demanding simulations are performed on supercomputers that describe how electrons move in materials, which gives rise to different material properties. These advanced calculations yield large amounts of data that can be used to train machine learning models.
    These AI models can then immediately predict the responses to new calculations that have not yet been made, and by extension predict the properties of new materials. But huge amounts of data are required to train the models.
    “We’re moving into an era where we want to train models on all data that exist,” says Rickard Armiento.
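    To make the idea concrete, here is a minimal sketch of training a property-prediction model on simulation-style data; the three descriptors, the choice of a ridge regressor, and the random placeholder values are illustrative assumptions rather than the pipeline of any particular group.
    ```python
    # Minimal sketch: fit a regressor that predicts a material property from
    # simple composition descriptors, standing in for models trained on large
    # databases of quantum-mechanical calculations. The descriptors and the
    # random placeholder data are for illustration only.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder features per material: e.g. mean atomic number, mean
    # electronegativity, number of elements (all scaled to [0, 1] here).
    X = rng.uniform(size=(1000, 3))
    # Placeholder target standing in for a computed property such as a band gap.
    y = 1.8 * X[:, 0] - 0.9 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(scale=0.05, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = Ridge(alpha=1.0).fit(X_train, y_train)

    # The trained model can now score new candidate materials without running
    # a fresh simulation for each one.
    print("held-out R^2:", round(model.score(X_test, y_test), 3))
    ```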
    Data from large-scale simulations, and general data about materials, are collected in large databases. Over time, many such databases have emerged from different research groups and projects, like isolated islands in the sea. They work differently and often use properties that are defined in different ways.
    “Researchers at universities or in industry who want to map materials on a large scale or want to train an AI model must retrieve information from these databases. Therefore, a standard is needed so that users can communicate with all these data libraries and understand the information they receive,” says Gian-Marco Rignanese, professor at the Institute of Condensed Matter and Nanosciences at UCLouvain in Belgium.
    The OPTIMADE (Open Databases Integration for Materials Design) standard has been developed over the past eight years. Behind this standard is a large international network with over 30 institutions worldwide and large materials databases in Europe and the USA. The aim is to give users easier access to both leading and lesser-known materials databases. A new version of the standard, v1.2, is now being released, and is described in an article published in the journal Digital Discovery. One of the biggest changes in the new version is a greatly enhanced ability to describe different material properties and other data accurately, using common, well-founded definitions.
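    As a concrete illustration of what the standard gives a user, the sketch below sends a single OPTIMADE query over HTTP and reads back matching structures. The /v1/structures route, the filter grammar and the attribute names follow the published standard, while the provider URL is just one example endpoint chosen here as an assumption.
    ```python
    # Minimal sketch: query an OPTIMADE-compliant materials database over HTTP.
    # Any endpoint implementing the standard accepts the same filter grammar;
    # the base URL below is one public provider, used purely as an example.
    import requests

    BASE = "https://optimade.materialsproject.org"  # example provider (assumption)
    params = {
        # OPTIMADE filter language: ternary compounds containing Al and O
        "filter": 'elements HAS ALL "Al","O" AND nelements=3',
        "page_limit": 5,
    }
    resp = requests.get(f"{BASE}/v1/structures", params=params, timeout=30)
    resp.raise_for_status()

    for entry in resp.json()["data"]:
        attrs = entry["attributes"]
        print(entry["id"], attrs.get("chemical_formula_reduced"))
    ```
    Because every OPTIMADE provider speaks this same query language, the identical request can be pointed at other participating databases simply by changing the base URL.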
    The international collaboration spans the EU, the UK, the US, Mexico, Japan and China, together with institutions such as École Polytechnique Fédérale de Lausanne (EPFL), University of California, Berkeley, University of Cambridge, Northwestern University, Duke University, Paul Scherrer Institut, and Johns Hopkins University. Much of the collaboration takes place in annual workshops funded by CECAM (Centre Européen de Calcul Atomique et Moléculaire) in Switzerland, with the first one funded by the Lorentz Center in the Netherlands. Other activities have been supported by the organisation Psi-k, the competence centre NCCR MARVEL in Switzerland, and the Swedish e-Science Research Centre (SeRC). The researchers in the collaboration are supported by many different funding bodies. More

  •

    Researchers engineer AI path to prevent power outages

    University of Texas at Dallas researchers have developed an artificial intelligence (AI) model that could help electrical grids prevent power outages by automatically rerouting electricity in milliseconds.
    The UT Dallas researchers, who collaborated with engineers at the University at Buffalo in New York, demonstrated the automated system in a study published online June 4 in Nature Communications.
    The approach is an early example of “self-healing grid” technology, which uses AI to detect problems such as outages and repair them autonomously, without human intervention, when issues such as storm-damaged power lines occur.
    The North American grid is an extensive, complex network of transmission and distribution lines, generation facilities and transformers that distributes electricity from power sources to consumers.
    Using various scenarios in a test network, the researchers demonstrated that their solution can automatically identify alternative routes to transfer electricity to users before an outage occurs. AI has the advantage of speed: The system can automatically reroute electrical flow in microseconds, while current human-controlled processes to determine alternate paths could take from minutes to hours.
    “Our goal is to find the optimal path to send power to the majority of users as quickly as possible,” said Dr. Jie Zhang, associate professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science. “But more research is needed before this system can be implemented.”
    Zhang, who is co-corresponding author of the study, and his colleagues used technology that applies machine learning to graphs in order to map the complex relationships between entities that make up a power distribution network. Graph machine learning involves describing a network’s topology, the way the various components are arranged in relation to each other and how electricity moves through the system.

    Network topology also may play a critical role in applying AI to solve problems in other complex systems, such as critical infrastructure and ecosystems, said study co-author Dr. Yulia Gel, professor of mathematical sciences in the School of Natural Sciences and Mathematics.
    “In this interdisciplinary project, by leveraging our team expertise in power systems, mathematics and machine learning, we explored how we can systematically describe various interdependencies in the distribution systems using graph abstractions,” Gel said. “We then investigated how the underlying network topology, integrated into the reinforcement learning framework, can be used for more efficient outage management in the power distribution system.”
    The researchers’ approach relies on reinforcement learning that makes the best decisions to achieve optimal results. Led by co-corresponding author Dr. Souma Chowdhury, associate professor of mechanical and aerospace engineering, University at Buffalo researchers focused on the reinforcement learning aspect of the project.
    If electricity is blocked due to line faults, the system is able to reconfigure using switches and draw power from available sources in close proximity, such as from large-scale solar panels or batteries on a university campus or business, said Roshni Anna Jacob, a UTD electrical engineering doctoral student and the paper’s co-first author.
    “You can leverage those power generators to supply electricity in a specific area,” Jacob said.
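    A toy sketch of that reconfiguration idea is shown below: the feeder is modeled as a graph of buses and lines, a fault disconnects some loads, and the code picks which normally-open tie switch to close so that every bus can again reach a source. The specific network, switch list and greedy selection are invented for illustration; the actual study layers reinforcement learning and power-flow constraints on top of this kind of graph abstraction.
    ```python
    # Toy sketch: a small distribution feeder as a graph. After a line fault,
    # choose which normally-open tie switch to close so that every load bus is
    # reconnected to a source (the substation or a local solar/battery microgrid).
    # Power-flow limits and the study's reinforcement-learning policy are ignored.
    import networkx as nx

    grid = nx.Graph()
    grid.add_edges_from([("substation", "bus1"), ("bus1", "bus2"), ("bus2", "bus3")])
    grid.add_node("solar_microgrid")                    # local source, initially islanded
    tie_switches = [("bus3", "solar_microgrid"),        # normally-open switches
                    ("bus1", "solar_microgrid")]
    sources = {"substation", "solar_microgrid"}

    def unserved(g):
        """Load buses that cannot reach any source in graph g."""
        return {n for n in g.nodes
                if n not in sources
                and not any(s in g and nx.has_path(g, n, s) for s in sources)}

    grid.remove_edge("bus1", "bus2")                    # fault trips the bus1-bus2 line
    print("unserved after fault:", unserved(grid))

    def with_switch(g, switch):
        candidate = g.copy()
        candidate.add_edge(*switch)
        return candidate

    # Greedy choice: close the tie switch that leaves the fewest unserved buses.
    best = min(tie_switches, key=lambda sw: len(unserved(with_switch(grid, sw))))
    grid.add_edge(*best)
    print("closed tie switch:", best, "unserved now:", unserved(grid))
    ```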
    Having focused first on preventing outages, the researchers next aim to develop similar technology to repair and restore the grid after a power disruption. More

  •

    Novel blood-powered chip offers real-time health monitoring

    Metabolic disorders such as diabetes and osteoporosis are becoming increasingly common throughout the world, especially in developing countries.
    These disorders are typically diagnosed with a blood test, but because the existing healthcare infrastructure in remote areas cannot support such tests, most individuals go undiagnosed and untreated. Conventional methods also involve labor-intensive, invasive and time-consuming processes that make real-time monitoring unfeasible, particularly in everyday settings and in rural populations.
    Researchers at the University of Pittsburgh and University of Pittsburgh Medical Center are proposing a new device that uses blood to generate electricity and measure its conductivity, opening doors to medical care in any location.
    “As the fields of nanotechnology and microfluidics continue to advance, there is a growing opportunity to develop lab-on-a-chip devices capable of overcoming the constraints of modern medical care,” said Amir Alavi, assistant professor of civil and environmental engineering at Pitt’s Swanson School of Engineering. “These technologies could potentially transform healthcare by offering quick and convenient diagnostics, ultimately improving patient outcomes and the effectiveness of medical services.”
    Now, We Got Good Blood
    Blood electrical conductivity is a valuable metric for assessing various health parameters and detecting medical conditions.
    This conductivity is predominantly governed by the concentration of essential electrolytes, notably sodium and chloride ions. These electrolytes are integral to a multitude of physiological processes, helping physicians pinpoint a diagnosis.

    “Blood is basically a water-based environment that has various molecules that conduct or impede electric currents,” explained Dr. Alan Wells, the medical director of UPMC Clinical Laboratories, Executive Vice Chairman, Section of Laboratory Medicine at University of Pittsburgh and UPMC, and Thomas Gill III Professor of Pathology, Pitt School of Medicine, Department of Pathology. “Glucose, for example, is an electrical conductor. We are able to see how it affects conductivity through these measurements, thus allowing us to make a diagnosis on the spot.”
    Despite its importance, knowledge of human blood conductivity is limited because of measurement challenges such as electrode polarization, limited access to human blood samples, and the complexity of maintaining blood temperature. Measuring conductivity at frequencies below 100 Hz is particularly important for gaining a deeper understanding of the blood’s electrical properties and fundamental biological processes, but is even more difficult.
    A Pocket-Sized Lab
    The research team is proposing an innovative, portable millifluidic nanogenerator lab-on-a-chip device capable of measuring blood conductivity at low frequencies. The device utilizes blood as a conductive substance within its integrated triboelectric nanogenerator, or TENG. The proposed blood-based TENG system can convert mechanical energy into electricity via triboelectrification.
    This process involves the exchange of electrons between contacting materials, resulting in a charge transfer. In a TENG system, the electron transfer and charge separation generate a voltage difference that drives electric current when the materials experience relative motion like compression or sliding. The team analyzes the voltage generated by the device under predefined loading conditions to determine the electrical conductivity of the blood. The self-powering mechanism enables miniaturization of the proposed blood-based nanogenerator. The team also used AI models to directly estimate blood electrical conductivity using the voltage patterns generated by the device.
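    As a rough sketch of that last step, the code below turns each voltage trace into a few summary features and fits a regressor that maps them to conductivity. The simulated traces, the feature choices and the model are assumptions made for illustration, not the authors’ data or architecture.
    ```python
    # Minimal sketch: estimate electrical conductivity from voltage traces of a
    # (hypothetical) blood-based TENG using a learned regressor. The synthetic
    # traces, the features and the model are placeholders for illustration.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)
    n_samples, trace_len = 200, 500
    t = np.linspace(0, 1, trace_len)

    conductivity = rng.uniform(0.5, 1.5, size=n_samples)   # target (S/m), placeholder
    # Placeholder physics: higher conductivity -> lower voltage amplitude, plus noise.
    traces = (1.0 / conductivity)[:, None] * np.sin(2 * np.pi * 5 * t)[None, :]
    traces += rng.normal(scale=0.05, size=traces.shape)

    def features(v):
        """Summarize one voltage trace: peak, RMS and peak-to-peak amplitude."""
        return [np.max(v), np.sqrt(np.mean(v ** 2)), np.ptp(v)]

    X = np.array([features(v) for v in traces])
    model = GradientBoostingRegressor().fit(X[:150], conductivity[:150])
    print("held-out R^2:", round(model.score(X[150:], conductivity[150:]), 3))
    ```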
    To test the device’s accuracy, the team compared its results with those of a traditional test, and the comparison proved successful. This opens the door to taking such testing to where people live. In addition, blood-powered nanogenerators are capable of functioning in the body wherever blood is present, enabling self-powered diagnostics based on the local blood chemistry. More

  •

    Prying open the AI black box

    Artificial intelligence continues to squirm its way into many aspects of our lives. But what about biology, the study of life itself? AI can sift through hundreds of thousands of genome data points to identify potential new therapeutic targets. While these genomic insights may appear helpful, scientists aren’t sure how today’s AI models come to their conclusions in the first place. Now, a new system named SQUID arrives on the scene armed to pry open AI’s black box of murky internal logic.
    SQUID, short for Surrogate Quantitative Interpretability for Deepnets, is a computational tool created by Cold Spring Harbor Laboratory (CSHL) scientists. It’s designed to help interpret how AI models analyze the genome. Compared with other analysis tools, SQUID is more consistent, reduces background noise, and can lead to more accurate predictions about the effects of genetic mutations.
    How does it work so much better? The key, CSHL Assistant Professor Peter Koo says, lies in SQUID’s specialized training.
    “The tools that people use to try to understand these models have been largely coming from other fields like computer vision or natural language processing. While they can be useful, they’re not optimal for genomics. What we did with SQUID was leverage decades of quantitative genetics knowledge to help us understand what these deep neural networks are learning,” explains Koo.
    SQUID works by first generating a library of over 100,000 variant DNA sequences. It then analyzes the library of mutations and their effects using a program called MAVE-NN (Multiplex Assays of Variant Effects Neural Network). This tool allows scientists to perform thousands of virtual experiments simultaneously. In effect, they can “fish out” the algorithms behind a given AI’s most accurate predictions. Their computational “catch” could set the stage for experiments that are more grounded in reality.
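    The sketch below illustrates the surrogate-modeling idea in a heavily simplified form: mutagenize a wild-type sequence, score the variants with a stand-in “black-box” scorer, and fit an additive surrogate whose weights expose per-position effects. The toy scorer, the small library size and the ridge surrogate are assumptions for illustration; SQUID itself builds libraries of over 100,000 variants and fits MAVE-NN surrogate models.
    ```python
    # Heavily simplified sketch of surrogate-based interpretation: generate mutant
    # DNA sequences around a wild type, score them with a black-box model (a toy
    # stand-in below), then fit an additive surrogate whose coefficients indicate
    # which positions and bases drive the predictions.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    BASES = "ACGT"
    wild_type = "ACGTACGTACGT"
    L = len(wild_type)

    def one_hot(seq):
        x = np.zeros((L, 4))
        for i, b in enumerate(seq):
            x[i, BASES.index(b)] = 1.0
        return x.ravel()

    toy_weights = rng.normal(size=(8, L * 4))   # fixed random "weights" of the stand-in net

    def black_box(seq):
        """Toy stand-in for a trained genomic deep net (not a real model)."""
        return float(np.tanh(toy_weights @ one_hot(seq)).sum())

    # 1) In-silico mutagenesis library around the wild type (two point mutations each).
    library = []
    for _ in range(5000):
        seq = list(wild_type)
        for pos in rng.choice(L, size=2, replace=False):
            seq[pos] = str(rng.choice(list(BASES)))
        library.append("".join(seq))

    # 2) Score the library with the black box, then 3) fit an additive surrogate.
    X = np.array([one_hot(s) for s in library])
    y = np.array([black_box(s) for s in library])
    surrogate = Ridge(alpha=1.0).fit(X, y)

    # Per-position, per-base effects ("attribution map") read off the surrogate.
    effects = surrogate.coef_.reshape(L, 4)
    print("most influential position:", int(np.abs(effects).sum(axis=1).argmax()))
    ```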
    “In silico [virtual] experiments are no replacement for actual laboratory experiments. Nevertheless, they can be very informative. They can help scientists form hypotheses for how a particular region of the genome works or how a mutation might have a clinically relevant effect,” explains CSHL Associate Professor Justin Kinney, a co-author of the study.
    There are tons of AI models in the sea. More enter the waters each day. Koo, Kinney, and colleagues hope that SQUID will help scientists grab hold of those that best meet their specialized needs.
    Though mapped, the human genome remains an incredibly challenging terrain. SQUID could help biologists navigate the field more effectively, bringing them closer to their findings’ true medical implications. More

  •

    Unifying behavioral analysis through animal foundation models

    Behavioral analysis can provide a lot of information about the health status or motivations of a living being. A new technology developed at EPFL makes it possible for a single deep learning model to detect animal motion across many species and environments. This “foundation model,” called SuperAnimal, can be used for animal conservation, biomedicine, and neuroscience research.
    Although there is the saying, “straight from the horse’s mouth,” it’s impossible to get a horse to tell you if it’s in pain or experiencing joy. Yet, its body will express the answer in its movements. To a trained eye, pain will manifest as a change in gait, or in the case of joy, the facial expressions of the animal could change. But what if we could automate this with AI? And what about AI models for cows, dogs, cats, or even mice? Automating the analysis of animal behavior not only removes observer bias, it also helps humans reach the right answer more efficiently.
    Today marks the beginning of a new chapter in posture analysis for behavioral phenotyping. Mackenzie Mathis’ laboratory at EPFL publishes a Nature Communications article describing a particularly effective new open-source tool that requires no human annotations to get the model to track animals. Named “SuperAnimal,” it can automatically recognize, without human supervision, the location of “keypoints” (typically joints) in a whole range of animals — over 45 animal species — and even in mythical ones!
    “The current pipeline allows users to tailor deep learning models, but this then relies on human effort to identify keypoints on each animal to create a training set,” explains Mackenzie Mathis. “This leads to duplicated labeling efforts across researchers and can result in different semantic labels for the same keypoints, making merging data to train large foundation models very challenging. Our new method provides a new way to standardize this process and to train models on large-scale datasets. It also makes labeling 10 to 100 times more effective than current tools.”
    The “SuperAnimal method” is an evolution of a pose estimation technique that Mackenzie Mathis’ laboratory had already distributed under the name “DeepLabCut™.” You can read more about this game-changing tool and its origin in this new Nature technology feature.
    “Here, we have developed an algorithm capable of compiling a large set of annotations across databases and train the model to learn a harmonized language — we call this pre-training the foundation model,” explains Shaokai Ye, a PhD student researcher and first author of the study. “Then users can simply deploy our base model or fine-tune it on their own data, allowing for further customization if needed.”
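    To make the “harmonized language” step concrete, here is a toy sketch of remapping lab-specific keypoint names onto one shared vocabulary before pooling annotations. The dataset names and mappings are invented for illustration; the released SuperAnimal/DeepLabCut code is the place to look for the actual pipeline.
    ```python
    # Toy sketch of annotation harmonization: remap each lab's keypoint names onto
    # one shared vocabulary so that annotations from many datasets can be pooled to
    # pre-train a single pose "foundation model". Names and mappings are invented.
    SHARED_VOCAB = ["nose", "left_ear", "right_ear", "tail_base"]

    # Per-dataset dictionaries: lab-specific name -> shared name (hypothetical).
    KEYPOINT_MAPS = {
        "lab_A_mice": {"snout": "nose", "earL": "left_ear",
                       "earR": "right_ear", "tailroot": "tail_base"},
        "lab_B_mice": {"nose_tip": "nose", "ear_left": "left_ear",
                       "ear_right": "right_ear", "tail_start": "tail_base"},
    }

    def harmonize(dataset, annotation):
        """Convert one frame's {lab-specific name: (x, y)} dict to the shared vocabulary."""
        mapping = KEYPOINT_MAPS[dataset]
        return {mapping[name]: xy for name, xy in annotation.items() if name in mapping}

    # Two frames from different labs end up with identical keys and can be merged.
    frame_a = harmonize("lab_A_mice", {"snout": (10, 12), "earL": (8, 9)})
    frame_b = harmonize("lab_B_mice", {"nose_tip": (22, 30), "tail_start": (40, 55)})
    print(frame_a)
    print(frame_b)
    ```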
    These advances will make motion analysis much more accessible. “Veterinarians could be particularly interested, as well as those in biomedical research — especially when it comes to observing the behavior of laboratory mice. But it can go further,” says Mackenzie Mathis, mentioning neuroscience and… athletes (canine or otherwise)! Other species — birds, fish, and insects — are also within the scope of the model’s next evolution. “We will also leverage these models in natural language interfaces to build even more accessible and next-generation tools. For example, Shaokai and I, along with our co-authors at EPFL, recently developed AmadeusGPT, published at NeurIPS, which allows for querying video data with written or spoken text. Expanding this for complex behavioral analysis will be very exciting.” SuperAnimal is now available to researchers worldwide through its open-source distribution. More

  •

    Controlling electronics with light: The magnetite breakthrough

    “Some time ago, we showed that it is possible to induce an inverse phase transition in magnetite,” says physicist Fabrizio Carbone at EPFL. “It’s as if you took water and you could turn it into ice by putting energy into it with a laser. This is counterintuitive as normally to freeze water you cool it down, i.e. remove energy from it.”
    Now, Carbone has led a research project to elucidate and control the microscopic structural properties of magnetite during such light-induced phase transitions. The study found that photoexcitation with specific light wavelengths can drive magnetite into distinct non-equilibrium metastable states (“metastable” meaning that the state can change under certain conditions) called “hidden phases,” revealing a new protocol for manipulating material properties on ultrafast timescales. The findings, which could impact the future of electronics, are published in PNAS.
    What are “non-equilibrium states”? An “equilibrium state” is basically a stable state where a material’s properties do not change over time because the forces within it are balanced. When this is disrupted, the material (the “system,” to be accurate in terms of physics) is said to enter a non-equilibrium state, exhibiting properties that can border on the exotic and unpredictable.
    The “hidden phases” of magnetite
    A phase transition is a change in a material’s state, due to changes in temperature, pressure, or other external conditions. An everyday example is water going from solid ice to liquid or from liquid to gas when it boils.
    Phase transitions in materials usually follow predictable pathways under equilibrium conditions. But when materials are driven out of equilibrium, they can start showing so-called “hidden phases” — intermediate states that are not normally accessible. Observing hidden phases requires advanced techniques that can capture rapid and minute changes in the material’s structure.
    Magnetite (Fe₃O₄) is a well-studied material known for its intriguing metal-to-insulator transition at low temperatures, in which it goes from conducting electricity to blocking it. This is known as the Verwey transition, and it significantly changes magnetite’s electronic and structural properties.

    With its complex interplay of crystal structure, charge, and orbital orders, magnetite can undergo this metal-insulator transition at around 125 K.
    Ultrafast lasers induce hidden transitions in magnetite
    “To understand this phenomenon better, we did this experiment where we directly looked at the atomic motions happening during such a transformation,” says Carbone. “We found out that laser excitation takes the solid into some different phases that don’t exist in equilibrium conditions.”
    The experiments used two different wavelengths of light: near-infrared (800 nm) and visible (400 nm). When excited with 800 nm light pulses, the magnetite’s structure was disrupted, creating a mix of metallic and insulating regions. In contrast, 400 nm light pulses made the magnetite a more stable insulator.
    To monitor the structural changes in magnetite induced by laser pulses, the researchers used ultrafast electron diffraction, a technique that can “see” the movements of atoms in materials on sub-picosecond timescales (a picosecond is a trillionth of a second).
    The technique allowed the scientists to observe how the different wavelengths of laser light actually affect the structure of the magnetite on an atomic scale.

    Magnetite’s crystal structure is what is referred to as a “monoclinic lattice,” where the unit cell is shaped like a skewed box, with three unequal edges, and two of its angles are 90 degrees while the third is different.
    When the 800 nm light shone on the magnetite, it induced a rapid compression of the magnetite’s monoclinic lattice, transforming it towards a cubic structure. This takes place in three stages over 50 picoseconds and suggests that there are complex dynamic interactions happening within the material. Conversely, the 400 nm visible light caused the lattice to expand, reinforcing the monoclinic structure and creating a more ordered phase — a stable insulator.
    Fundamental implications and technological applications
    The study reveals that the electronic properties of magnetite can be controlled by selectively using different light wavelengths. Understanding these light-induced transitions provides valuable insights into the fundamental physics of strongly correlated systems.
    “Our study breaks ground for a novel approach to control matter at ultrafast timescale using tailored photon pulses,” write the researchers. Being able to induce and control hidden phases in magnetite could have significant implications for the development of advanced materials and devices. For instance, materials that can switch between different electronic states quickly and efficiently could be used in next-generation computing and memory devices. More

  •

    A heat dome is baking the United States. Here’s why that’s so dangerous

    June is the new July. Or maybe even August. At least it feels that way, as summer heat has already soared to record highs.

    In the United States, West Coast residents sweltered earlier in the month as a high-pressure weather system called a heat dome trapped record-breaking high temperatures over the region (SN: 7/19/23). Now, another heat dome is bringing a wave of extreme heat to swaths of the Midwest and East Coast, with temperatures forecast to reach close to 38° Celsius (100° Fahrenheit) in many cities. More

  •

    Scientists at uOttawa develop innovative method to validate quantum photonics circuits performance

    A team of researchers from the University of Ottawa’s Nexus for Quantum Technologies Institute (NexQT), led by Dr. Francesco Di Colandrea under the supervision of Professor Ebrahim Karimi, associate professor of physics, has developed an innovative technique for evaluating the performance of quantum circuits. This significant advancement, recently published in the journal npj Quantum Information, represents a substantial leap forward in the field of quantum computing.
    In the rapidly evolving landscape of quantum technologies, ensuring the functionality and reliability of quantum devices is critical. The ability to characterize these devices with high accuracy and speed is essential for their efficient integration into quantum circuits and computers, impacting both fundamental studies and practical applications.
    Characterization helps determine if a device operates as expected, which is necessary when devices exhibit anomalies or errors. Identifying and addressing these issues is crucial for advancing the development of future quantum technologies.
    Traditionally, scientists have relied on Quantum Process Tomography (QPT), a method that requires a large number of “projective measurements” to reconstruct a device’s operations fully. However, the number of required measurements in QPT scales quadratically with the dimensionality of the operations, posing significant experimental and computational challenges, especially for high-dimensional quantum information processors.
    The University of Ottawa research team has pioneered an optimized technique named Fourier Quantum Process Tomography (FQPT). This method allows for the complete characterization of quantum operations with a minimal number of measurements. Instead of performing a large number of projective measurements, FQPT utilises a well-known map, the Fourier transform, to perform a portion of the measurements in two different mathematical spaces. The physical relation between these spaces enhances the information extracted from single measurements, significantly reducing the number of measurements needed. For instance, for processes of dimension 2d (where d can be arbitrarily high), only seven measurements are required.
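    As a generic illustration of measurements in two Fourier-related “spaces” (not the paper’s actual reconstruction algorithm), the snippet below builds the d = 2 discrete Fourier transform and compares an example qubit state’s statistics in the computational basis and in the Fourier-transformed basis; the state shown is uniform in one basis but deterministic in the other, which is the kind of complementary information that measuring in both spaces provides.
    ```python
    # Generic illustration: for a qubit, the "Fourier" (conjugate) measurement basis
    # is related to the computational basis by the 2x2 discrete Fourier transform.
    # Statistics gathered in both bases carry complementary information; this is a
    # toy example, not the FQPT reconstruction procedure itself.
    import numpy as np

    d = 2
    F = np.array([[np.exp(-2j * np.pi * j * k / d) for k in range(d)]
                  for j in range(d)]) / np.sqrt(d)

    psi = np.array([1.0, 1.0]) / np.sqrt(2)       # example qubit state (|0> + |1>)/sqrt(2)

    p_computational = np.abs(psi) ** 2            # statistics in the computational basis
    p_fourier = np.abs(F @ psi) ** 2              # statistics in the Fourier basis

    print("computational-basis probabilities:", np.round(p_computational, 3))
    print("Fourier-basis probabilities:      ", np.round(p_fourier, 3))
    ```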
    To validate their technique, the researchers conducted a photonic experiment using optical polarisation to encode a qubit. The quantum process was realized as a complex space-dependent polarisation transformation, leveraging state-of-the-art liquid-crystal technology. This experiment demonstrated the flexibility and robustness of the method.
    “The experimental validation is a fundamental step to probe the technique’s resilience to noise, ensuring robust and high-fidelity reconstructions in realistic experimental scenarios,” said Francesco Di Colandrea, a postdoctoral fellow at the University of Ottawa.
    This novel technique represents a remarkable advance in quantum computing. The research team is already working on extending FQPT to arbitrary quantum operations, including non-Hermitian and higher-dimensional implementations, and on applying AI techniques to increase accuracy and reduce the number of measurements required, opening a promising avenue for further advances in quantum technology. More