More stories

  • Using the power of artificial intelligence, new open-source tool simplifies animal behavior analysis

    A team from the University of Michigan has developed a new software tool to help researchers across the life sciences more efficiently analyze animal behaviors. 
    The open-source software, LabGym, capitalizes on artificial intelligence to identify, categorize and count defined behaviors across various animal model systems.
    Scientists need to measure animal behaviors for a variety of reasons, from understanding all the ways a particular drug may affect an organism to mapping how circuits in the brain communicate to produce a particular behavior.
    Researchers in the lab of U-M faculty member Bing Ye, for example, analyze movements and behaviors in Drosophila melanogaster—or fruit flies—as a model to study the development and functions of the nervous system. Because fruit flies and humans share many genes, these studies of fruit flies often offer insights into human health and disease.
    “Behavior is a function of the brain. So analyzing animal behavior provides essential information about how the brain works and how it changes in response to disease,” said Yujia Hu, a neuroscientist in Ye’s lab at the U-M Life Sciences Institute and lead author of a Feb. 24 Cell Reports Methods study describing the new software.  
    But identifying and counting animal behaviors manually is time-consuming and highly subjective, varying with the researcher doing the analysis. And while a few software programs exist to automatically quantify animal behaviors, they present challenges.

    “Many of these behavior analysis programs are based on pre-set definitions of a behavior,” said Ye, who is also a professor of cell and developmental biology at the Medical School. “If a Drosophila larva rolls 360 degrees, for example, some programs will count a roll. But why isn’t 270 degrees also a roll? Many programs don’t necessarily have the flexibility to count that, without the user knowing how to recode the program.”
    Thinking more like a scientist
     
    To overcome these challenges, Hu and his colleagues decided to design a new program that more closely replicates the human cognitive process—one that “thinks” more like a scientist would—and is more user-friendly for biologists who may not have expertise in coding. Using LabGym, researchers can input examples of the behavior they want to analyze and teach the software what it should count. The program then uses deep learning to improve its ability to recognize and quantify the behavior.
    One new development in LabGym that helps it apply this more flexible cognition is the use of both video data and a so-called “pattern image” to improve the program’s reliability. Scientists use videos of animals to analyze their behavior, but videos involve time series data that can be challenging for AI programs to analyze.

    To help the program identify behaviors more easily, Hu created a still image that shows the pattern of the animal’s movement by merging outlines of the animal’s position at different timepoints. The team found that combining the video data with the pattern images increased the program’s accuracy in recognizing behavior types.
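    The pattern-image idea is simple enough to sketch in code. Below is a minimal illustration (not LabGym’s actual implementation) of how an animal’s outlines from successive frames might be merged into one still image, with later time points drawn brighter; it assumes OpenCV 4 and NumPy, a list of grayscale frames, and an animal that stands out against a dark background.

        import cv2
        import numpy as np

        def pattern_image(frames, thresh=50):
            """Merge per-frame animal outlines into one image; brighter = later."""
            canvas = np.zeros(frames[0].shape, dtype=np.uint8)
            for t, frame in enumerate(frames):
                # Separate the animal from the dark background (simple global threshold).
                _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
                contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                # Encode the time point as outline brightness: early dim, late bright.
                intensity = int(255 * (t + 1) / len(frames))
                cv2.drawContours(canvas, contours, -1, intensity, thickness=1)
            return canvas

    A static summary of this kind complements the raw video, in line with the team’s finding that combining the two improved the program’s accuracy.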
    LabGym is also designed to overlook irrelevant background information and consider both the animal’s overall movement and the changes in position over space and time, much as a human researcher would. The program can also track multiple animals simultaneously.
    Species flexibility improves utility
     
    Another key feature of LabGym is its species flexibility, Ye said. While it was designed using Drosophila, it is not restricted to any one species.
    “That’s actually rare,” he said. “It’s written for biologists, so they can adapt it to the species and the behavior they want to study without needing any programming skills or high-powered computing.”
    After hearing a presentation about the program’s early development, U-M pharmacologist Carrie Ferrario offered to help Ye and his team test and refine the program in the rodent model system she works with.
    Ferrario, an associate professor of pharmacology and adjunct associate professor of psychology, studies the neural mechanisms that contribute to addiction and obesity, using rats as a model system. To complete the necessary observation of drug-induced behaviors in the animals, she and her lab members have had to rely largely on hand-scoring, which is subjective and extremely time-consuming.
    “I’ve been trying to solve this problem since graduate school, and the technology just wasn’t there, in terms of artificial intelligence, deep learning and computation,” Ferrario said. “This program solved an existing problem for me, but it also has really broad utility. I see the potential for it to be useful in almost limitless conditions to analyze animal behavior.”
    The team next plans to further refine the program to improve its performance under even more complex conditions, such as observing animals in nature.
    This research was supported by the National Institutes of Health.
    In addition to Ye, Hu and Ferrario, study authors are: Alexander Maitland, Rita Ionides, Anjesh Ghimire, Brendon Watson, Kenichi Iwasaki, Hope White and Yitao Xi of the University of Michigan, and Jie Zhou of Northern Illinois University.
    Study: LabGym: Quantification of user-defined animal behaviors using learning-based holistic assessment (DOI: 10.1016/j.crmeth.2023.100415) (available once embargo lifts)

  • A new chip for decoding data transmissions demonstrates record-breaking energy efficiency

    Imagine using an online banking app to deposit money into your account. Like all information sent over the internet, those communications could be corrupted by noise that inserts errors into the data.
    To overcome this problem, senders encode data before they are transmitted, and then a receiver uses a decoding algorithm to correct errors and recover the original message. In some instances, data are received with reliability information that helps the decoder figure out which parts of a transmission are likely errors.
    Researchers at MIT and elsewhere have developed a decoder chip that employs a new statistical model to use this reliability information in a way that is much simpler and faster than conventional techniques.
    Their chip uses a universal decoding algorithm the team previously developed, which can unravel any error-correcting code. Typically, decoding hardware can only process one particular type of code. This new, universal decoder chip has broken the record for energy-efficient decoding, performing between 10 and 100 times better than other hardware.
    This advance could enable mobile devices with fewer chips, since they would no longer need separate hardware for multiple codes. This would reduce the amount of material needed for fabrication, cutting costs and improving sustainability. By making the decoding process less energy intensive, the chip could also improve device performance and lengthen battery life. It could be especially useful for demanding applications like augmented and virtual reality and 5G networks.
    “This is the first time anyone has broken below the 1 picojoule-per-bit barrier for decoding. That is roughly the same amount of energy you need to transmit a bit inside the system. It had been a big symbolic threshold, but it also changes the balance in the receiver of what might be the most pressing part from an energy perspective — we can move that away from the decoder to other elements,” says Muriel Médard, the School of Science NEC Professor of Software Science and Engineering, a professor in the Department of Electrical Engineering and Computer Science, and a co-author of a paper presenting the new chip.

    Médard’s co-authors include lead author Arslan Riaz, a graduate student at Boston University (BU); Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at BU; and Ken R. Duffy, then director of the Hamilton Institute at Maynooth University and now a professor at Northeastern University, as well as others from MIT, BU, and Maynooth University. The work is being presented at the International Solid-State Circuits Conference.
    Smarter sorting
    Digital data are transmitted over a network in the form of bits (0s and 1s). A sender encodes data by adding an error-correcting code, which is a redundant string of 0s and 1s that can be viewed as a hash. Information about this hash is held in a specific code book. A decoding algorithm at the receiver, designed for this particular code, uses its code book and the hash structure to retrieve the original information, which may have been jumbled by noise. Since each algorithm is code-specific, and most require dedicated hardware, a device would need many chips to decode different codes.
    The researchers previously demonstrated GRAND (Guessing Random Additive Noise Decoding), a universal decoding algorithm that can crack any code. GRAND works by guessing the noise that affected the transmission, subtracting that noise pattern from the received data, and then checking what remains in a code book. It guesses a series of noise patterns in the order they are likely to occur.
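    As a toy illustration of this guessing loop (a sketch of the idea, not the team’s algorithm or hardware), the function below tries noise patterns lightest-first, since fewer bit flips are more likely; here in_codebook is a hypothetical stand-in for any code-membership test, such as checking that a linear code’s syndrome is zero.

        import itertools

        def grand_decode(received_bits, in_codebook, max_weight=3):
            """Guess noise patterns lightest-first; return the first codebook hit."""
            n = len(received_bits)
            for weight in range(max_weight + 1):
                # Every way `weight` bit flips could have corrupted the transmission.
                for positions in itertools.combinations(range(n), weight):
                    candidate = list(received_bits)
                    for i in positions:
                        candidate[i] ^= 1        # "subtract" the guessed noise
                    if in_codebook(candidate):   # codebook hit: decoding done
                        return candidate
            return None                          # no code word within max_weight flips

    With a trivial even-parity code, for instance, in_codebook = lambda bits: sum(bits) % 2 == 0, a received word that already satisfies parity is returned unchanged at weight zero, while a corrupted one gets its flips guessed one bit at a time.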
    Data are often received with reliability information, also called soft information, that helps a decoder figure out which pieces are errors. The new decoding chip, called ORBGRAND (Ordered Reliability Bits GRAND), uses this reliability information to sort data based on how likely each bit is to be an error.

    But it isn’t as simple as ordering single bits. While the most unreliable bit might be the likeliest error, perhaps the third and fourth most unreliable bits together are as likely to be an error as the seventh-most unreliable bit. ORBGRAND uses a new statistical model that can sort bits in this fashion, considering that multiple bits together are as likely to be an error as some single bits.
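    One way to picture this sorting (a sketch of the idea, not the chip’s logic): rank the bits from least to most reliable, then score each candidate set of flips by the sum of its ranks, so that flipping the third- and fourth-most-unreliable bits (3 + 4 = 7) ties with flipping the seventh-most-unreliable bit alone, exactly the situation described above.

        import itertools

        def orbgrand_order(reliabilities, max_patterns=20):
            """Yield flip-sets (bit indices) in order of summed reliability rank."""
            n = len(reliabilities)
            # Bit indices sorted least-reliable-first; rank 1 = most suspect bit.
            by_suspicion = sorted(range(n), key=lambda i: reliabilities[i])
            scored = []
            for r in range(1, n + 1):
                for ranks in itertools.combinations(range(1, n + 1), r):
                    scored.append((sum(ranks), ranks))
            scored.sort()                # ascending total rank = most likely first
            for _, ranks in scored[:max_patterns]:
                yield [by_suspicion[k - 1] for k in ranks]

    A real decoder generates this sequence on the fly rather than enumerating and sorting all 2^n patterns, but the order it walks through is the same: flip the most suspect bit, then the second, then the first two together or the third alone (a tie), and so on.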
    “If your car isn’t working, soft information might tell you that it is probably the battery. But if it isn’t the battery alone, maybe it is the battery and the alternator together that are causing the problem. This is how a rational person would troubleshoot — you’d say that it could actually be these two things together before going down the list to something that is much less likely,” Médard says.
    This is a much more efficient approach than that of traditional decoders, which instead look at the code structure and have performance generally designed for the worst case.
    “With a traditional decoder, you’d pull out the blueprint of the car and examine each and every piece. You’ll find the problem, but it will take you a long time and you’ll get very frustrated,” Médard explains.
    ORBGRAND stops sorting as soon as a code word is found, which is often very soon. The chip also employs parallelization, generating and testing multiple noise patterns simultaneously so it finds the code word faster. Because the decoder stops working once it finds the code word, its energy consumption stays low even though it runs multiple processes simultaneously.
    Record-breaking efficiency
    When the researchers compared their approach with other chips, ORBGRAND decoded with maximum accuracy while consuming only 0.76 picojoules of energy per bit, breaking the previous performance record. ORBGRAND consumes between 10 and 100 times less energy than other devices.
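    To put that number in context (a back-of-the-envelope estimate, not a figure from the paper): at 0.76 picojoules per bit, decoding a 1-gigabit-per-second stream would draw roughly 0.76 milliwatts (0.76 × 10⁻¹² joules per bit × 10⁹ bits per second), a negligible slice of a mobile device’s power budget.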
    One of the biggest challenges of developing the new chip came from this reduced energy consumption, Médard says. With ORBGRAND, generating noise sequences is now so energy-efficient that other processes the researchers hadn’t focused on before, like checking the code word in a code book, consume most of the effort.
    “Now, this checking process, which is like turning on the car to see if it works, is the hardest part. So, we need to find more efficient ways to do that,” she says.
    The team is also exploring ways to change the modulation of transmissions so they can take advantage of the improved efficiency of the ORBGRAND chip. They also plan to see how their technique could be utilized to more efficiently manage multiple transmissions that overlap.
    The research is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and Science Foundation Ireland.

  • MoBIE enables modern microscopy with massive data sets

    High-resolution microscopy techniques, such as electron microscopy or super-resolution microscopy, produce huge amounts of data. The visualization, analysis and dissemination of such large imaging data sets pose significant challenges. Now, these tasks can be carried out using MoBIE, which stands for Multimodal Big Image Data Exploration, a new user-friendly, freely available tool developed by researchers from the University of Göttingen and EMBL Heidelberg. This means that researchers such as biologists, who rely on high-resolution microscopy techniques, can incorporate multiple data sets to study the processes of life at the very smallest scales. Their method has now been published in Nature Methods.


  • Let there be (controlled) light

    In the very near future, quantum computers are expected to revolutionize the way we compute, with new approaches to database searches, AI systems, simulations and more. But to achieve such novel quantum technology applications, photonic integrated circuits that can effectively control photonic quantum states — the so-called qubits — are needed. Physicists from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), TU Dresden and Leibniz-Institut für Kristallzüchtung (IKZ) have made a breakthrough in this effort: for the first time, they have demonstrated the controlled creation of single-photon emitters in silicon at the nanoscale, as they report in Nature Communications.
    Photonic integrated circuits, or in short, PICs, utilize particles of light, better known as photons, as opposed to electrons that run in electronic integrated circuits. The main difference between the two: A photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the near infrared spectrum. “Actually, these PICs with many integrated photonic components are able to generate, route, process and detect light on a single chip,” says Dr. Georgy Astakhov, Head of Quantum Technologies at HZDR’s Institute of Ion Beam Physics and Materials Research, and adds: “This modality is poised to play a key role in upcoming future technology, such as quantum computing. And PICs will lead the way.”
    Until now, quantum photonics experiments were notorious for their massive use of “bulk optics” distributed across the optical table and occupying the entire lab. Now, photonic chips are radically changing this landscape. Miniaturization, stability and suitability for mass production might turn them into the workhorse of modern-day quantum photonics.
    From random to control mode
    Monolithic integration of single-photon sources in a controllable way would give a resource-efficient route to implement millions of photonic qubits in PICs. To run quantum computation protocols, these photons must be indistinguishable. With this, industrial-scale photonic quantum processor production would become feasible.
    However, the currently established fabrication method stands in the way of the compatibility of this promising concept with today’s semiconductor technology.
    In a first attempt reported about two years ago, the researchers were already able to generate single photons on a silicon wafer, but only in a random and non-scalable way. Since then, they have come far. “Now, we show how focused ion beams from liquid metal alloy ion sources are used to place single-photon emitters at desired positions on the wafer while obtaining a high creation yield and high spectral quality,” says physicist Dr. Nico Klingner.
    Furthermore, the scientists at HZDR subjected the same single-photon emitters to a rigorous material testing program: After several cooling-down and warming-up cycles, they did not observe any degradation of their optical properties. These findings meet the preconditions required for mass production later on.
    To translate this achievement into a widespread technology, and allow for wafer-scale engineering of individual photon emitters on the atomic scale compatible with established foundry manufacturing, the team implemented broad-beam implantation in a commercial implanter through a lithographically defined mask. “This work really allowed us to take advantage of the state-of-the-art silicon processing cleanroom and electron beam lithography machines at the Nano Fabrication facility Rossendorf,” explains Dr. Ciarán Fowley, Cleanroom group leader and Head of Nanofabrication and Analysis.
    Using both methods, the team can create dozens of telecom single-photon emitters at predefined locations with a spatial accuracy of about 50 nm. They emit in the strategically important telecommunication O-band and exhibit stable operation over days under continuous-wave excitation.
    The scientists are convinced that the realization of controllable fabrication of single-photon emitters in silicon makes them a highly promising candidate for photonic quantum technologies, with a fabrication pathway compatible with very large-scale integration. These single-photon emitters are now technologically ready for production in semiconductor fabs and incorporation into the existing telecommunication infrastructure.

  • Theory can sort order from chaos in complex quantum systems

    It’s not easy to make sense of quantum-scale motion, but a new mathematical theory developed by scientists at Rice University and Oxford University could help — and may provide insight into improving a variety of computing, electrochemical and biological systems.
    The theory developed by Rice theorist Peter Wolynes and Oxford theoretical chemist David Logan gives a simple prediction for the threshold at which large quantum systems switch from orderly motion like a clock to random, erratic motion like asteroids moving around in the early solar system. Using a computational analysis of a photosynthesis model, collaborators at the University of Illinois Urbana-Champaign showed that the theory can predict the nature of the motions in a chlorophyll molecule when it absorbs energy from sunlight.
    The theory applies to any sufficiently complex quantum system and may give insights into building better quantum computers. It could also, for instance, help design features of next-generation solar cells or perhaps make batteries last longer.
    The study is published this week in the Proceedings of the National Academy of Sciences.
    Nothing is ever completely still on the molecular level, especially when quantum physics plays a role. A water droplet gleaming on a leaf may look motionless, but inside, over a sextillion molecules are vibrating nonstop. Hydrogen and oxygen atoms and the subatomic particles within them — the nuclei and electrons — constantly move and interact.
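    (A quick sanity check on that figure, assuming a droplet of roughly 50 microliters: 0.05 g of water ÷ 18 g/mol × 6.022 × 10²³ molecules/mol ≈ 1.7 × 10²¹ molecules, comfortably over a sextillion.)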
    “In thinking about the motions of individual molecules at quantum scale, there is often this comparison to the way we think of the solar system,” Wolynes said. “You learn that there are eight planets in our solar system, each one with a well-defined orbit. But in fact, the orbits interact with each other. Nevertheless, the orbits are very predictable. You can go to a planetarium, and they’ll show you what the sky looked like 2,000 years ago. A lot of the motions of the atoms in molecules are exactly that regular or clocklike.”
    When Wolynes and Logan first posed the question of predicting the regularity or randomness of quantum motion, they tested their math against observations of vibrational motions in individual molecules.

    “You only have to know two things about a molecule to be able to analyze its quantum motion patterns,” Wolynes said. “First, you need to know the vibrational frequencies of its particles, that is, the frequencies at which the vibrations (which are like the orbits) occur; and second, how these vibrations nonlinearly interact with each other. These anharmonic interactions depend mostly on the mass of atoms. For organic molecules, you can predict how strongly those vibrational orbits would interact with one another.”
    Things are more complicated when the molecules also dramatically change structure, for instance as a result of a chemical reaction.
    “As soon as we start looking at molecules that chemically react or rearrange their structure, we know that there’s at least some element of unpredictability or randomness in the process because, even in classical terms, the reaction either happens, or it doesn’t happen,” Wolynes said. “When we try to understand how chemical changes occur, there’s this question: Is the overall motion more clocklike or is it more irregular?”
    Aside from their nonstop vibrations, which happen without light, electrons can have quantum-level interactions that sometimes lead to a more dramatic turn.
    “Because they’re very light, electrons normally move thousands of times faster than the centers of the atoms, the nuclei,” he said. “So though they are constantly moving, the electrons’ orbits smoothly adjust to what the nuclei do. But every now and again, the nuclei come to a place where the electronic energies will almost be equal whether the excitation is on one molecule or on the other. That’s what’s called a surface crossing. At that point, the excitation has a chance to jump from one electronic level to another.”
    Predicting the point at which the energy transfer that takes place during photosynthesis turns from orderly motion to randomness or dissipation would take significant time and effort by direct computation.

    “It is very nice that we have a very simple formula that determines when this happens,” said Martin Gruebele, a chemist at the University of Illinois Urbana-Champaign and co-author on the study who is a part of the joint Rice-Illinois Center for Adapting Flaws into Features (CAFF) funded by the National Science Foundation. “That’s something we just didn’t have before and figuring it out required very lengthy calculations.”
    The Logan-Wolynes theory opens up a wide array of scientific inquiry ranging from the theoretical exploration of the fundamentals of quantum mechanics to practical applications.
    “The Logan-Wolynes theory did pretty well in terms of telling you at roughly what energy input you’d get a change in quantum-system behavior,” Wolynes said. “But one of the interesting things that the large-scale computations of (co-author Chenghao) Zhang and Gruebele found is that there are these exceptions that stand out from all the possible orbiting patterns you might have. Occasionally there are a few stragglers where simple motions persist for long times and don’t seem to get randomized. One of the questions we’re going to pursue in the future is how much that persistent regularity is actually influencing processes like photosynthesis.
    “Another direction that is being pursued at Rice where this theory can help is the problem of making a quantum computer that behaves as much as possible in a clocklike way,” he said. “You don’t want your computers to be randomly changing information. The larger and more sophisticated you make a computer, the likelier it is that you’ll run into some kind of randomization effects.”
    Gruebele and collaborators at Illinois also plan to use these ideas in other scientific contexts. “One of our goals, for instance, is to design better human-built light-harvesting molecules that might consist of carbon dots that can transfer the energy to their periphery where it can be harvested,” Gruebele said.
    Wolynes is Rice’s Bullard-Welch Foundation Professor of Science and a professor of chemistry, of biochemistry and cell biology, of physics and astronomy and of materials science and nanoengineering and co-director of its Center for Theoretical Biological Physics (CTBP), which is funded by the National Science Foundation. Logan is the Coulson Professor of Theoretical Chemistry at Oxford. Gruebele is the James R. Eiszner Endowed Chair in Chemistry and Zhang is a graduate student in physics at the University of Illinois Urbana-Champaign.
    The James R. Eiszner Chair in Chemistry and the Physics Department at Illinois, the Bullard-Welch Chair at Rice (C-0016) and the National Science Foundation (PHY-2019745) supported the research.

  • The quantum twisting microscope: A new lens on quantum materials

    One of the striking aspects of the quantum world is that a particle, say, an electron, is also a wave, meaning that it exists in many places at the same time. In a new study, reported today in Nature, researchers from the Weizmann Institute of Science make use of this property to develop a new type of tool — the quantum twisting microscope (QTM) — that can create novel quantum materials while simultaneously gazing into the most fundamental quantum nature of their electrons. The study’s findings may be used to create electronic materials with unprecedented functionalities.
    The QTM involves the “twisting,” or rotating, of two atomically thin layers of material with respect to one another. In recent years, such twisting has become a major source of discoveries. It began with the discovery that placing two layers of graphene, one-atom-thick crystalline sheets of carbon, one atop the other with a slight relative twist angle leads to a “sandwich” with unexpected new properties. The twist angle turned out to be the most critical parameter for controlling the behavior of electrons: Changing it by merely one-tenth of a degree could transform the material from an exotic superconductor into an unconventional insulator. But critical as it is, this parameter is also the hardest to control in experiments. By and large, twisting two layers to a new angle requires building a new “sandwich” from scratch, a process that is very long and tedious.
    “Our original motivation was to solve this problem by building a machine that could continuously twist any two materials with respect to one another, readily producing an infinite range of novel materials,” says team leader Prof. Shahal Ilani of Weizmann’s Condensed Matter Physics Department. “However, while building this machine, we discovered that it can also be turned into a very powerful microscope, capable of seeing quantum electronic waves in ways that were unimaginable before.”
    Creating a quantum picture
    Pictures have long played a central role in scientific discovery. Light microscopes and telescopes routinely provide images that allow scientists to gain a deeper understanding of biological and astrophysical systems. Taking pictures of electrons inside materials, on the other hand, has for many years been notoriously hard, owing to the small dimensions involved. This was transformed some 40 years ago with the invention of the scanning tunneling microscope, which earned its developers the 1986 Nobel Prize in Physics. This microscope uses an atomically sharp needle to scan the surface of a material, measuring the electric current and gradually building an image of the distribution of electrons in the sample.
    “Many different scanning probes have been developed since this invention, each measuring a different electronic property, but all of them measure these properties at one location at a time. So, they mostly see electrons as particles, and can only indirectly learn about their wave nature,” explains Prof. Ady Stern from the Weizmann Institute, who took part in the study along with three other theoretical physicists from the same department: Profs. Binghai Yan, Yuval Oreg and Erez Berg. “As it turned out, the tool that we have built can visualize the quantum electronic waves directly, giving us a way to unravel the quantum dances they perform inside the material,” Stern says.

    Spotting an electron in several places at once
    “The trick for seeing quantum waves is to spot the same electron in different locations at the same time,” says Alon Inbar, a lead author on the paper. “The measurement is conceptually similar to the famous two-slit experiment, which was used a century ago to prove for the first time that electrons in quantum mechanics have a wave nature,” adds Dr. John Birkbeck, another lead author. “The only difference is that we perform such an experiment at the tip of our scanning microscope.”
    To achieve this, the researchers replaced the atomically sharp tip of the scanning tunneling microscope with a tip that contains a flat layer of a quantum material, such as a single layer of graphene. When this layer is brought into contact with the surface of the sample of interest, it forms a two-dimensional interface across which electrons can tunnel at many different locations. Quantum mechanically, they tunnel in all locations simultaneously, and the tunneling events at different locations interfere with each other. This interference allows an electron to tunnel only if its wave functions on both sides of the interface match exactly. “To see a quantum electron, we have to be gentle,” says Ilani. “If we don’t ask it the rude question ‘Where are you?’ but instead provide it with multiple routes to cross into our detector without us knowing where it actually crossed, we allow it to preserve its fragile wave-like nature.”
    Twist and tunnel
    Generally, the electronic waves in the tip and the sample propagate in different directions and therefore do not match. The QTM uses its twisting capability to find the angle at which matching occurs: By continuously twisting the tip with respect to the sample, the tool causes their corresponding wave functions to also twist with respect to one another. Once these wave functions match on both sides of the interface, tunneling can occur. The twisting therefore allows the QTM to map how the electronic wave function depends on momentum, similarly to the way lateral translations of the tip enable the mapping of its dependence on position. Merely knowing at which angles electrons cross the interface supplies the researchers with a great deal of information about the probed material. In this manner they can learn about the collective organization of electrons within the sample, their speed, energy distribution, patterns of interference and even the interactions of different waves with one another.
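    The momentum-matching picture lends itself to a toy numerical model. The sketch below (an illustration of the principle with made-up parameters, not the Weizmann group’s analysis) represents the tip’s and the sample’s electronic states as rings of wavevectors, rotates the tip’s ring through a range of twist angles, and scores how well the two rings line up; peaks in the score mark angles at which tunneling would be allowed.

        import numpy as np

        def fermi_circle(radius, center, n=200):
            """A ring of 2D wavevectors approximating a Fermi circle."""
            phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
            return center + radius * np.stack([np.cos(phi), np.sin(phi)], axis=1)

        def matching_score(k_tip, k_sample, theta, sigma=0.02):
            """Score how many tip wavevectors, rotated by theta, land near sample ones."""
            c, s = np.cos(theta), np.sin(theta)
            k_rot = k_tip @ np.array([[c, -s], [s, c]]).T
            # Pairwise momentum mismatch; a narrow Gaussian enforces near-exact matching.
            d2 = ((k_rot[:, None, :] - k_sample[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2)).sum()

        # Hypothetical tip and sample rings centered away from the zone center,
        # so rotation genuinely misaligns them except at the matching angle.
        k_tip = fermi_circle(0.1, center=np.array([1.0, 0.0]))
        k_sample = fermi_circle(0.1, center=np.array([1.0, 0.0]))
        angles = np.radians(np.linspace(-10, 10, 401))
        scores = [matching_score(k_tip, k_sample, t) for t in angles]
        print(f"best match at {np.degrees(angles[int(np.argmax(scores))]):.2f} degrees")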
    A new twist on quantum materials
    “Our microscope will give scientists a new kind of ‘lens’ for observing and measuring the properties of quantum materials,” says Jiewen Xiao, another lead author.
    The Weizmann team has already applied their microscope to studying the properties of several key quantum materials at room temperature and is now gearing up toward doing new experiments at temperatures of a few kelvins, where some of the most exciting quantum mechanical effects are known to take place.
    Peering so deeply into the quantum world can help reveal fundamental truths about nature. In the future, it might also have a tremendous effect on emerging technologies. The QTM will provide researchers with access to an unprecedented spectrum of new quantum interfaces, as well as new “eyes” for discovering quantum phenomena within them.