More stories

  •

    A cocktail party of 3D-printed robot heads

    Imagine a cocktail party full of 3D-printed, humanoid robots listening and talking to each other. That seemingly sci-fi scene is the goal of the Augmented Listening Laboratory at the University of Illinois Urbana-Champaign. Realistic talking (and listening) heads are crucial for investigating how humans receive sound and developing audio technology.
    The team will describe the talking human head simulators in their presentation, “3D-printed acoustic head simulators that talk and move,” on May 8, Eastern U.S. time, in the Northwestern/Ohio State room of the Chicago Marriott Downtown Magnificent Mile Hotel. The talk is part of the 184th Meeting of the Acoustical Society of America, running May 8-12.
    Algorithms used to improve human hearing must consider the acoustic properties of the human head. For example, hearing aids adjust the sound received at each ear to create a more realistic listening experience. For the adjustment to succeed, an algorithm must realistically assess the difference in arrival time and in amplitude of the sound at each ear.
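    The interaural cues described above can be approximated with a simple spherical-head model. The sketch below is only an illustration, not the lab's actual algorithm: the head radius and the Woodworth-style formula are standard textbook assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C
HEAD_RADIUS = 0.0875    # m, a common spherical-head approximation

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth spherical-head estimate of the ITD, in seconds.

    azimuth_deg: source angle from straight ahead (0) toward one ear (90).
    """
    theta = math.radians(azimuth_deg)
    # Far-field path-length difference between the ears: a * (theta + sin(theta)).
    return HEAD_RADIUS * (theta + math.sin(theta)) / SPEED_OF_SOUND

# A source directly to one side arrives about 0.66 ms earlier at the near ear.
print(f"{interaural_time_difference(90.0) * 1e3:.2f} ms")  # → 0.66 ms
```

    An algorithm like the one the article describes would compare such model predictions against the delays actually measured at each microphone-fitted ear.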
    It is important to study human listening in natural environments, like cocktail parties, where many conversations occur at once.
    “Simulating realistic scenarios for conversation enhancement often requires hours of recording with human subjects. The entire process can be exhausting for the subjects, and it is extremely hard for a subject to remain perfectly still in between and during recordings, which affects the measured acoustic pressures,” said Austin Lu, a student member of the team. “Acoustic head simulators can overcome both drawbacks. They can be used to create large data sets with continuous recording and are guaranteed to remain still.”
    Since researchers have precise control over the simulated subject, they can adjust the parameters of the experiment and even set the machines in motion to simulate neck movements.
    In a feat of design and engineering, the heads are 3D-printed as separate components and assembled, enabling customization at low cost. The highly detailed ears are fitted with microphones at different positions to simulate both human hearing and Bluetooth earpieces. The “talkbox,” or mouthlike loudspeaker, closely mimics human vocals. To facilitate motion, the researchers paid special attention to the neck. Because the 3D model of the head design is open source, other teams can download and modify it as needed. The diminishing cost of 3D printing means there is a relatively low barrier to fabricating these heads.
    “Our acoustic head project is the culmination of the work done by many students with highly varied technical backgrounds,” said Manan Mittal, a graduate researcher with the team. “Projects like this are due to interdisciplinary research that requires engineers to work with designers.”
    The Augmented Listening Laboratory has also created wheeled and pulley-driven systems to simulate walking and more complex motion.

  •

    An unprecedented view of gene regulation

    Much of the human genome is made of regulatory regions that control which genes are expressed at a given time within a cell. Those regulatory elements can be located near a target gene or up to 2 million base pairs away from the target.
    To enable those interactions, the genome loops itself in a 3D structure that brings distant regions close together. Using a new technique, MIT researchers have shown that they can map these interactions with 100 times higher resolution than has previously been possible.
    “Using this method, we generate the highest-resolution maps of the 3D genome that have ever been generated, and what we see are a lot of interactions between enhancers and promoters that haven’t been seen previously,” says Anders Sejr Hansen, the Underwood-Prescott Career Development Assistant Professor of Biological Engineering at MIT and the senior author of the study. “We are excited to be able to reveal a new layer of 3D structure with our high resolution.”
    The researchers’ findings suggest that many genes interact with dozens of different regulatory elements, although further study is needed to determine which of those interactions are the most important to the regulation of a given gene.
    “Researchers can now affordably study the interactions between genes and their regulators, opening a world of possibilities not just for us but also for dozens of labs that have already expressed interest in our method,” says Viraat Goel, an MIT graduate student and one of the lead authors of the paper. “We’re excited to bring the research community a tool that helps them disentangle the mechanisms driving gene regulation.”
    MIT postdoc Miles Huseyin is also a lead author of the paper, which appears today in Nature Genetics.

    High-resolution mapping
    Scientists estimate that more than half of the genome consists of regulatory elements that control genes, which make up only about 2 percent of the genome. Genome-wide association studies, which link genetic variants with specific diseases, have identified many variants that appear in these regulatory regions. Determining which genes these regulatory elements interact with could help researchers understand how those diseases arise and, potentially, how to treat them.
    Discovering those interactions requires mapping which parts of the genome interact with each other when chromosomes are packed into the nucleus. Chromosomes are organized into structural units called nucleosomes — strands of DNA tightly wound around proteins — helping the chromosomes fit within the small confines of the nucleus.
    Over a decade ago, a team that included researchers from MIT developed a method called Hi-C, which revealed that the genome is organized as a “fractal globule,” which allows the cell to tightly pack its DNA while avoiding knots. This architecture also allows the DNA to easily unfold and refold when needed.
    To perform Hi-C, researchers use restriction enzymes to chop the genome into many small pieces and biochemically link pieces that are near each other in 3D space within the cell’s nucleus. They then determine the identities of the interacting pieces by amplifying and sequencing them.
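    Conceptually, the sequenced pairs of linked fragments are binned into a symmetric contact matrix whose entries count how often two genomic regions were found close together in 3D space. A minimal sketch of that bookkeeping, with made-up coordinates rather than real Hi-C data:

```python
from collections import defaultdict

def contact_matrix(read_pairs, bin_size):
    """Count interaction frequencies between genomic bins.

    read_pairs: iterable of (pos_a, pos_b) fragment pairs that were
    ligated because they sat close together in 3D space.
    """
    counts = defaultdict(int)
    for a, b in read_pairs:
        i, j = sorted((a // bin_size, b // bin_size))
        counts[(i, j)] += 1  # matrix is symmetric; store the upper triangle
    return dict(counts)

# Toy example: three read pairs binned at 10 kb resolution.
pairs = [(12_000, 18_000), (12_500, 95_000), (13_000, 19_999)]
print(contact_matrix(pairs, bin_size=10_000))  # → {(1, 1): 2, (1, 9): 1}
```

    The bin size sets the resolution: smaller bins resolve finer interactions but demand many more reads to fill the matrix.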

    While Hi-C reveals a great deal about the overall 3D organization of the genome, it has limited resolution to pick out specific interactions between genes and regulatory elements such as enhancers. Enhancers are short sequences of DNA that can help to activate the transcription of a gene by binding to the gene’s promoter — the site where transcription begins.
    To achieve the resolution necessary to find these interactions, the MIT team built on a more recent technology called Micro-C, which was invented by researchers at the University of Massachusetts Medical School, led by Stanley Hsieh and Oliver Rando. Micro-C was first applied in budding yeast in 2015 and subsequently applied to mammalian cells in three papers in 2019 and 2020 by researchers including Hansen, Hsieh, Rando and others at University of California at Berkeley and at UMass Medical School.
    Micro-C achieves higher resolution than Hi-C by using an enzyme known as micrococcal nuclease to chop up the genome. Hi-C’s restriction enzymes cut the genome only at specific DNA sequences that are randomly distributed, resulting in DNA fragments of varying and larger sizes. By contrast, micrococcal nuclease uniformly cuts the genome into nucleosome-sized fragments, each of which contains 150 to 200 DNA base pairs. This uniformity of small fragments grants Micro-C its superior resolution over Hi-C.
    However, since Micro-C surveys the entire genome, this approach still doesn’t achieve high enough resolution to identify the types of interactions the researchers wanted to see. For example, if you want to look at how 100 different genome sites interact with each other, you need to sequence at least 100 multiplied by 100 times, or 10,000. The human genome is very large and contains around 22 million sites at nucleosome resolution. Therefore, Micro-C mapping of the entire human genome would require at least 22 million multiplied by 22 million sequencing reads, costing more than $1 billion.
    To bring that cost down, the team devised a way to perform more targeted sequencing of the genome’s interactions, allowing them to focus on segments of the genome that contain genes of interest. By focusing on regions spanning a few million base pairs, the number of possible genomic sites decreases a thousandfold and the sequencing cost decreases a millionfold, down to about $1,000. The new method, called Region Capture Micro-C (RCMC), can therefore generate maps 100 times richer in information than other published techniques, at a fraction of the cost.
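    The cost argument in the two paragraphs above is simply the quadratic scaling of pairwise measurements: cutting the number of sites a thousandfold cuts the required reads a millionfold. A quick check of the arithmetic:

```python
# Pairwise interactions grow quadratically with the number of sites,
# so the required sequencing reads (and cost) do too.
genome_sites = 22_000_000             # nucleosome-resolution sites, whole genome
region_sites = genome_sites // 1_000  # a targeted region: ~1,000x fewer sites

whole_genome_reads = genome_sites ** 2
region_reads = region_sites ** 2

# A thousandfold drop in sites gives a millionfold drop in reads.
print(whole_genome_reads // region_reads)  # → 1000000
```
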
    “Now we have a method for getting ultra-high-resolution 3D genome structure maps in a very affordable manner. Previously, it was so inaccessible financially because you would need millions, if not billions of dollars, to get high resolution,” Hansen says. “The one limitation is that you can’t get the whole genome, so you need to know approximately what region you’re interested in, but you can get very high resolution, very affordably.”
    Many interactions
    In this study, the researchers focused on five regions varying in size from hundreds of thousands to about 2 million base pairs, which they chose due to interesting features revealed by previous studies. Those include a well-characterized gene called Sox2, which plays a key role in tissue formation during embryonic development.
    After capturing and sequencing the DNA segments of interest, the researchers found many enhancers that interact with Sox2, as well as interactions between nearby genes and enhancers that were previously unseen. In other regions, especially those full of genes and enhancers, some genes interacted with as many as 50 other DNA segments, and on average each interacting site contacted about 25 others.
    “People have seen multiple interactions from one bit of DNA before, but it’s usually on the order of two or three, so seeing this many of them was quite significant in terms of difference,” Huseyin says.
    However, the researchers’ technique doesn’t reveal whether all of those interactions occur simultaneously or at different times, or which of those interactions are the most important.
    The researchers also found that DNA appears to coil itself into nested “microcompartments” that facilitate these interactions, but they weren’t able to determine how microcompartments form. The researchers hope that further study into the underlying mechanisms could shed light on the fundamental question of how genes are regulated.
    “Even though we’re not currently aware of what may be causing these microcompartments, and we have all these open questions in front of us, we at least have a tool to really stringently ask those questions,” Goel says.
    In addition to pursuing those questions, the MIT team also plans to work with researchers at Boston Children’s Hospital to apply this type of analysis to genomic regions that have been linked with blood disorders in genome-wide association studies. They are also collaborating with researchers at Harvard Medical School to study variants linked to metabolic disorders.
    The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the National Institutes of Health, the National Science Foundation, a Solomon Buchsbaum Research Support Committee Award, the Koch Institute Frontier Research Fund, an NIH Fellowship and an EMBO Fellowship.

  •

    Leaky-wave metasurfaces: A perfect interface between free-space and integrated optical systems

    Researchers at Columbia Engineering have developed a new class of integrated photonic devices — “leaky-wave metasurfaces” — that can convert light initially confined in an optical waveguide to an arbitrary optical pattern in free space. These devices are the first to demonstrate simultaneous control of all four optical degrees of freedom, namely, amplitude, phase, polarization ellipticity, and polarization orientation — a world record. Because the devices are so thin, transparent, and compatible with photonic integrated circuits (PICs), they can be used to improve optical displays, LIDAR (Light Detection and Ranging), optical communications, and quantum optics.
    “We are excited to find an elegant solution for interfacing free-space optics and integrated photonics — these two platforms have traditionally been studied by investigators from different subfields of optics and have led to commercial products addressing completely different needs,” said Nanfang Yu, associate professor of applied physics and applied mathematics who is a leader in research on nanophotonic devices. “Our work points to new ways to create hybrid systems that utilize the best of both worlds — free-space optics for shaping the wavefront of light and integrated photonics for optical data processing — to address many emerging applications such as quantum optics, optogenetics, sensor networks, inter-chip communications, and holographic displays.”
    Bridging free-space optics and integrated photonics
    The key challenge of interfacing PICs and free-space optics is to transform a simple waveguide mode confined within a waveguide — a thin ridge defined on a chip — into a broad free-space wave with a complex wavefront, and vice versa. Yu’s team tackled this challenge by building on their invention last fall of “nonlocal metasurfaces,” extending the devices’ functionality from controlling free-space light waves to controlling guided waves.
    Specifically, they expanded the input waveguide mode by using a waveguide taper into a slab waveguide mode — a sheet of light propagating along the chip. “We realized that the slab waveguide mode can be decomposed into two orthogonal standing waves — waves reminiscent of those produced by plucking a string,” said Heqing Huang, a PhD student in Yu’s lab and co-first author of the study, published today in Nature Nanotechnology. “Therefore, we designed a ‘leaky-wave metasurface’ composed of two sets of rectangular apertures that have a subwavelength offset from each other to independently control these two standing waves. The result is that each standing wave is converted into a surface emission with independent amplitude and polarization; together, the two surface emission components merge into a single free-space wave with completely controllable amplitude, phase, and polarization at each point over its wavefront.”
    From quantum optics to optical communications to holographic 3D displays
    Yu’s team experimentally demonstrated multiple leaky-wave metasurfaces that can convert a waveguide mode propagating along a waveguide with a cross-section on the order of one wavelength into free-space emission with a designer wavefront over an area about 300 times the wavelength at the telecom wavelength of 1.55 microns. These include:

    A leaky-wave metalens that produces a focal spot in free space. Such a device will be ideal for forming a low-loss, high-capacity free-space optical link between PIC chips; it will also be useful for an integrated optogenetic probe that produces focused beams to optically stimulate neurons located far away from the probe.
    A leaky-wave optical-lattice generator that can produce hundreds of focal spots forming a Kagome lattice pattern in free space. In general, the leaky-wave metasurface can produce complex aperiodic and three-dimensional optical lattices to trap cold atoms and molecules. This capability will enable researchers to study exotic quantum optical phenomena or conduct quantum simulations hitherto not easily attainable with other platforms, and enable them to substantially reduce the complexity, volume, and cost of atomic-array-based quantum devices. For example, the leaky-wave metasurface could be directly integrated into the vacuum chamber to simplify the optical system, making portable quantum optics applications, such as atomic clocks, a possibility.
    A leaky-wave vortex-beam generator that produces a beam with a corkscrew-shaped wavefront. This could lead to a free-space optical link between buildings that relies on PICs to process information carried by light, while also using light waves with shaped wavefronts for high-capacity intercommunication.
    A leaky-wave hologram that can display four distinct images simultaneously: two at the device plane (at two orthogonal polarization states) and two more at a distance in free space (also at two orthogonal polarization states). This function could be used to make lighter, more comfortable augmented reality goggles and more realistic holographic 3D displays.
    Device fabrication
    Device fabrication was carried out at the Columbia Nano Initiative cleanroom, and at the Advanced Science Research Center NanoFabrication Facility at the Graduate Center of the City University of New York.

    Next steps
    Yu’s current demonstration is based on a simple polymer-silicon nitride materials platform at near-infrared wavelengths. His team plans next to demonstrate devices based on the more robust silicon nitride platform, which is compatible with foundry fabrication protocols and tolerant to high optical power operation. They also plan to demonstrate designs for high output efficiency and operation at visible wavelengths, which is more suitable for applications such as quantum optics and holographic displays.
    The study was supported by the National Science Foundation (grant no. QII-TAQS-1936359 (H.H., Y.X., and N.Y.) and no. ECCS-2004685 (S.C.M., C.-C.T., and N.Y.)), the Air Force Office of Scientific Research (no. FA9550-16-1-0322 (N.Y.)), and the Simons Foundation (A.C.O. and A.A). S.C.M. acknowledges support from the NSF Graduate Research Fellowship Program (grant no. DGE-1644869).

  •

    Symmetric graphene quantum dots for future qubits

    Quantum dots in semiconductors such as silicon or gallium arsenide have long been considered hot candidates for hosting quantum bits in future quantum processors. Scientists at Forschungszentrum Jülich and RWTH Aachen University have now shown that bilayer graphene has even more to offer here than other materials. The double quantum dots they have created are characterized by a nearly perfect electron-hole symmetry that allows a robust read-out mechanism — one of the necessary criteria for quantum computing. The results were published in the journal Nature.
    The development of robust semiconductor spin qubits could help enable large-scale quantum computers in the future. However, current quantum-dot-based qubit systems are still in their infancy. In 2022, researchers at QuTech in the Netherlands were able to create six silicon-based spin qubits for the first time. With graphene, there is still a long way to go. The material, which was first isolated in 2004, is highly attractive to many scientists. But the first graphene qubit has yet to be realized.
    “Bilayer graphene is a unique semiconductor,” explains Prof. Christoph Stampfer of Forschungszentrum Jülich and RWTH Aachen University. “It shares several properties with single-layer graphene and also has some other special features. This makes it very interesting for quantum technologies.”
    One of these features is that it has a bandgap that can be tuned by an external electric field from zero to about 120 millielectronvolts. The bandgap can be used to confine charge carriers in individual areas, so-called quantum dots. Depending on the applied voltage, these can trap a single electron or its counterpart, a hole — essentially a missing electron in the solid-state structure. The possibility of using the same gate structure to trap both electrons and holes is a feature that has no counterpart in conventional semiconductors.
    “Bilayer graphene is still a fairly new material. So far, mainly experiments that have already been realized with other semiconductors have been carried out with it. Our current experiment now goes really beyond this for the first time,” Christoph Stampfer says. He and his colleagues have created a so-called double quantum dot: two opposing quantum dots, each housing an electron and a hole whose spin properties mirror each other almost perfectly.
    Wide range of applications
    “This symmetry has two remarkable consequences: it is almost perfectly preserved even when electrons and holes are spatially separated in different quantum dots,” Stampfer said. This mechanism can be used to couple qubits to other qubits over a longer distance. And what’s more, “the symmetry results in a very robust blockade mechanism which could be used to read out the spin state of the dot with high fidelity.”
    “This goes beyond what can be done in conventional semiconductors or any other two-dimensional electron systems,” says Prof. Fabian Hassler of the JARA Institute for Quantum Information at Forschungszentrum Jülich and RWTH Aachen University, co-author of the study. “The near-perfect symmetry and strong selection rules are very attractive not only for operating qubits, but also for realizing single-particle terahertz detectors. In addition, it lends itself to coupling quantum dots of bilayer graphene with superconductors, two systems in which electron-hole symmetry plays an important role. These hybrid systems could be used to create efficient sources of entangled particle pairs or artificial topological systems, bringing us one step closer to realizing topological quantum computers.”
    The research results were published in the journal Nature. The data supporting the results and the codes used for the analysis are available in a Zenodo repository. The research was funded, among others, by the European Union’s Horizon 2020 research and innovation program (Graphene Flagship) and by the European Research Council (ERC), as well as by the German Research Foundation (DFG) as part of the Matter of Light for Quantum Computing (ML4Q) cluster of excellence.

  •

    The influence of AI on trust in human interaction

    As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.
    In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable time attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories. Oskar Lindwall, a professor of communication at the University of Gothenburg, observes that it often takes a long time for people to realize they are interacting with a technical system.
    In collaboration with Jonas Ivarsson, professor of informatics, he has written an article titled “Suspicious Minds: The Problem of Trust and Conversational Agents,” exploring how individuals interpret and relate to situations where one of the parties might be an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.
    Ivarsson provides an example of a romantic relationship where trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.
    Their study discovered that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.
    The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have such human-like voices, as they create a sense of intimacy and lead people to form impressions based on the voice alone.
    In the case of the would-be fraudster calling the “older man,” the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socio-economic background, making it harder to identify that we are interacting with a computer.
    The researchers propose creating AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency.
    Communication with others involves not only deception but also relationship-building and joint meaning-making. The uncertainty of whether one is talking to a human or a computer affects this aspect of communication. While it might not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively impacted.
    Jonas Ivarsson and Oskar Lindwall analyzed data made available on YouTube. They studied three types of conversations and audience reactions and comments. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, a person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.

  •

    Scurrying centipedes inspire many-legged robots that can traverse difficult landscapes

    Centipedes are known for their wiggly walk. With tens to hundreds of legs, they can traverse any terrain without stopping.
    “When you see a scurrying centipede, you’re basically seeing an animal that inhabits a world that is very different than our world of movement,” said Daniel Goldman, the Dunn Family Professor in the School of Physics. “Our movement is largely dominated by inertia. If I swing my leg, I land on my foot and I move forward. But in the world of centipedes, if they stop wiggling their body parts and limbs, they basically stop moving instantly.”
    Intrigued by whether many limbs could be helpful for locomotion in this world, a team of physicists, engineers, and mathematicians at the Georgia Institute of Technology is using this style of movement to its advantage. The researchers developed a new theory of multilegged locomotion and created many-legged robotic models, discovering that a robot with redundant legs could move across uneven surfaces without any additional sensing or control technology, just as the theory predicted.
    These robots can move over complex, bumpy terrain — and there is potential to use them for agriculture, space exploration, and even search and rescue.
    The researchers presented their work in the papers, “Multilegged Matter Transport: A Framework for Locomotion on Noisy Landscapes,” in Science in May and “Self-Propulsion via Slipping: Frictional Swimming in Multilegged Locomotors,” in Proceedings of the National Academy of Sciences in March.
    A Leg Up
    For the Science paper, the researchers drew on mathematician Claude Shannon’s communication theory, which demonstrates how to reliably transmit signals over distance, to understand why a multilegged robot was so successful at locomotion. The theory of communication suggests that one way to ensure a message gets from point A to point B on a noisy line is not to send it as an analog signal, but to break it into discrete digital units and repeat these units with an appropriate code.

    “We were inspired by this theory, and we tried to see if redundancy could be helpful in matter transportation,” said Baxi Chong, a physics postdoctoral researcher. “So, we started this project to see what would happen if we had more legs on the robot: four, six, eight legs, and even 16 legs.”
    A team led by Chong, including School of Mathematics postdoctoral fellow Daniel Irvine and Professor Greg Blekherman, developed a theory proposing that adding leg pairs to the robot increases its ability to move robustly over challenging surfaces — a concept they call spatial redundancy. This redundancy makes the robot’s legs successful on their own, without the need for sensors to interpret the environment. If one leg falters, the abundance of legs keeps the robot moving regardless. In effect, the robot becomes a reliable system for transporting itself, and even a load, from A to B across difficult or “noisy” landscapes. The concept is comparable to guaranteeing punctuality on wheeled transport, where the track or rail must be smooth enough — except that here, reliability is achieved without having to engineer the environment.
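    Spatial redundancy can be illustrated with a toy reliability model: if each leg independently finds purchase with some fixed probability, the chance that every leg fails at once shrinks geometrically as legs are added. The 40 percent contact probability below is invented for illustration and is not from the papers.

```python
def prob_any_contact(n_legs: int, p_contact: float) -> float:
    """Probability that at least one of n independent legs finds purchase."""
    return 1.0 - (1.0 - p_contact) ** n_legs

# Even with unreliable footing (40% per leg), redundancy compounds quickly:
# 4 legs -> 0.8704, 8 legs -> 0.9832, 16 legs -> 0.9997.
for n in (4, 8, 16):
    print(n, round(prob_any_contact(n, 0.4), 4))
```

    The independence assumption is what Shannon-style redundancy arguments rely on; the team's actual framework is far richer, but the compounding effect is the same.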
    “With an advanced bipedal robot, many sensors are typically required to control it in real time,” Chong said. “But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing. There are many reasons for such sensor-free initiative. The sensors can be expensive and fragile, or the environments can change so fast that it doesn’t allow enough sensor-controller response time.”
    To test this, Juntao He, a Ph.D. student in robotics, conducted a series of experiments in which he and Daniel Soto, a master’s student in the George W. Woodruff School of Mechanical Engineering, built terrains to mimic an inconsistent natural environment. He then tested the robot, increasing its number of legs by two each time, starting with six and eventually expanding to 16. As the leg count increased, the robot could move more agilely across the terrain, even without sensors, as the theory predicted. Eventually, they tested the robot outdoors on real terrain, where it was able to traverse a variety of environments.
    “It’s truly impressive to witness the multilegged robot’s proficiency in navigating both lab-based terrains and outdoor environments,” Juntao said. “While bipedal and quadrupedal robots heavily rely on sensors to traverse complex terrain, our multilegged robot utilizes leg redundancy and can accomplish similar tasks with open-loop control.”
    Next Steps

    The researchers are already applying their discoveries to farming. Goldman has co-founded a company that aspires to use these robots to weed farmland where weedkillers are ineffective.
    “They’re kind of like a Roomba but outside for complex ground,” Goldman said. “A Roomba works because it has wheels that function well on flat ground. Until the development of our framework, we couldn’t confidently predict locomotor reliability on bumpy, rocky, debris-ridden terrain. We now have the beginnings of such a scheme, which could be used to ensure that our robots traverse a crop field in a certain amount of time.”
    The researchers also want to refine the robot. They know why the centipede robot framework is functional, but now they’re determining the optimal number of legs to achieve motion without sensing in a way that is cost-effective yet still retains the benefits.
    “In this paper, we asked, ‘How do you predict the minimum number of legs to achieve such tasks?’” Chong said. “Currently we only prove that the minimum number exists, but we don’t know the exact number of legs needed. Further, we need to better understand the tradeoff between energy, speed, power, and robustness in such a complex system.”

  •

    AI could run a million microbial experiments per year

    An artificial intelligence system enables robots to conduct autonomous scientific experiments — as many as 10,000 per day — potentially driving a drastic leap forward in the pace of discovery in areas from medicine to agriculture to environmental science.
    The work, reported today in Nature Microbiology, was led by a professor now at the University of Michigan.
    That artificial intelligence platform, dubbed BacterAI, mapped the metabolism of two microbes associated with oral health — with no baseline information to start with. Bacteria consume some combination of the 20 amino acids needed to support life, but each species requires specific nutrients to grow. The U-M team wanted to know which amino acids the beneficial microbes in our mouths need, so that researchers can promote their growth.
    “We know almost nothing about most of the bacteria that influence our health. Understanding how bacteria grow is the first step toward reengineering our microbiome,” said Paul Jensen, U-M assistant professor of biomedical engineering who was at the University of Illinois when the project started.
    Figuring out the combination of amino acids that bacteria like is tricky, however. Those 20 amino acids yield more than a million possible combinations, just based on whether each amino acid is present or not. Yet BacterAI was able to discover the amino acid requirements for the growth of both Streptococcus gordonii and Streptococcus sanguinis.
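The size of that search space follows directly from the presence-or-absence framing: each of the 20 amino acids is an independent yes/no choice, giving 2^20 possible media. A quick check:

```python
import math

n = 20                 # amino acids, each either present or absent
total = 2 ** n
print(total)           # 1048576, the "more than a million" combinations

# Same count, summing over how many amino acids are included:
assert total == sum(math.comb(n, k) for k in range(n + 1))
```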
    To find the right formula for each species, BacterAI tested hundreds of combinations of amino acids per day, honing its focus and changing combinations each morning based on the previous day’s results. Within nine days, it was producing accurate predictions 90% of the time.

    Unlike conventional approaches that feed labeled data sets into a machine-learning model, BacterAI creates its own data set through a series of experiments. By analyzing the results of previous trials, it comes up with predictions of what new experiments might give it the most information. As a result, it figured out most of the rules for feeding bacteria with fewer than 4,000 experiments.
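A heavily simplified sketch of that experiment-designing loop, in Python. The growth rule and the leave-one-out query strategy here are invented for illustration; BacterAI's actual agent uses learned models to choose the most informative next experiments:

```python
AMINO_ACIDS = [f"aa{i}" for i in range(20)]

# Hypothetical ground truth, hidden from the agent: growth requires
# these three amino acids. Invented purely for this sketch.
_REQUIRED = {"aa2", "aa7", "aa11"}

def run_experiment(medium):
    """Simulated assay: does the bacterium grow on this medium?"""
    return _REQUIRED <= set(medium)

def discover_requirements():
    """Toy self-directed experiment loop: start from a complete medium,
    then design each new experiment (drop one ingredient) based on the
    outcome of the previous ones."""
    assert run_experiment(AMINO_ACIDS)          # full medium grows
    required = set()
    for acid in AMINO_ACIDS:
        medium = [a for a in AMINO_ACIDS if a != acid]
        if not run_experiment(medium):          # growth fails: essential
            required.add(acid)
    return required

print(discover_requirements())
```

In this toy setting the agent pins down the three essential ingredients in 21 simulated experiments, far fewer than testing all 2**20 media, mirroring how BacterAI learned most of the feeding rules in under 4,000 experiments.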
    “When a child learns to walk, they don’t just watch adults walk and then say ‘Ok, I got it,’ stand up, and start walking. They fumble around and do some trial and error first,” Jensen said.
    “We wanted our AI agent to take steps and fall down, to come up with its own ideas and make mistakes. Every day, it gets a little better, a little smarter.”
    Little to no research has been conducted on roughly 90% of bacteria, and the amount of time and resources needed to learn even basic scientific information about them using conventional methods is daunting. Automated experimentation can drastically speed up these discoveries. The team ran up to 10,000 experiments in a single day.
    But the applications go beyond microbiology. Researchers in any field can set up questions as puzzles for AI to solve through this kind of trial and error.
    “With the recent explosion of mainstream AI over the last several months, many people are uncertain about what it will bring in the future, both positive and negative,” said Adam Dama, a former engineer in the Jensen Lab and lead author of the study. “But to me, it’s very clear that focused applications of AI like our project will accelerate everyday research.”
    The research was funded by the National Institutes of Health with support from NVIDIA.

  • in

    Researchers develop manual for engineering spin dynamics in nanomagnets

    An international team of researchers at the University of California, Riverside, and the Institute of Magnetism in Kyiv, Ukraine, has developed a comprehensive manual for engineering spin dynamics in nanomagnets — an important step toward advancing spintronic and quantum-information technologies.
    Despite their small size, nanomagnets — found in most spintronic applications — reveal rich dynamics of spin excitations, or “magnons,” the quantum-mechanical units of spin fluctuations. Due to its nanoscale confinement, a nanomagnet can be considered a zero-dimensional system with a discrete magnon spectrum, similar to the spectrum of an atom.
    “The magnons interact with each other, thus constituting nonlinear spin dynamics,” said Igor Barsukov, an assistant professor of physics and astronomy at UC Riverside and a corresponding author on the study that appears in the journal Physical Review Applied. “Nonlinear spin dynamics is a major challenge and a major opportunity for improving the performance of spintronic technologies such as spin-torque memory, oscillators, and neuromorphic computing.”
    Barsukov explained that the interaction of magnons follows a set of rules — the selection rules. The researchers have now formulated these rules in terms of symmetries of magnetization configurations and magnon profiles.
    The new work continues the efforts to tame nanomagnets for next-generation computation technologies. In a previous publication, the team demonstrated experimentally that symmetries can be used for engineering magnon interactions.
    “We recognized the opportunity, but also noticed that much work needed to be done to understand and formulate the selection rules,” Barsukov said.
    According to the researchers, a comprehensive set of rules reveals the mechanisms behind the magnon interaction.
    “It can be seen as a guide for spintronics labs for debugging and designing nanomagnet devices,” said Arezoo Etesamirad, the first author of the paper who worked in the Barsukov lab and recently graduated with a doctoral degree in physics. “It lays the foundation for developing an experimental toolset for tunable magnetic neurons, switchable oscillators, energy-efficient memory, and quantum-magnonic and other next-generation nanomagnetic applications.”
    Barsukov and Etesamirad were joined in the research by Rodolfo Rodriguez of UCR; and Julia Kharlan and Roman Verba of the Institute of Magnetism in Kyiv, Ukraine.
    The study was funded by the U.S. National Science Foundation, National Academy of Sciences of Ukraine, National Research Foundation of Ukraine, National Science Center — Poland, and NVIDIA Corporation.