More stories

  • This tiny metal switches magnetism without magnets — and could power the future of electronics

    Research from the University of Minnesota Twin Cities gives new insight into a material that could make computer memory faster and more energy-efficient.
    The study was recently published in Advanced Materials, a peer-reviewed scientific journal. The researchers also have a patent on the technology.
    As computing demands grow, so does the need for new memory technologies. Researchers are looking for alternatives and complements to existing memory solutions that deliver high performance with low energy consumption, extending the capabilities of everyday technology.
    In this new research, the team demonstrated a more efficient way to control magnetization in tiny electronic devices using a material called Ni₄W, a combination of nickel and tungsten. The team found that this low-symmetry material produces powerful spin-orbit torque (SOT) — a key mechanism for manipulating magnetism in next-generation memory and logic technologies.
    “Ni₄W reduces power usage for writing data, potentially cutting energy use in electronics significantly,” said Jian-Ping Wang, a senior author on the paper and a Distinguished McKnight Professor and Robert F. Hartmann Chair in the Department of Electrical and Computer Engineering (ECE) at the University of Minnesota Twin Cities.
    This technology could help reduce the electricity consumption of devices like smartphones and data centers, making future electronics both smarter and more sustainable.
    “Unlike conventional materials, Ni₄W can generate spin currents in multiple directions, enabling ‘field-free’ switching of magnetic states without the need for external magnetic fields. We observed high SOT efficiency in multiple directions in Ni₄W, both on its own and when layered with tungsten, pointing to its strong potential for use in low-power, high-speed spintronic devices,” said Yifei Yang, a fifth-year Ph.D. student in Wang’s group and a co-first author on the paper.

    Ni₄W is made from common metals and can be manufactured using standard industrial processes. The low-cost material makes it very attractive to industry partners, and it could soon be implemented in technology we use every day, such as smartwatches and phones.
    “We are very excited to see that our calculations confirmed the choice of the material and the SOT experimental observation,” said Seungjun Lee, a postdoctoral fellow in ECE and the co-first author on the paper.
    The next steps are to grow these materials into devices even smaller than those in their previous work.
    In addition to Wang, Yang and Lee, the ECE team included Paul Palmberg Professor Tony Low, another senior author on the paper, Yu-Chia Chen, Qi Jia, Brahmudutta Dixit, Duarte Sousa, Yihong Fan, Yu-Han Huang, Deyuan Lyu and Onri Jay Benally. This work was done with Michael Odlyzko, Javier Garcia-Barriocanal, Guichuan Yu and Greg Haugstad from the University of Minnesota Characterization Facility, along with Zach Cresswell and Shuang Liang from the Department of Chemical Engineering and Materials Science.
    This work was supported by SMART (Spintronic Materials for Advanced Information Technologies), a world-leading research center that brings together experts from across the nation to develop technologies for spin-based computing and memory systems. SMART was one of the seven centers of nCORE, a Semiconductor Research Corporation program sponsored by the National Institute of Standards and Technology. The work was also supported by the Global Research Collaboration Logic and Memory program. This study was done in collaboration with the University of Minnesota Characterization Facility and the Minnesota Nano Center.

  • What to know about the extreme U.S. flooding — and ways to stay safe

    July has washed across the United States with unusually destructive, deadly torrents of rain.

    In the first half of the month alone, historically heavy downpours sent rivers in Central Texas spilling far beyond their banks, causing at least 130 deaths. Rains prompted flash flooding across wildfire-scarred landscapes in New Mexico and flooded subway stations in New York City. Roadways in New Jersey turned into rivers, sweeping two people to their deaths as the floodwaters carried away their car. A tropical depression dumped up to 30 centimeters of rain in one day on parts of North Carolina, leading to at least six more deaths.

  • This flat chip uses twisted light to reveal hidden images

    Imagine trying to wear a left-handed glove on your right hand: it doesn’t fit because left and right hands are mirror images that can’t be superimposed on each other. This ‘handedness’ is what scientists call chirality, and it plays a fundamental role in biology, chemistry, and materials science. Most DNA molecules and sugars are right-handed, while most amino acids are left-handed. Reversing a molecule’s handedness can render a nutrient useless or a drug inactive and even harmful.
    Light can also be left or right ‘handed’. When a light beam is circularly polarized, its electric field corkscrews through space in either a left-handed or right-handed spiral. Because chiral structures interact differently with these two types of twisted light beams, shining circularly polarized light on a sample – and comparing how much of each twist is absorbed, reflected, or delayed – lets scientists read out the sample’s own handedness. However, this effect is extremely weak, which makes precise control of chirality an essential but challenging task.
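    The comparison described here, measuring the difference in absorption between the two twists of light, is the basis of circular dichroism spectroscopy. A minimal sketch of that readout, with purely illustrative numbers:

```python
# Toy circular-dichroism readout: the sample's handedness is inferred by
# comparing how strongly it absorbs left- vs right-circularly polarized
# light. All numbers are illustrative, not measured values.

def circular_dichroism(absorbance_left: float, absorbance_right: float) -> float:
    """Differential absorbance; the sign reflects the sample's handedness
    (sign convention chosen here purely for illustration)."""
    return absorbance_left - absorbance_right

cd = circular_dichroism(absorbance_left=0.0153, absorbance_right=0.0149)
print(f"CD signal: {cd:+.4f}")  # a tiny difference, as the text notes
```

    The smallness of this difference relative to the absolute absorbances is exactly why the effect is described as extremely weak.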
    Now, scientists from the Bionanophotonic Systems Laboratory in EPFL’s School of Engineering have collaborated with those in Australia to create artificial optical structures called metasurfaces: 2D lattices composed of tiny elements (meta-atoms) that can easily tune their chiral properties. By varying the orientation of meta-atoms within a lattice, scientists can control the resulting metasurface’s interaction with polarized light.
    “Our ‘chiral design toolkit’ is elegantly simple, and yet more powerful than previous approaches, which tried to control light through very complex meta-atom geometries. Instead, we leverage the interplay between the shape of the meta-atom and the symmetry of the metasurface lattice,” explains Bionanophotonics Lab head Hatice Altug.
    The innovation, which has potential applications in data encryption, biosensing, and quantum technologies, has been published in Nature Communications.
    An invisible, dual-layer watermark
    The team’s metasurface, made of germanium and calcium difluoride, presents a gradient of meta-atoms with orientations that vary continuously along a chip. The shape and angles of these meta-atoms, as well as the lattice symmetry, all work together to tune the response of the metasurface to polarized light.

    In a proof-of-concept experiment, the scientists encoded two different images simultaneously on a metasurface optimized for the invisible mid-infrared range of the electromagnetic spectrum. For the first image of an Australian cockatoo, the image data were encoded in the size of the meta-atoms – which represented pixels – and decoded with unpolarized light. The second image was encoded using the orientation of the meta-atoms so that, when exposed to circularly polarized light, the metasurface revealed a picture of the iconic Swiss Matterhorn.
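    The encoding scheme can be illustrated with a toy model: each meta-atom acts as a pixel carrying two independent parameters, so a single lattice stores two images at once. The size and angle ranges below are hypothetical, not the actual device parameters:

```python
# Toy sketch of the dual-image idea: each meta-atom ("pixel") carries two
# independent parameters, size and orientation, so one array can encode
# two images simultaneously. Purely illustrative; the real metasurface
# physics is far more involved.
import numpy as np

rng = np.random.default_rng(0)
image_a = rng.random((4, 4))   # e.g. the cockatoo, grayscale in [0, 1)
image_b = rng.random((4, 4))   # e.g. the Matterhorn

# Encode: meta-atom size stores image A, orientation angle stores image B.
sizes = 50 + 100 * image_a     # hypothetical size range, in nm
angles = 90 * image_b          # hypothetical orientation, in degrees

# Decode: unpolarized light "reads" size; circularly polarized light
# "reads" orientation. Here decoding simply inverts the encoding.
decoded_a = (sizes - 50) / 100
decoded_b = angles / 90

assert np.allclose(decoded_a, image_a)
assert np.allclose(decoded_b, image_b)
```

    The key point the experiment demonstrates is that the two channels are independent: changing which image is revealed requires only changing the polarization of the readout light.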
    “This experiment showcased our technique’s ability to produce a dual-layer ‘watermark’ invisible to the human eye, paving the way for advanced anticounterfeiting, camouflage and security applications,” says Bionanophotonics Systems Lab researcher Ivan Sinev.
    Beyond encryption, the team’s approach has potential applications for quantum technologies, many of which rely on polarized light to perform computations. The ability to map chiral responses across large surfaces could also streamline biosensing.
    “We can use chiral metastructures like ours to sense, for example, drug composition or purity from small-volume samples. Nature is chiral, and the ability to distinguish between left- and right-handed molecules is essential, as it could make the difference between a medicine and a toxin,” says Bionanophotonic Systems Lab researcher Felix Richter.

  • This AI-powered lab runs itself—and discovers new materials 10x faster

    Researchers have demonstrated a new technique that allows “self-driving laboratories” to collect at least 10 times more data than previous techniques at record speed. The advance – which is published in Nature Chemical Engineering – dramatically expedites materials discovery research, while slashing costs and environmental impact.
    Self-driving laboratories are robotic platforms that combine machine learning and automation with chemical and materials sciences to discover materials more quickly. The automated process allows machine-learning algorithms to make use of data from each experiment when predicting which experiment to conduct next to achieve whatever goal was programmed into the system.
    “Imagine if scientists could discover breakthrough materials for clean energy, new electronics, or sustainable chemicals in days instead of years, using just a fraction of the materials and generating far less waste than the status quo,” says Milad Abolhasani, corresponding author of a paper on the work and ALCOA Professor of Chemical and Biomolecular Engineering at North Carolina State University. “This work brings that future one step closer.”
    Until now, self-driving labs utilizing continuous flow reactors have relied on steady-state flow experiments. In these experiments, different precursors are mixed together and chemical reactions take place, while continuously flowing in a microchannel. The resulting product is then characterized by a suite of sensors once the reaction is complete.
    “This established approach to self-driving labs has had a dramatic impact on materials discovery,” Abolhasani says. “It allows us to identify promising material candidates for specific applications in a few months or weeks, rather than years, while reducing both costs and the environmental impact of the work. However, there was still room for improvement.”
    Steady-state flow experiments require the self-driving lab to wait for the chemical reaction to take place before characterizing the resulting material. That means the system sits idle while the reactions take place, which can take up to an hour per experiment.
    “We’ve now created a self-driving lab that makes use of dynamic flow experiments, where chemical mixtures are continuously varied through the system and are monitored in real time,” Abolhasani says. “In other words, rather than running separate samples through the system and testing them one at a time after reaching steady-state, we’ve created a system that essentially never stops running. The sample is moving continuously through the system and, because the system never stops characterizing the sample, we can capture data on what is taking place in the sample every half second.

    “For example, instead of having one data point about what the experiment produces after 10 seconds of reaction time, we have 20 data points – one after 0.5 seconds of reaction time, one after 1 second of reaction time, and so on. It’s like switching from a single snapshot to a full movie of the reaction as it happens. Instead of waiting around for each experiment to finish, our system is always running, always learning.”
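    The arithmetic in Abolhasani’s example is simple to make concrete: sampling a 10-second reaction every half second yields 20 time-resolved points where a steady-state run yields one.

```python
# The data-intensification arithmetic from the passage: one steady-state
# run yields a single data point, while continuously sampling the same
# 10-second reaction every 0.5 s yields 20 time-resolved points.
reaction_time_s = 10.0
sample_period_s = 0.5

steady_state_points = 1
dynamic_points = int(reaction_time_s / sample_period_s)

print(dynamic_points, "points vs", steady_state_points)  # 20 points vs 1
```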
    Collecting this much additional data has a big impact on the performance of the self-driving lab.
    “The most important part of any self-driving lab is the machine-learning algorithm the system uses to predict which experiment it should conduct next,” Abolhasani says. “This streaming-data approach allows the self-driving lab’s machine-learning brain to make smarter, faster decisions, homing in on optimal materials and processes in a fraction of the time. That’s because the more high-quality experimental data the algorithm receives, the more accurate its predictions become, and the faster it can solve a problem. This has the added benefit of reducing the amount of chemicals needed to arrive at a solution.”
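    The decision loop Abolhasani describes can be sketched in a few lines: refit a surrogate model on all data gathered so far, then run the untried condition the model currently rates best. The quadratic “instrument” and polynomial surrogate below are stand-ins, not the actual chemistry or algorithm from the paper:

```python
# Minimal sketch of the predict-then-experiment loop at the heart of a
# self-driving lab. The "instrument" is a stand-in objective with its
# optimum at x = 0.7; the surrogate is a simple polynomial fit.
import numpy as np

def run_experiment(x: float) -> float:
    """Stand-in for the lab hardware; the true optimum sits at x = 0.7."""
    return -(x - 0.7) ** 2

pool = list(np.linspace(0.0, 1.0, 101))  # untried candidate conditions
tried_x = [pool.pop(0), pool.pop(-1)]    # seed with the two endpoints
tried_y = [run_experiment(x) for x in tried_x]

for _ in range(5):
    degree = min(2, len(tried_x) - 1)
    coeffs = np.polyfit(tried_x, tried_y, deg=degree)  # refit surrogate
    # Choose the untried condition the surrogate currently rates best.
    x_next = max(pool, key=lambda x: np.polyval(coeffs, x))
    pool.remove(x_next)
    tried_x.append(x_next)
    tried_y.append(run_experiment(x_next))

best = tried_x[int(np.argmax(tried_y))]
print(round(best, 2))  # homes in near the optimum at 0.7
```

    The streaming-data advance in the paper feeds this same kind of loop many more points per experiment, which is why the predictions sharpen faster.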
    In this work, the researchers found the self-driving lab that incorporated a dynamic flow system generated at least 10 times more data than self-driving labs that used steady-state flow experiments over the same period of time, and was able to identify the best material candidates on the very first try after training.
    “This breakthrough isn’t just about speed,” Abolhasani says. “By reducing the number of experiments needed, the system dramatically cuts down on chemical use and waste, advancing more sustainable research practices.
    “The future of materials discovery is not just about how fast we can go, it’s also about how responsibly we get there,” Abolhasani says. “Our approach means fewer chemicals, less waste, and faster solutions for society’s toughest challenges.”
    The paper, “Flow-Driven Data Intensification to Accelerate Autonomous Materials Discovery,” will be published July 14 in the journal Nature Chemical Engineering. Co-lead authors of the paper are Fernando Delgado-Licona, a Ph.D. student at NC State; Abdulrahman Alsaiari, a master’s student at NC State; and Hannah Dickerson, a former undergraduate at NC State. The paper was co-authored by Philip Klem, an undergraduate at NC State; Arup Ghorai, a former postdoctoral researcher at NC State; Richard Canty and Jeffrey Bennett, current postdoctoral researchers at NC State; Pragyan Jha, Nikolai Mukhin, Junbin Li and Sina Sadeghi, Ph.D. students at NC State; Fazel Bateni, a former Ph.D. student at NC State; and Enrique A. López-Guajardo of Tecnologico de Monterrey.
    This work was done with support from the National Science Foundation under grants 1940959, 2315996 and 2420490; and from the University of North Carolina Research Opportunities Initiative program.

  • This magnetic breakthrough could make AI 10x more efficient

    The rapid rise in AI applications has placed increasingly heavy demands on our energy infrastructure. All the more reason to find energy-saving solutions for AI hardware. One promising idea is the use of so-called spin waves to process information. A team from the Universities of Münster and Heidelberg (Germany) led by physicist Prof. Rudolf Bratschitsch (Münster) has now developed a new way to produce waveguides in which the spin waves can propagate particularly far. They have thus created the largest spin waveguide network to date. Furthermore, the group succeeded in specifically controlling the properties of the spin wave transmitted in the waveguide. For example, they were able to precisely alter the wavelength and reflection of the spin wave at a certain interface. The study was published in the scientific journal Nature Materials.
    The electron spin is a quantum mechanical quantity that is also described as the intrinsic angular momentum. The alignment of many spins in a material determines its magnetic properties. If an alternating current is applied to a magnetic material with an antenna, thereby generating a changing magnetic field, the spins in the material can generate a spin wave.
    Spin waves have already been used to create individual components, such as logic gates that process binary input signals into binary output signals, or multiplexers that select one of various input signals. Up until now, however, the components were not connected to form a larger circuit. “The fact that larger networks such as those used in electronics have not yet been realised is partly due to the strong attenuation of the spin waves in the waveguides that connect the individual switching elements – especially if they are narrower than a micrometre and therefore on the nanoscale,” explains Rudolf Bratschitsch.
    The group used the material with the lowest attenuation currently known: yttrium iron garnet (YIG). The researchers inscribed individual spin-wave waveguides into a 110-nanometre-thick film of this magnetic material using a silicon ion beam and produced a large network with 198 nodes. The new method allows complex structures of high quality to be produced flexibly and reproducibly.
    The German Research Foundation (DFG) funded the project as part of the Collaborative Research Centre 1459 “Intelligent Matter.”

  • Trees can’t get up and walk away, but forests can

    An army of treelike creatures called Ents marches to war in the second The Lord of the Rings movie, The Two Towers, walking for miles through dark forests. Once they arrive at the fortress of the evil wizard Saruman, the Ents hurl giant boulders, climb over walls and even rip open a dam to wipe out their enemy.

    Mobile trees like the Ents are found throughout science fiction and fantasy worlds. The treelike alien Groot in Guardians of the Galaxy uses twiggy wings to fly. Trees called Evermean fight the main character Link in The Legend of Zelda: Tears of the Kingdom video game. And Harry Potter’s Whomping Willow — well, it whomps anyone who gets too close.

  • Deep-sea mining could start soon — before we understand its risks

    An underwater gold rush may be on the horizon — or rather, a rush to mine the seafloor for manganese, nickel, cobalt and other minerals used in electric vehicles, solar panels and more.

    Meanwhile, scientists and conservationists hope to pump the brakes on the prospect of deep-sea mining, warning that it may scar the seafloor for decades — and that there’s still far too little known about the lingering harm it might do to the deep ocean’s fragile ecosystems.

  • Scientists discover the moment AI truly understands language

    The language capabilities of today’s artificial intelligence systems are astonishing. We can now engage in natural conversations with systems like ChatGPT, Gemini, and many others, with a fluency nearly comparable to that of a human being. Yet we still know very little about the internal processes in these networks that lead to such remarkable results.
    A new study published in the Journal of Statistical Mechanics: Theory and Experiment (JSTAT) reveals a piece of this mystery. It shows that when small amounts of data are used for training, neural networks initially rely on the position of words in a sentence. However, as the system is exposed to enough data, it transitions to a new strategy based on the meaning of the words. The study finds that this transition occurs abruptly, once a critical data threshold is crossed — much like a phase transition in physical systems. The findings offer valuable insights for understanding the workings of these models.
    Just like a child learning to read, a neural network starts by understanding sentences based on the positions of words: depending on where words are located in a sentence, the network can infer their relationships (are they subjects, verbs, objects?). However, as the training continues — the network “keeps going to school” — a shift occurs: word meaning becomes the primary source of information.
    This, the new study explains, is what happens in a simplified model of the self-attention mechanism — a core building block of transformer language models, like the ones we use every day (ChatGPT, Gemini, Claude, etc.). A transformer is a neural network architecture designed to process sequences of data, such as text, and it forms the backbone of many modern language models. Transformers specialize in understanding relationships within a sequence and use the self-attention mechanism to assess the importance of each word relative to the others.
    “To assess relationships between words,” explains Hugo Cui, a postdoctoral researcher at Harvard University and first author of the study, “the network can use two strategies, one of which is to exploit the positions of words.” In a language like English, for example, the subject typically precedes the verb, which in turn precedes the object. “Mary eats the apple” is a simple example of this sequence.
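    The two strategies can be made concrete with a toy dot-product attention layer, the mechanism the study analyzes. The embeddings below are random stand-ins, not the paper’s trained model:

```python
# Toy dot-product attention over "Mary eats the apple", contrasting the
# two information sources a network can exploit: token position vs token
# meaning. Embeddings are random stand-ins, not the paper's model.
import numpy as np

def softmax_rows(scores: np.ndarray) -> np.ndarray:
    """Row-wise softmax, turning raw scores into attention weights."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

tokens = ["Mary", "eats", "the", "apple"]
rng = np.random.default_rng(0)
semantic = rng.normal(size=(len(tokens), 8))  # stand-in meaning vectors
positional = np.eye(len(tokens))              # one-hot position encodings

# Positional strategy: scores depend only on where tokens sit.
attn_positional = softmax_rows(positional @ positional.T)
# Semantic strategy: scores depend only on what the tokens mean.
attn_semantic = softmax_rows(semantic @ semantic.T)

print(attn_positional.round(2))
print(attn_semantic.round(2))
```

    The study’s finding is that a trained network does not blend these two score sources smoothly: below a critical amount of training data it relies on the positional kind, and above it, abruptly, on the semantic kind.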
    “This is the first strategy that spontaneously emerges when the network is trained,” Cui explains. “However, in our study, we observed that if training continues and the network receives enough data, at a certain point — once a threshold is crossed — the strategy abruptly shifts: the network starts relying on meaning instead.”
    “When we designed this work, we simply wanted to study which strategies, or mix of strategies, the networks would adopt. But what we found was somewhat surprising: below a certain threshold, the network relied exclusively on position, while above it, only on meaning.”
    Cui describes this shift as a phase transition, borrowing a concept from physics. Statistical physics studies systems composed of enormous numbers of particles (like atoms or molecules) by describing their collective behavior statistically. Similarly, neural networks — the foundation of these AI systems — are composed of large numbers of “nodes,” or neurons (named by analogy to the human brain), each connected to many others and performing simple operations. The system’s intelligence emerges from the interaction of these neurons, a phenomenon that can be described with statistical methods.

    This is why we can speak of an abrupt change in network behavior as a phase transition, similar to how water, under certain conditions of temperature and pressure, changes from liquid to gas.
    “Understanding from a theoretical viewpoint that the strategy shift happens in this manner is important,” Cui emphasizes. “Our networks are simplified compared to the complex models people interact with daily, but they can give us hints to begin to understand the conditions that cause a model to stabilize on one strategy or another. This theoretical knowledge could hopefully be used in the future to make the use of neural networks more efficient, and safer.”
    The research by Hugo Cui, Freya Behrens, Florent Krzakala, and Lenka Zdeborová, titled “A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention,” is published in JSTAT as part of the Machine Learning 2025 special issue and is included in the proceedings of the NeurIPS 2024 conference.