More stories

  • New details of SARS-CoV-2 structure

    A new study led by Worcester Polytechnic Institute (WPI) brings into sharper focus the structural details of the COVID-19 virus, revealing an elliptical shape that “breathes,” or changes shape, as it moves in the body. The discovery, which could lead to new antiviral therapies for the disease and quicker development of vaccines, is featured in the April edition of the peer-reviewed Cell Press structural biology journal Structure.
    “This is critical knowledge we need to fight future pandemics,” said Dmitry Korkin, Harold L. Jurist ’61 and Heather E. Jurist Dean’s Professor of Computer Science and lead researcher on the project. “Understanding the SARS-CoV-2 virus envelope should allow us to model the actual process of the virus attaching to the cell and apply this knowledge to our understanding of the therapies at the molecular level. For instance, how can the viral activity be inhibited by antiviral drugs? How much antiviral blocking is needed to prevent virus-to-host interaction? We don’t know. But this is the best thing we can do right now — to be able to simulate actual processes.”
    Feeding genetic sequencing information and massive amounts of real-world data about the pandemic virus into a supercomputer in Texas, Korkin and his team, working in partnership with a group led by Siewert-Jan Marrink at the University of Groningen, Netherlands, produced a computational model of the virus’s envelope, or outer shell, in “near atomistic detail” that had until now been beyond the reach of even the most powerful microscopes and imaging techniques.
    Essentially, the computer used structural bioinformatics and computational biophysics to create its own picture of what the SARS-CoV-2 particle looks like. And that picture showed that the virus is more elliptical than spherical and can change its shape. Korkin said the work also led to a better understanding of the M proteins in particular: underappreciated and overlooked components of the virus’s envelope.
    The M proteins pair up with copies of each other to form dimers, and they play a role in the particle’s shape-shifting: they keep the overall structure flexible while forming a triangular, mesh-like structure on the interior that makes the particle remarkably resilient, Korkin said. On the exterior, in contrast, the proteins assemble into mysterious filament-like structures that have puzzled scientists who have seen Korkin’s results and will require further study.
    Korkin said the structural model developed by the researchers expands what was already known about the envelope architecture of SARS-CoV-2 and of the coronaviruses behind previous SARS and MERS outbreaks. The computational protocol used to create the model could also be applied to model future coronaviruses more rapidly, he said. A clearer picture of the virus’s structure could reveal crucial vulnerabilities.
    “The envelope properties of SARS-CoV-2 are likely to be similar to other coronaviruses,” he said. “Eventually, knowledge about the properties of coronavirus membrane proteins could lead to new therapies and vaccines for future viruses.”
    The new findings published in Structure were three years in the making and built upon Korkin’s work in the early days of the pandemic to provide the first 3D roadmap of the virus, based on genetic sequence information from the first isolated strain in China.

  • New algorithm keeps drones from colliding in midair

    When multiple drones are working together in the same airspace, perhaps spraying pesticide over a field of corn, there’s a risk they might crash into each other.
    To help avoid these costly crashes, MIT researchers presented a system called MADER in 2020. This multiagent trajectory-planner enables a group of drones to formulate optimal, collision-free trajectories. Each agent broadcasts its trajectory so fellow drones know where it is planning to go. Agents then consider each other’s trajectories when optimizing their own to ensure they don’t collide.
    But when the team tested the system on real drones, they found that if a drone doesn’t have up-to-date information on the trajectories of its partners, it might inadvertently select a path that results in a collision. The researchers revamped their system and are now rolling out Robust MADER, a multiagent trajectory planner that generates collision-free trajectories even when communications between agents are delayed.
    “MADER worked great in simulations, but it hadn’t been tested in hardware. So, we built a bunch of drones and started flying them. The drones need to talk to each other to share trajectories, but once you start flying, you realize pretty quickly that there are always communication delays that introduce some failures,” says Kota Kondo, an aeronautics and astronautics graduate student.
    The algorithm incorporates a delay-check step during which a drone waits a specific amount of time before it commits to a new, optimized trajectory. If it receives additional trajectory information from fellow drones during the delay period, it might abandon its new trajectory and start the optimization process over again.
    When Kondo and his collaborators tested Robust MADER, both in simulations and in flight experiments with real drones, it achieved a 100 percent success rate at generating collision-free trajectories. The drones’ travel time was a bit slower than it would be with some other approaches, but no baseline method could guarantee safety.

    “If you want to fly safer, you have to be careful, so it is reasonable that if you don’t want to collide with an obstacle, it will take you more time to get to your destination. If you collide with something, no matter how fast you go, it doesn’t really matter because you won’t reach your destination,” Kondo says.
    Kondo wrote the paper with Jesus Tordesillas, a postdoc; Parker C. Lusk, a graduate student; Reinaldo Figueroa, Juan Rached, and Joseph Merkel, MIT undergraduates; and senior author Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics and a member of the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Robotics and Automation (ICRA).
    Planning trajectories
    MADER is an asynchronous, decentralized, multiagent trajectory-planner. This means that each drone formulates its own trajectory and that, while all agents must agree on each new trajectory, they don’t need to agree at the same time. This makes MADER more scalable than other approaches, since it would be very difficult for thousands of drones to agree on a trajectory simultaneously. Due to its decentralized nature, the system would also work better in real-world environments where drones may fly far from a central computer.
    With MADER, each drone optimizes a new trajectory using an algorithm that incorporates the trajectories it has received from other agents. By continually optimizing and broadcasting their new trajectories, the drones avoid collisions.

    But perhaps one agent shared its new trajectory several seconds ago, and a fellow agent didn’t receive it right away because the communication was delayed. In real-world environments, signals are often delayed by interference from other devices or environmental factors like stormy weather. Due to this unavoidable delay, a drone might inadvertently commit to a new trajectory that sets it on a collision course.
    Robust MADER prevents such collisions because each agent has two trajectories available. It keeps one trajectory that it knows is safe, which it has already checked for potential collisions. While following that original trajectory, the drone optimizes a new trajectory but does not commit to the new trajectory until it completes a delay-check step.
    During the delay-check period, the drone spends a fixed amount of time repeatedly checking for communications from other agents to see if its new trajectory is safe. If it detects a potential collision, it abandons the new trajectory and starts the optimization process over again.
    The length of the delay-check period depends on the distance between agents and environmental factors that could hamper communications, Kondo says. If the agents are many miles apart, for instance, then the delay-check period would need to be longer.
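    In outline, the delay-check amounts to holding on to the last trajectory known to be safe while a candidate trajectory is vetted against any late-arriving broadcasts. The Python sketch below is a conceptual illustration only, not the authors’ implementation; the drone object, its methods, the conflicts test, and the timing constants are all assumptions made for the example.

    ```python
    import time

    def plan_with_delay_check(drone, optimize, conflicts, delay_check_s=0.2):
        """Conceptual sketch of Robust MADER's delay-check step (not the authors' code).

        The drone keeps flying its last known-safe trajectory while it optimizes a
        new one, and only commits to the candidate after waiting `delay_check_s`
        seconds and confirming that nothing received in the meantime conflicts
        with it; otherwise it discards the candidate and re-optimizes.
        """
        while True:
            candidate = optimize(drone.safe_trajectory, drone.known_trajectories)

            deadline = time.time() + delay_check_s
            safe = True
            while time.time() < deadline and safe:
                for other in drone.poll_incoming_trajectories():  # late broadcasts
                    drone.known_trajectories[other.agent_id] = other
                    if conflicts(candidate, other):
                        safe = False      # candidate would collide: abandon it
                        break
                time.sleep(0.01)

            if safe:
                drone.safe_trajectory = candidate   # commit only after the delay check
                drone.broadcast(candidate)
                return candidate
            # otherwise loop: re-optimize using the newly received trajectories
    ```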
    Completely collision-free
    The researchers tested their new approach by running hundreds of simulations in which they artificially introduced communication delays. In each simulation, Robust MADER was 100 percent successful at generating collision-free trajectories, while all the baselines caused crashes.
    The researchers also built six drones and two aerial obstacles and tested Robust MADER in a multiagent flight environment. They found that, while using the original version of MADER in this environment would have resulted in seven collisions, Robust MADER did not cause a single crash in any of the hardware experiments.
    “Until you actually fly the hardware, you don’t know what might cause a problem. Because we know that there is a difference between simulations and hardware, we made the algorithm robust, so it worked in the actual drones, and seeing that in practice was very rewarding,” Kondo says.
    Drones were able to fly 3.4 meters per second with Robust MADER, although they had a slightly longer average travel time than some baselines. But no other method was perfectly collision-free in every experiment.
    In the future, Kondo and his collaborators want to put Robust MADER to the test outdoors, where many obstacles and types of noise can affect communications. They also want to outfit drones with visual sensors so they can detect other agents or obstacles, predict their movements, and include that information in trajectory optimizations.
    This work was supported by Boeing Research and Technology.

  • Can AI predict how you'll vote in the next election?

    Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you’ll think they were taken by professional photographers. Add thinking and responding like a human to the conga line of capabilities. A recent study from BYU shows that artificial intelligence can respond to complex survey questions much like a real human.
    To determine whether artificial intelligence could substitute for human respondents in survey-style research, a team of political science and computer science professors and graduate students at BYU tested the accuracy of a GPT-3 language model — a model that mimics the complicated relationship between human ideas, attitudes, and the sociocultural contexts of subpopulations.
    In one experiment, the researchers created artificial personas by assigning the AI certain characteristics like race, age, ideology, and religiosity, and then tested whether the artificial personas would vote the same way humans did in the 2012, 2016, and 2020 U.S. presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and humans voted.
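    As a rough illustration of how such an artificial persona might be posed to a language model, the sketch below assembles a persona prompt from ANES-style attributes and asks a model to complete a vote choice. The prompt wording, the attribute fields, and the query_model placeholder are hypothetical and are not taken from the BYU study.

    ```python
    def build_persona_prompt(persona: dict, year: int) -> str:
        """Assemble a first-person persona description that ends in a vote prompt.

        The fields and phrasing are illustrative; the study conditioned GPT-3 on
        characteristics such as race, age, ideology, and religiosity drawn from
        ANES respondents.
        """
        return (
            f"I am a {persona['age']}-year-old {persona['race']} voter. "
            f"Ideologically I consider myself {persona['ideology']}, and religion is "
            f"{persona['religiosity']} in my life. "
            f"In the {year} U.S. presidential election, I voted for"
        )

    def predict_vote(query_model, persona: dict, year: int) -> str:
        """query_model stands in for any text-completion API call."""
        completion = query_model(build_persona_prompt(persona, year), max_tokens=5)
        return completion.strip()

    # Example usage with a hypothetical persona:
    # persona = {"age": 54, "race": "white", "ideology": "conservative",
    #            "religiosity": "very important"}
    # predict_vote(some_completion_fn, persona, 2016)
    ```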
    “I was absolutely surprised to see how accurately it matched up,” said David Wingate, BYU computer science professor, and co-author on the study. “It’s especially interesting because the model wasn’t trained to do political science — it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted.”
    In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found high similarity between nuanced patterns in human and AI responses.
    This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative; and even simulate populations that are difficult to reach. It can be used to test surveys, slogans, and taglines as a precursor to focus groups.
    “We’re learning that AI can help us understand people better,” said BYU political science professor Ethan Busby. “It’s not replacing humans, but it is helping us more effectively study people. It’s about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging.”
    And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions — how much does AI really know? Which populations will benefit from this technology and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?
    While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.
    “We’re going to see positive benefits because it’s going to unlock new capabilities,” said Wingate, noting that AI can help people in many different jobs be more efficient. “We’re also going to see negative things happen because sometimes computer models are inaccurate and sometimes they’re biased. It will continue to churn society.”
    Busby says surveying artificial personas shouldn’t replace the need to survey real people and that academics and other experts need to come together to define the ethical boundaries of artificial intelligence surveying in research related to social science.

  • New chip design to provide greatest precision in memory to date

    Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).
    For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents an increasingly severe problem, and one for which few have patience.
    Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices. Yang’s work falls into the middle — focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation.
    Their new paper in Nature focuses on the understanding of fundamental physics that leads to a drastic increase in the memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT, and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. The demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors (Miao Hu, Qiangfei Xia, and Glenn Ge) to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) of any known memory technology to date. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips serve not only as memory but also as processors, and millions of them working in parallel in a small chip could rapidly run AI tasks while requiring only a small battery.
    The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors to create powerful but energy-efficient chips. The technique uses the positions of atoms to represent information rather than the number of electrons (the current technique used in computations on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, rather than digital, fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ that exists in current computing systems. In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”
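    Much of the energy saving comes from doing the arithmetic where the data already sits: a memristor crossbar stores a weight matrix as analog conductances and produces a matrix-vector product in a single physical step via Ohm’s and Kirchhoff’s laws. The NumPy sketch below only mimics that behavior digitally to show the idea, including quantization to the 2^11 = 2048 levels an 11-bit device could represent; it is an illustration, not TetraMem’s design.

    ```python
    import numpy as np

    def quantize(weights, bits=11):
        """Snap weights onto the discrete conductance levels of a memristive device.

        An 11-bit device distinguishes 2**11 = 2048 levels, so each weight is
        rounded to the nearest level within the array's conductance range.
        """
        levels = 2 ** bits
        w_min, w_max = weights.min(), weights.max()
        step = (w_max - w_min) / (levels - 1)
        return np.round((weights - w_min) / step) * step + w_min

    def crossbar_matvec(conductances, input_voltages):
        """In a real crossbar this sum is carried out physically (currents add up
        along each column), so the data never travels to a separate processor."""
        return conductances @ input_voltages

    # Example: a layer's weights stored in place and applied to an input vector.
    rng = np.random.default_rng(0)
    W = quantize(rng.standard_normal((4, 8)))
    x = rng.standard_normal(8)
    print(crossbar_matvec(W, x))
    ```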
    How it works
    Yang explains that electrons, which are manipulated in traditional chips, are “light.” And this lightness makes them prone to moving around and more volatile. Instead of storing memory through electrons, Yang and collaborators are storing memory in full atoms. Here is why this memory matters. Normally, says Yang, when one turns off a computer, the information in memory is gone — but if you need that memory to run a new computation and your computer needs the information all over again, you have lost both time and energy. This new method, which focuses on activating atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios arise in AI computations, where a stable memory capable of high information density is crucial. Yang imagines that this new tech could enable powerful AI capability in edge devices, such as Google Glass, which he says previously suffered from frequent recharging issues.
    Further, by relying on atoms rather than electrons, the chips can be made smaller. Yang adds that with this new method, there is more computing capacity at a smaller scale. And this method, he says, could offer “many more levels of memory to help increase information density.”
    To put it in context: right now, ChatGPT runs in the cloud. The new innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device. It could make such high-powered tech more affordable and accessible for all sorts of applications.

  • AI could set a new bar for designing hurricane-resistant buildings

    Being able to withstand hurricane-force winds is the key to a long life for many buildings on the Eastern Seaboard and Gulf Coast of the U.S. Determining the right level of winds to design for is tricky business, but support from artificial intelligence may offer a simple solution.
    Equipped with 100 years of hurricane data and modern AI techniques, researchers at the National Institute of Standards and Technology (NIST) have devised a new method of digitally simulating hurricanes. The results of a study published today in Artificial Intelligence for the Earth Systems demonstrate that the simulations can accurately represent the trajectory and wind speeds of a collection of actual storms. The authors suggest that simulating numerous realistic hurricanes with the new approach can help to develop improved guidelines for the design of buildings in hurricane-prone regions.
    State and local laws that regulate building design and construction — more commonly known as building codes — point designers to standardized maps. On these maps, engineers can find the level of wind their structure must handle based on its location and its relative importance (i.e., the bar is higher for a hospital than for a self-storage facility). The wind speeds in the maps are derived from scores of hypothetical hurricanes simulated by computer models, which are themselves based on real-life hurricane records.
    “Imagine you had a second Earth, or a thousand Earths, where you could observe hurricanes for 100 years and see where they hit on the coast, how intense they are. Those simulated storms, if they behave like real hurricanes, can be used to create the data in the maps almost directly,” said NIST mathematical statistician Adam Pintar, a study co-author.
    The researchers who developed the latest maps did so by simulating the complex inner workings of hurricanes, which are influenced by physical parameters such as sea surface temperatures and the Earth’s surface roughness. However, the requisite data on these specific factors is not always readily available.
    More than a decade after those maps were created, advances in AI-based tools and years of additional hurricane records have made an unprecedented approach possible, one that could result in more realistic hurricane wind maps down the road.

    NIST postdoctoral researcher Rikhi Bose, together with Pintar and NIST Fellow Emil Simiu, used these new techniques and resources to tackle the issue from a different angle. Rather than having their model mathematically build a storm from the ground up, the authors of the new study taught it to mimic actual hurricane data with machine learning, Pintar said.
    Studying for a physics exam by only looking at the questions and answers of previous assignments may not play out in a student’s favor, but for powerful AI-based techniques, this type of approach could be worthwhile.
    With enough quality information to study, machine-learning algorithms can construct models based on patterns they uncover within datasets that other methods may miss. Those models can then simulate specific behaviors, such as the wind strength and movement of a hurricane.
    In the new research, the study material came in the form of the National Hurricane Center’s Atlantic Hurricane Database (HURDAT2), which contains information about hurricanes going back more than 100 years, such as the coordinates of their paths and their wind speeds.
    The researchers split data on more than 1,500 storms into sets for training and testing their model. When challenged with concurrently simulating the trajectory and wind of historical storms it had not seen before, the model scored highly.
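    To make that setup concrete, the sketch below holds out whole storms for testing and fits a deliberately simple step model that predicts the 6-hourly change in position and wind speed from the current state. It is a minimal stand-in under assumed data shapes, not the NIST model, which uses far richer machine-learning techniques.

    ```python
    import numpy as np

    def split_storms(storm_ids, test_fraction=0.2, seed=0):
        """Hold out whole storms (not individual time steps) for testing."""
        rng = np.random.default_rng(seed)
        ids = np.array(storm_ids)
        rng.shuffle(ids)
        n_test = int(len(ids) * test_fraction)
        return ids[n_test:], ids[:n_test]

    def fit_step_model(tracks):
        """Fit a linear model for the 6-hourly change in (lat, lon, wind).

        `tracks` is a list of arrays of shape (T, 3) holding latitude, longitude,
        and wind speed at successive HURDAT2 records.
        """
        X = np.vstack([t[:-1] for t in tracks])               # state at step k
        Y = np.vstack([np.diff(t, axis=0) for t in tracks])   # change to step k+1
        coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y, rcond=None)
        return coef

    def simulate(coef, start, n_steps):
        """Roll the step model forward to produce a synthetic storm track."""
        state, out = np.array(start, float), [np.array(start, float)]
        for _ in range(n_steps):
            state = state + np.append(state, 1.0) @ coef
            out.append(state.copy())
        return np.array(out)
    ```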

    “It performs very well. Depending on where you’re looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one, honestly,” Pintar said.
    They also used the model to generate sets of 100 years’ worth of hypothetical storms. It produced the simulations in a matter of seconds, and the authors saw a large degree of overlap with the general behavior of the HURDAT2 storms, suggesting that their model could rapidly produce collections of realistic storms.
    However, there were some discrepancies, such as in the Northeastern coastal states. In these regions, HURDAT2 data was sparse, and thus, the model generated less realistic storms.
    “Hurricanes are not as frequent in, say, Boston as in Miami, for example. The less data you have, the larger the uncertainty of your predictions,” Simiu said.
    As a next step, the team plans to use simulated hurricanes to develop coastal maps of extreme wind speeds as well as quantify uncertainty in those estimated speeds.
    Since the model’s understanding of storms is limited to historical data for now, it cannot simulate the effects that climate change will have on storms of the future. The traditional approach of simulating storms from the ground up is better suited to that task. However, in the short term, the authors are confident that wind maps based on their model — which is less reliant on elusive physical parameters than other models are — would better reflect reality.
    Within the next several years, they aim to produce and propose new maps for inclusion in building standards and codes.

  • Machine learning model helps forecasters improve confidence in storm prediction

    When severe weather is brewing and life-threatening hazards like heavy rain, hail or tornadoes are possible, advance warning and accurate predictions are of utmost importance. Colorado State University weather researchers have given storm forecasters a powerful new tool to improve confidence in their forecasts and potentially save lives.
    Over the last several years, Russ Schumacher, professor in the Department of Atmospheric Science and Colorado State Climatologist, has led a team developing a sophisticated machine learning model for advancing skillful prediction of hazardous weather across the continental United States. First trained on historical records of excessive rainfall, the model is now smart enough to make accurate predictions of events like tornadoes and hail four to eight days in advance — the crucial sweet spot for forecasters to get information out to the public so they can prepare. The model is called CSU-MLP, or Colorado State University-Machine Learning Probabilities.
    Led by research scientist Aaron Hill, who has worked on refining the model for the last two-plus years, the team recently published results on its medium-range (four to eight days) forecasting ability in the American Meteorological Society journal Weather and Forecasting.
    Working with Storm Prediction Center forecasters
    The researchers have now teamed with forecasters at the national Storm Prediction Center in Norman, Oklahoma, to test the model and refine it based on practical considerations from actual weather forecasters. The tool is not a stand-in for the invaluable skill of human forecasters, but rather provides an agnostic, confidence-boosting measure to help forecasters decide whether to issue public warnings about potential weather.
    “Our statistical models can benefit operational forecasters as a guidance product, not as a replacement,” Hill said.

    Israel Jirak, M.S. ’02, Ph.D. ’05, is science and operations officer at the Storm Prediction Center and co-author of the paper. He called the collaboration with the CSU team “a very successful research-to-operations project.”
    “They have developed probabilistic machine learning-based severe weather guidance that is statistically reliable and skillful while also being practically useful for forecasters,” Jirak said. The forecasters in Oklahoma are using the CSU guidance product daily, particularly when they need to issue medium-range severe weather outlooks.
    Nine years of historical weather data
    The model is trained on a very large dataset containing about nine years of detailed historical weather observations over the continental U.S. These data are combined with meteorological retrospective forecasts, which are model “re-forecasts” created from outcomes of past weather events. The CSU researchers pulled the environmental factors from those model forecasts and associated them with past events of severe weather like tornadoes and hail. The result is a model that can run in real time with current weather events and produce a probability of those types of hazards with a four- to eight-day lead time, based on current environmental factors like temperature and wind.
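    In outline, that training pipeline pairs environmental predictors from the re-forecasts with labels recording whether severe weather later occurred, and fits a probabilistic classifier whose output can be read as a hazard probability. The scikit-learn sketch below is a generic stand-in under assumed array shapes, not the published CSU-MLP configuration.

    ```python
    from sklearn.ensemble import GradientBoostingClassifier

    def train_hazard_model(env_features, hazard_occurred):
        """Fit a probabilistic classifier linking re-forecast environments to hazards.

        `env_features` is an (n_samples, n_features) array of environmental
        predictors (e.g., temperature, wind, moisture) pulled from retrospective
        forecasts; `hazard_occurred` is a 0/1 label for whether severe weather
        (tornado, hail, etc.) was observed in the corresponding period.
        """
        model = GradientBoostingClassifier()
        model.fit(env_features, hazard_occurred)
        return model

    def forecast_probability(model, current_environment):
        """Return the probability of a hazard four to eight days out."""
        return model.predict_proba(current_environment.reshape(1, -1))[0, 1]
    ```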
    Ph.D. student Allie Mazurek is working on the project and is seeking to understand which atmospheric data inputs are the most important to the model’s predictive capabilities. “If we can better decompose how the model is making its predictions, we can hopefully better diagnose why the model’s predictions are good or bad during certain weather setups,” she said.
    Hill and Mazurek are working to make the model not only more accurate, but also more understandable and transparent for the forecasters using it.
    For Hill, it’s most gratifying to know that years of work refining the machine learning tool are now making a difference in a public, operational setting.
    “I love fundamental research. I love understanding new things about our atmosphere. But having a system that is providing improved warnings and improved messaging around the threat of severe weather is extremely rewarding,” Hill said.

  • Can a solid be a superfluid? Engineering a novel supersolid state from layered 2D materials

    A collaboration of Australian and European physicists predicts that layered electronic 2D semiconductors can host a curious quantum phase of matter called the supersolid.
    The supersolid is a very counterintuitive phase indeed. It is made up of particles that simultaneously form a rigid crystal and yet at the same time flow without friction since all the particles belong to the same single quantum state.
    A solid becomes ‘super’ when its quantum properties match the well-known quantum properties of superconductors. A supersolid simultaneously has two orders, solid and super: solid because of the spatially repeating pattern of particles, super because the particles can flow without resistance. “Although a supersolid is rigid, it can flow like a liquid without resistance,” explains lead author Dr Sara Conti (University of Antwerp).
    The study was conducted at UNSW (Australia), University of Antwerp (Belgium) and University of Camerino (Italy).
    A 50-Year Journey Towards the Exotic Supersolid
    Geoffrey Chester, a professor at Cornell University, predicted in 1970 that solid helium-4 under pressure should, at low temperatures, display two properties at once: crystalline solid order, with each helium atom at a specific point in a regularly ordered lattice, and Bose-Einstein condensation of the atoms, with every atom in the same single quantum state so that they flow without resistance.

    However, in the five decades since, the Chester supersolid has not been unambiguously detected.
    Alternative approaches have reported supersolid-like phases in cold-atom systems in optical lattices. These are either clusters of condensates or condensates with varying density determined by the trapping geometries. Such supersolid-like phases should be distinguished from the original Chester supersolid, in which each single particle is localised in its place in the crystal lattice purely by the forces acting between the particles.
    The new Australia-Europe study predicts that such a state could instead be engineered in two-dimensional (2D) electronic materials in a semiconductor structure, fabricated with two conducting layers separated by an insulating barrier of thickness d.
    One layer is doped with negatively-charged electrons and the other with positively-charged holes.
    The particles forming the supersolid are interlayer excitons, bound states of an electron and hole tied together by their strong electrical attraction. The insulating barrier prevents fast self-annihilation of the exciton bound pairs. Voltages applied to top and bottom metal ‘gates’ tune the average separation r0 between excitons.

    The research team predicts that excitons in this structure will form a supersolid over a wide range of layer separations and average separations between the excitons. The electrical repulsion between the excitons can constrain them into a fixed crystalline lattice.
    “A key novelty is that a supersolid phase with Bose-Einstein quantum coherence appears at layer separations much smaller than the separation predicted for the non-super exciton solid that is driven by the same electrical repulsion between excitons,” says co-corresponding author Prof David Neilson (University of Antwerp).
    “In this way, the supersolid pre-empts the non-super exciton solid. At still larger separations, the non-super exciton solid eventually wins, and the quantum coherence collapses.”
    “This is an extremely robust state, readily achievable in experimental setups,” adds co-corresponding author Prof Alex Hamilton (UNSW). “Ironically, the layer separations are relatively large and are easier to fabricate than the extremely small layer separations in such systems that have been the focus of recent experiments aimed at maximising the interlayer exciton binding energies.”
    As for detection: it is well known that a superfluid cannot be rotated until it can host a quantum vortex, analogous to a whirlpool. But forming this vortex requires a finite amount of energy, and hence a sufficiently strong rotational force. Up to that point, the measured rotational moment of inertia (the extent to which an object resists rotational acceleration) remains zero. In the same way, a supersolid can be identified by detecting such an anomaly in its rotational moment of inertia.
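    One standard way to quantify that anomaly (a textbook relation for non-classical rotational inertia, not a result specific to this study) is the superfluid fraction

    $$ f_s \;=\; 1 - \frac{I_{\mathrm{measured}}}{I_{\mathrm{classical}}}, $$

    which is zero for a normal solid and becomes nonzero once part of the crystal decouples from the rotation.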
    The research team has reported the complete phase diagram of this system at low temperatures.
    “By changing the layer separation relative to the average exciton spacing, the strength of the exciton-exciton interactions can be tuned to stabilise either the superfluid, or the supersolid, or the normal solid,” says Dr Sara Conti.
    “The existence of a triple point is also particularly intriguing. At this point, the boundaries of supersolid and normal-solid melting, and the supersolid to normal-solid transition, all cross. There should be exciting physics coming from the exotic interfaces separating these domains, for example, Josephson tunnelling between supersolid puddles embedded in a normal background.”

  • Magnon-based computation could signal computing paradigm shift

    Like electronics or photonics, magnonics is an engineering subfield that aims to advance information technologies when it comes to speed, device architecture, and energy consumption. A magnon corresponds to the specific amount of energy required to change the magnetization of a material via a collective excitation called a spin wave.
    Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which involve energy loss through heating (known as Joule heating) of the conductor used. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering, explains, energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar.
    “With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development,” Grundler says. “A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy.”
    This inefficiency, known as the memory wall or Von Neumann bottleneck, has had researchers searching for new computing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a “holy grail.”
    While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN PhD student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology’s support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and — crucially — to reverse the magnetization of the surface nanomagnets.
    “The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored,” Grundler explains.

    A route to in-memory computation
    The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude, and could then be used to write and read data.
    “We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system,” Grundler explains, adding that “nonvolatile” refers to the stable storage of data over long time periods without additional energy consumption.
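    The write/read distinction described here comes down to an amplitude threshold: a strong spin wave reverses the nanomagnets it reaches, while a weak one leaves the stored pattern untouched and can therefore probe it. The toy Python model below illustrates only that thresholding logic; the threshold value and data layout are assumptions, not the LMGN measurement setup.

    ```python
    def apply_spin_wave(states, amplitude, switching_threshold=1.0):
        """Toy model of spin-wave writing (not the LMGN experiment code).

        An above-threshold spin wave reverses nanomagnets from state 0 to state 1
        (switching them back, 'toggle switching', is still being developed); a
        below-threshold wave preserves the stored pattern, which is what makes a
        weak wave usable for read-out.
        """
        if amplitude >= switching_threshold:
            return [1 for _ in states]   # write pulse: magnets are reversed to '1'
        return list(states)              # weak wave: stored states are preserved

    stored = [0, 1, 0, 0]
    stored = apply_spin_wave(stored, amplitude=1.2)    # write
    readout = apply_spin_wave(stored, amplitude=0.3)   # read without disturbing
    ```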
    It’s this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.
    Optimization on the horizon
    Baumgaertl and Grundler have published the groundbreaking results in the journal Nature Communications, and the LMGN team is already working on optimizing their approach.
    “Now that we have shown that spin waves write data by switching the nanomagnets from states 0 to 1, we need to work on a process to switch them back again — this is known as toggle switching,” Grundler says.
    He also notes that theoretically, the magnonics approach could process data in the terahertz range of the electromagnetic spectrum (for comparison, current computers function in the slower gigahertz range). However, they still need to demonstrate this experimentally.
    “The promise of this technology for more sustainable computing is huge. With this publication, we are hoping to reinforce interest in wave-based computation, and attract more young researchers to the growing field of magnonics.”