More stories

  • Artificial intelligence classifies supernova explosions with unprecedented accuracy

    Artificial intelligence is classifying real supernova explosions without the traditional use of spectra, thanks to a team of astronomers at the Center for Astrophysics | Harvard & Smithsonian. The complete data sets and resulting classifications are publicly available for open use.
    By training a machine learning model to categorize supernovae based on their visible characteristics, the astronomers were able to classify 2,315 real supernovae from the Pan-STARRS1 Medium Deep Survey with 82 percent accuracy, without the use of spectra.
    The astronomers developed a software program that classifies different types of supernovae based on their light curves, or how their brightness changes over time. “We have approximately 2,500 supernovae with light curves from the Pan-STARRS1 Medium Deep Survey, and of those, 500 supernovae with spectra that can be used for classification,” said Griffin Hosseinzadeh, a postdoctoral researcher at the CfA and lead author on the first of two papers published in The Astrophysical Journal. “We trained the classifier using those 500 supernovae to classify the remaining supernovae where we were not able to observe the spectrum.”
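    The workflow itself is easy to picture in code. The sketch below is only a schematic of the idea, using random placeholder features and a generic random-forest classifier rather than the team’s released software: train on the roughly 500 spectroscopically labeled events, estimate the accuracy by cross-validation, then classify the events that have only light curves.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Stand-in feature table: one row per supernova, columns such as peak
      # brightness, colour, and rise/decline times extracted from each light
      # curve. The values are random placeholders, not Pan-STARRS1 data.
      n_labeled, n_unlabeled, n_features = 500, 2000, 6
      X_labeled = rng.normal(size=(n_labeled, n_features))
      y_labeled = rng.integers(0, 5, size=n_labeled)    # 5 spectroscopic classes
      X_unlabeled = rng.normal(size=(n_unlabeled, n_features))

      # Train on the spectroscopically classified supernovae ...
      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      print("cross-validated accuracy:",
            cross_val_score(clf, X_labeled, y_labeled, cv=5).mean())

      # ... then assign types to the photometric-only events.
      clf.fit(X_labeled, y_labeled)
      predicted_types = clf.predict(X_unlabeled)
      print("predicted class counts:", np.bincount(predicted_types))
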
    Edo Berger, an astronomer at the CfA, explained that by asking the artificial intelligence to answer specific questions, the results become increasingly accurate. “The machine learning looks for a correlation with the original 500 spectroscopic labels. We ask it to compare the supernovae in different categories: color, rate of evolution, or brightness. By feeding it real existing knowledge, it leads to the highest accuracy, between 80 and 90 percent.”
    Although this is not the first machine learning project for supernova classification, it is the first time that astronomers have had access to a real data set large enough to train an artificial intelligence-based supernova classifier, making it possible to create machine learning algorithms without the use of simulations.
    “If you make a simulated light curve, it means you are making an assumption about what supernovae will look like, and your classifier will then learn those assumptions as well,” said Hosseinzadeh. “Nature will always throw some additional complications in that you did not account for, meaning that your classifier will not do as well on real data as it did on simulated data. Because we used real data to train our classifiers, it means our measured accuracy is probably more representative of how our classifiers will perform on other surveys.” As the classifier categorizes the supernovae, said Berger, “We will be able to study them both in retrospect and in real-time to pick out the most interesting events for detailed follow up. We will use the algorithm to help us pick out the needles and also to look at the haystack.”
    The project has implications not only for archival data, but also for data that will be collected by future telescopes. The Vera C. Rubin Observatory is expected to go online in 2023 and will lead to the discovery of millions of new supernovae each year. This presents both opportunities and challenges for astrophysicists: limited telescope time means only a small fraction of the new discoveries can be classified with spectra.
    “When the Rubin Observatory goes online it will increase our discovery rate of supernovae by 100-fold, but our spectroscopic resources will not increase,” said Ashley Villar, a Simons Junior Fellow at Columbia University and lead author on the second of the two papers, adding that while roughly 10,000 supernovae are currently discovered each year, scientists only take spectra of about 10 percent of those objects. “If this holds true, it means that only 0.1 percent of supernovae discovered by the Rubin Observatory each year will get a spectroscopic label. The remaining 99.9 percent of data will be unusable without methods like ours.”
    Unlike past efforts, where data sets and classifications have been available to only a limited number of astronomers, the data sets from the new machine learning algorithm will be made publicly available. The astronomers have created easy-to-use, accessible software, and also released all of the data from Pan-STARRS1 Medium Deep Survey along with the new classifications for use in other projects. Hosseinzadeh said, “It was really important to us that these projects be useful for the entire supernova community, not just for our group. There are so many projects that can be done with these data that we could never do them all ourselves.” Berger added, “These projects are open data for open science.”
    This project was funded in part by a grant from the National Science Foundation (NSF) and the Harvard Data Science Initiative (HDSI).

  • Catalyst research: Molecular probes require highly precise calculations

    Catalysts are indispensable for many technologies. To further improve heterogeneous catalysts, the complex processes on their surfaces, where the active sites are located, must be analyzed. Scientists at Karlsruhe Institute of Technology (KIT), together with colleagues from Spain and Argentina, have now made decisive progress: as reported in Physical Review Letters, they use calculation methods with so-called hybrid functionals for the reliable interpretation of experimental data.
    Many important technologies, such as processes for energy conversion, emission reduction, or the production of chemicals, work only with suitable catalysts. For this reason, highly efficient materials for heterogeneous catalysis are gaining importance. In heterogeneous catalysis, the material acting as a catalyst and the reacting substances exist in different phases, for instance as a solid and a gas. While material compositions can be determined reliably by various methods, the processes taking place on the catalyst surface are accessible to hardly any analysis method. “But it is these highly complex chemical processes on the outermost surface of the catalyst that are of decisive importance,” says Professor Christof Wöll, Head of KIT’s Institute of Functional Interfaces (IFG). “There, the active sites are located, where the catalyzed reaction takes place.”
    Precise Examination of the Surface of Powder Catalysts
    Among the most important heterogeneous catalysts are cerium oxides, i.e. compounds of the rare-earth metal cerium with oxygen. They exist in powder form and consist of nanoparticles of controlled structure. The shape of the nanoparticles considerably influences the reactivity of the catalyst. To study the processes on the surface of such powder catalysts, researchers recently started to use probe molecules, such as carbon monoxide, that bind to the nanoparticles. These probes are then measured by infrared reflection absorption spectroscopy (IRRAS). Infrared radiation causes the molecules to vibrate, and from the vibration frequencies of the probe molecules, detailed information can be obtained on the type and composition of the catalytic sites. So far, however, interpretation of the experimental IRRAS data has been very difficult, because technologically relevant powder catalysts have many vibration bands whose exact assignment is challenging. Theoretical calculations were of no help, because the deviation from experiment, even for model systems, was so large that experimentally observed vibration bands could not be assigned precisely.
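    To make the link between computed energies and measured bands concrete, the following sketch (with placeholder energies, not results from the study) estimates the stretch frequency of a CO probe molecule from a harmonic fit to an energy-versus-bond-length curve; in an actual workflow the energies would come from electronic-structure calculations such as the hybrid-functional DFT described below.

      import numpy as np
      from scipy.constants import atomic_mass, c, elementary_charge

      # C-O bond lengths (Angstrom) and relative energies (eV) near the minimum.
      # Placeholder values; in practice they would come from electronic-structure
      # calculations of the adsorbed probe molecule.
      r = np.array([1.10, 1.11, 1.12, 1.13, 1.14, 1.15, 1.16])
      E = np.array([0.0522, 0.0232, 0.0058, 0.0, 0.0058, 0.0232, 0.0522])

      # Harmonic fit E(r) ~ a*r^2 + b*r + const  ->  force constant k = 2a.
      a, b, _ = np.polyfit(r, E, 2)
      k = 2.0 * a * elementary_charge / 1e-20      # eV/Angstrom^2 -> N/m

      # Reduced mass of the CO oscillator.
      mu = (12.0 * 16.0) / (12.0 + 16.0) * atomic_mass   # kg

      # Harmonic frequency and wavenumber, the quantity compared with IRRAS bands.
      nu = np.sqrt(k / mu) / (2.0 * np.pi)         # Hz
      wavenumber = nu / (c * 100.0)                # cm^-1
      print(f"force constant ~ {k:.0f} N/m, CO stretch ~ {wavenumber:.0f} cm^-1")
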
    Long Calculation Time — High Accuracy
    Researchers at KIT’s Institute of Functional Interfaces (IFG) and Institute of Catalysis Research and Technology (IKFT), in cooperation with colleagues from Spain and Argentina coordinated by Dr. M. Verónica Ganduglia-Pirovano from the Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, have now identified and solved a major problem of the theoretical analysis. As reported in Physical Review Letters, systematic theoretical studies and validation of the results on model systems revealed that the theoretical methods used so far have some fundamental weaknesses. Such weaknesses generally appear in calculations using density functional theory (DFT), a method that determines the quantum mechanical ground state of a many-electron system from the electron density. The researchers found that the weaknesses can be overcome with so-called hybrid functionals, which combine DFT with the Hartree-Fock method, an approximation method in quantum chemistry. This makes the calculations very complex, but also highly precise. “The calculation times required by these new methods are longer by a factor of 100 than for conventional methods,” says Christof Wöll. “But this drawback is more than compensated by the excellent agreement with the experimental systems.” Using nanoscaled cerium oxide catalysts, the researchers demonstrated this advance, which may contribute to making heterogeneous catalysts more effective and durable.
    The results of the work also represent an important contribution to the new Collaborative Research Center (CRC) “TrackAct — Tracking the Active Site in Heterogeneous Catalysis for Emission Control” at KIT. Professor Christof Wöll and Dr. Yuemin Wang from IFG as well as Professor Felix Studt and Dr. Philipp Pleßow from IKFT are among the principal investigators of this interdisciplinary CRC that is aimed at holistically understanding catalytic processes.

    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT).

  • Longest intergalactic gas filament discovered

    More than half of the matter in our universe has so far remained hidden from us. However, astrophysicists had a hunch about where it might be: in so-called filaments, unfathomably large thread-like structures of hot gas that surround and connect galaxies and galaxy clusters. A team led by the University of Bonn (Germany) has now for the first time observed a gas filament with a length of 50 million light years. Its structure is strikingly similar to the predictions of computer simulations. The observation therefore also confirms our ideas about the origin and evolution of our universe. The results are published in the journal Astronomy & Astrophysics.
    We owe our existence to a tiny aberration. Pretty much exactly 13.8 billion years ago, the Big Bang occurred. It was the beginning of space and time, but also of all the matter that makes up our universe today. Although initially concentrated at one point, this matter expanded at breakneck speed, forming a gigantic gas cloud in which it was almost uniformly distributed.
    Almost, but not completely: In some parts the cloud was a bit denser than in others. And for this reason alone there are planets, stars and galaxies today. This is because the denser areas exerted slightly higher gravitational forces, which drew the gas from their surroundings towards them. More and more matter therefore concentrated at these regions over time. The space between them, however, became emptier and emptier. Over the course of a good 13 billion years, a kind of sponge structure developed: large “holes” without any matter, with areas in between where thousands of galaxies are gathered in a small space, so-called galaxy clusters.
    Fine web of gas threads
    If it really happened that way, the galaxies and clusters should still be connected by remnants of this gas, like the gossamer-thin threads of a spider web. “According to calculations, more than half of all baryonic matter in our universe is contained in these filaments — this is the form of matter of which stars and planets are composed, as are we ourselves,” explains Prof. Dr. Thomas Reiprich from the Argelander Institute for Astronomy at the University of Bonn. Yet it has so far escaped our gaze: because of the enormous extent of the filaments, the matter in them is extremely dilute, just ten particles per cubic meter, which is far less than in the best vacuum we can create on Earth.
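    To put that number in perspective, the short calculation below converts a laboratory vacuum pressure into a particle density via the ideal gas law n = P / (k_B T); the pressure chosen is an assumed, representative ultra-high-vacuum value, not a figure from the study.

      from scipy.constants import Boltzmann

      # Assumed representative ultra-high-vacuum pressure at room temperature.
      P_lab = 1e-9      # Pa (illustrative value, not from the study)
      T = 300.0         # K

      # Ideal-gas number density n = P / (k_B * T).
      n_lab = P_lab / (Boltzmann * T)     # particles per cubic meter
      n_filament = 10.0                   # particles per cubic meter (from the article)

      print(f"lab vacuum: ~{n_lab:.1e} particles/m^3")
      print(f"filament  : ~{n_filament:.0e} particles/m^3, "
            f"about {n_lab / n_filament:.0e} times more dilute")
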
    However, with a new measuring instrument, the eROSITA space telescope, Reiprich and his colleagues were now able to make the gas fully visible for the first time. “eROSITA has very sensitive detectors for the type of X-ray radiation that emanates from the gas in filaments,” explains Reiprich. “It also has a large field of view — like a wide-angle lens, it captures a relatively large part of the sky in a single measurement, and at a very high resolution.” This allows detailed images of such huge objects as filaments to be taken in a comparatively short time.
    Confirmation of the standard model
    In their study, the researchers examined a celestial object called Abell 3391/95. This is a system of three galaxy clusters, which is about 700 million light years away from us. The eROSITA images show not only the clusters and numerous individual galaxies, but also the gas filaments connecting these structures. The entire filament is 50 million light years long. But it may be even larger: the scientists assume that the images show only a section of it.
    “We compared our observations with the results of a simulation that reconstructs the evolution of the universe,” explains Reiprich. “The eROSITA images are strikingly similar to computer-generated graphics. This suggests that the widely accepted standard model for the evolution of the universe is correct.” Most importantly, the data show that the missing matter is probably actually hidden in the filaments.

    Story Source:
    Materials provided by University of Bonn.

  • Tiny quantum computer solves real optimization problem

    Quantum computers have already managed to surpass ordinary computers in solving certain tasks — unfortunately, totally useless ones. The next milestone is to get them to do useful things. Researchers at Chalmers University of Technology, Sweden, have now shown that they can solve a small part of a real logistics problem with their small, but well-functioning quantum computer.
    Interest in building quantum computers has gained considerable momentum in recent years, and feverish work is underway in many parts of the world. In 2019, Google’s research team made a major breakthrough when their quantum computer managed to solve a task far more quickly than the world’s best supercomputer. The downside is that the solved task had no practical use whatsoever — it was chosen because it was judged to be easy to solve for a quantum computer, yet very difficult for a conventional computer.
    Therefore, an important task is now to find useful, relevant problems that are beyond the reach of ordinary computers, but which a relatively small quantum computer could solve.
    “We want to be sure that the quantum computer we are developing can help solve relevant problems early on. Therefore, we work in close collaboration with industrial companies,” says theoretical physicist Giulia Ferrini, one of the leaders of Chalmers University of Technology’s quantum computer project, which began in 2018.
    Together with Göran Johansson, Giulia Ferrini led the theoretical work when a team of researchers at Chalmers, including an industrial doctoral student from the aviation logistics company Jeppesen, recently showed that a quantum computer can solve an instance of a real problem in the aviation industry.
    The algorithm proven on two qubits
    All airlines are faced with scheduling problems. For example, assigning individual aircraft to different routes represents an optimisation problem, one that grows very rapidly in size and complexity as the number of routes and aircraft increases.


    Researchers hope that quantum computers will eventually be better at handling such problems than today’s computers. The basic building block of the quantum computer — the qubit — is based on completely different principles from those underlying the building blocks of today’s computers, allowing quantum computers to handle enormous amounts of information with relatively few qubits.
    However, due to their different structure and function, quantum computers must be programmed in other ways than conventional computers. One proposed algorithm that is believed to be useful on early quantum computers is the so-called Quantum Approximate Optimization Algorithm (QAOA).
    The Chalmers research team has now successfully executed the algorithm on their quantum computer — a processor with two qubits — and showed that it can solve the problem of assigning aircraft to routes. In this first demonstration, the result could be easily verified, as the scale was very small: it involved only two airplanes.
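    As a rough illustration of how QAOA tackles a problem of this size, here is a minimal two-qubit statevector sketch in Python. It is not the Chalmers implementation: the two-aircraft cost function is made up and the circuit is simulated classically. A classical optimiser tunes the angles of one cost layer and one mixer layer so that measuring the two qubits favours conflict-free assignments.

      import numpy as np
      from itertools import product
      from scipy.optimize import minimize

      # Toy assignment problem: two aircraft, each choosing one of two routes
      # (one bit per aircraft). A cost of 1 is incurred if both aircraft end up
      # on the same route. This cost function is purely illustrative.
      basis = list(product([0, 1], repeat=2))
      costs = np.array([1.0 if z1 == z2 else 0.0 for z1, z2 in basis])

      # Pauli-X mixer on each qubit, expanded to the two-qubit register.
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      I = np.eye(2, dtype=complex)
      X_ops = [np.kron(X, I), np.kron(I, X)]

      def qaoa_state(params):
          p = len(params) // 2
          gammas, betas = params[:p], params[p:]
          psi = np.full(len(basis), 0.5, dtype=complex)   # uniform superposition
          for gamma, beta in zip(gammas, betas):
              psi = np.exp(-1j * gamma * costs) * psi     # cost layer (diagonal)
              for Xi in X_ops:                            # exp(-i*beta*X) per qubit
                  psi = np.cos(beta) * psi - 1j * np.sin(beta) * (Xi @ psi)
          return psi

      def expected_cost(params):
          probs = np.abs(qaoa_state(params)) ** 2
          return float(np.dot(costs, probs))

      # Classical outer loop: optimise the p = 1 angles, then read out the state.
      res = minimize(expected_cost, x0=[0.8, 0.4], method="Nelder-Mead")
      probs = np.abs(qaoa_state(res.x)) ** 2
      for bits, prob in zip(basis, probs):
          print(f"assignment {bits}: probability {prob:.2f}")
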
    Potential to handle many aircraft
    With this feat, the researchers were the first to show that the QAOA algorithm can solve the problem of assigning aircraft to routes in practice. They also managed to run the algorithm one level further than anyone before, an achievement that requires very good hardware and accurate control.
    “We have shown that we have the ability to map relevant problems onto our quantum processor. We still have a small number of qubits, but they work well. Our plan has been to first make everything work very well on a small scale, before scaling up,” says Jonas Bylander, senior researcher responsible for the experimental design, and one of the leaders of the project of building a quantum computer at Chalmers.
    The theorists in the research team also simulated solving the same optimisation problem for up to 278 aircraft, which would require a quantum computer with 25 qubits.
    “The results remained good as we scaled up. This suggests that the QAOA algorithm has the potential to solve this type of problem at even larger scales,” says Giulia Ferrini.
    Surpassing today’s best computers would, however, require much larger devices. The researchers at Chalmers have now begun scaling up and are currently working with five quantum bits. The plan is to reach at least 20 qubits by 2021 while maintaining the high quality.

  • How the spread of the internet is changing migration

    The spread of the Internet is shaping migration in profound ways. A McGill-led study of over 150 countries links Internet penetration with migration intentions and behaviours, suggesting that digital connectivity plays a key role in migration decisions and actively supports the migration process.
    Countries with higher proportions of Internet users tend to have more people who are willing to emigrate. At the individual level, the association between Internet use and intention to migrate is stronger among women and those with less education. The association is likewise stronger for economic migrants than for political migrants, according to the team of international researchers from McGill University, the University of Oxford, the University of Calabria, and Bocconi University.
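    For readers who want a concrete picture of this kind of individual-level analysis, here is a minimal sketch using simulated data (not the Gallup, World Bank or ITU data used in the study): a logistic regression with interaction terms tests whether the association between Internet use and the intention to migrate differs by gender and education.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 5_000

      # Synthetic individual-level data, purely illustrative.
      df = pd.DataFrame({
          "internet_use": rng.integers(0, 2, n),
          "female": rng.integers(0, 2, n),
          "low_education": rng.integers(0, 2, n),
      })
      # Simulated migration intention with a stronger Internet effect for women
      # and for the less educated (the pattern reported in the study).
      logit = (-1.5 + 0.4 * df.internet_use
               + 0.3 * df.internet_use * df.female
               + 0.3 * df.internet_use * df.low_education)
      df["intends_to_migrate"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      # Logistic regression with interaction terms.
      model = smf.logit(
          "intends_to_migrate ~ internet_use * female + internet_use * low_education",
          data=df,
      ).fit(disp=False)
      print(model.summary().tables[1])
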
    “The digital revolution brought about by the advent of the Internet has transformed our societies, economies, and way of life. Migration is no exception in this revolution,” says co-author Luca Maria Pesando, an Assistant Professor in the Department of Sociology and Centre on Population Dynamics at McGill University.
    In the study, published in Population and Development Review, the researchers tracked Internet use and migration pathways with data from the World Bank, the International Telecommunication Union, the Global Peace Index, the Arab Barometer, and the Gallup World Poll, an international survey of citizens across 160 countries.
    Their findings underscore the importance of the Internet as an informational channel for migrants who leave their country in search of better opportunities. Unlike political migrants, who might be pushed, for example, by the sudden outbreak of a civil conflict, economic migrants’ decisions are more likely to benefit from access to information provided by the Internet, and more likely to be shaped by aspirations of brighter futures in their destination countries.
    “The Internet not only gives us access to more information; it allows us to easily compare ourselves to others living in other — often wealthier — countries through social media,” says Pesando.
    Case study of Italy
    Looking at migration data in Italy — a country that has witnessed sizeable increases in migrant inflows over the past two decades — the researchers found a strong correlation between Internet use in migrants’ countries of origin and the presence of people from those countries in the Italian population register in the following year. Tracking migrants, including asylum seekers and refugees, passing through the Sant’Anna immigration Centre in Calabria, the researchers also found a link between migrants’ digital skills and knowledge of the Internet and voluntary departure from the Centre in search of better economic opportunities.
    “Our findings contribute to the growing research on digital demography, where Internet-generated data or digital breadcrumbs are used to study migration and other demographic phenomena,” says Pesando. “Our work suggests that the Internet acts not just as an instrument to observe migration behaviors, but indeed actively supports the migration process.”
    As next steps, the research team, which includes Francesco Billari of Bocconi University and Ridhi Kashyap and Valentina Rotondi of University of Oxford, will explore how digital technology and connectivity affect social development outcomes, ranging from women’s empowerment to reproductive health and children’s wellbeing across generations.

    Story Source:
    Materials provided by McGill University.

  • Teaching artificial intelligence to adapt

    Getting computers to “think” like humans is the holy grail of artificial intelligence, but human brains turn out to be tough acts to follow. The human brain is a master of applying previously learned knowledge to new situations and constantly refining what’s been learned. This ability to be adaptive has been hard to replicate in machines.
    Now, Salk researchers have used a computational model of brain activity to simulate this process more accurately than ever before. The new model mimics how the brain’s prefrontal cortex uses a phenomenon known as “gating” to control the flow of information between different areas of neurons. It not only sheds light on the human brain, but could also inform the design of new artificial intelligence programs.
    “If we can scale this model up to be used in more complex artificial intelligence systems, it might allow these systems to learn things faster or find new solutions to problems,” says Terrence Sejnowski, head of Salk’s Computational Neurobiology Laboratory and senior author of the new work, published on November 24, 2020, in Proceedings of the National Academy of Sciences.
    The brains of humans and other mammals are known for their ability to quickly process stimuli — sights and sounds, for instance — and integrate any new information into things the brain already knows. This flexibility to apply knowledge to new situations and continuously learn over a lifetime has long been a goal of researchers designing machine learning programs or artificial brains. Historically, when a machine is taught to do one task, it’s difficult for the machine to learn how to adapt that knowledge to a similar task; instead each related process has to be taught individually.
    In the current study, Sejnowski’s group designed a new computational modeling framework to replicate how neurons in the prefrontal cortex — the brain area responsible for decision-making and working memory — behave during a cognitive test known as the Wisconsin Card Sorting Test. In this task, participants have to sort cards by color, symbol or number — and constantly adapt their answers as the card-sorting rule changes. This test is used clinically to diagnose dementia and psychiatric illnesses but is also used by artificial intelligence researchers to gauge how well their computational models of the brain can replicate human behavior.
    Previous models of the prefrontal cortex performed poorly on this task. The Sejnowski team’s framework, however, integrated how neurons control the flow of information throughout the entire prefrontal cortex via gating, delegating different pieces of information to different subregions of the network. Gating was thought to be important at a small scale — in controlling the flow of information within small clusters of similar cells — but the idea had never been integrated into models at the level of the whole network.
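    The sketch below is a deliberately tiny caricature of that idea, not the published model: a gate vector routes a card-sorting decision through one of three feature pathways, and error feedback shifts the gate whenever the hidden sorting rule changes, so the system re-adapts after each switch.

      import numpy as np

      rng = np.random.default_rng(1)
      features = ["color", "symbol", "number"]

      # Toy gated model: one pathway per card feature and a gate vector that
      # decides which pathway drives the sorting response. This is an
      # illustrative caricature of network-wide gating, not the Salk model.
      gate = np.ones(len(features)) / len(features)

      rule = "color"              # hidden sorting rule, unknown to the model
      switches, errors = 0, 0
      for trial in range(300):
          if trial > 0 and trial % 75 == 0:   # the experimenter changes the rule
              rule = rng.choice([f for f in features if f != rule])
              switches += 1
          chosen = features[int(np.argmax(gate))]   # pathway the gate lets through
          correct = (chosen == rule)
          errors += not correct
          # Error-driven gate update: suppress a pathway after an error,
          # reinforce it after a correct sort, then renormalise.
          k = features.index(chosen)
          gate[k] += 0.3 if correct else -0.3
          gate = np.clip(gate, 0.05, None)
          gate /= gate.sum()

      print(f"rule switches: {switches}, total errors: {errors}")
      print("final gate weights:", dict(zip(features, np.round(gate, 2))))
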
    The new network not only performed as reliably as humans on the Wisconsin Card Sorting Test, but also mimicked the mistakes seen in some patients. When sections of the model were removed, the system showed the same errors seen in patients with prefrontal cortex damage, such as that caused by trauma or dementia.
    “I think one of the most exciting parts of this is that, using this sort of modeling framework, we’re getting a better idea of how the brain is organized,” says Ben Tsuda, a Salk graduate student and first author of the new paper. “That has implications for both machine learning and gaining a better understanding of some of these diseases that affect the prefrontal cortex.”
    If researchers have a better understanding of how regions of the prefrontal cortex work together, he adds, that will help guide interventions to treat brain injury. It could suggest areas to target with deep brain stimulation, for instance.
    “When you think about the ways in which the brain still surpasses state-of-the-art deep learning networks, one of those ways is versatility and generalizability across tasks with different rules,” says study coauthor Kay Tye, a professor in Salk’s Systems Neurobiology Laboratory and the Wylie Vale Chair. “In this new work, we show how gating of information can power our new and improved model of the prefrontal cortex.”
    The team next wants to scale up the network to perform more complex tasks than the card-sorting test and determine whether the network-wide gating gives the artificial prefrontal cortex a better working memory in all situations. If the new approach works under broad learning scenarios, they suspect that it will lead to improved artificial intelligence systems that can be more adaptable to new situations.

    Story Source:
    Materials provided by Salk Institute.

  • Carbon capture's next top model

    In the transition toward clean, renewable energy, there will still be a need for conventional power sources, like coal and natural gas, to ensure steady power to the grid. Researchers across the world are using unique materials and methods that will make those conventional power sources cleaner through carbon capture technology.
    Creating accurate, detailed models is key to scaling up this important work. A recent paper led by the University of Pittsburgh Swanson School of Engineering examines and compares the various modeling approaches for hollow fiber membrane contactors (HFMCs), a type of carbon capture technology. The group analyzed over 150 cited studies of multiple modeling approaches to help researchers choose the technique best suited to their research.
    “HFMCs are one of the leading technologies for post-combustion carbon capture, but we need modeling to better understand them,” said Katherine Hornbostel, assistant professor of mechanical engineering and materials science, whose lab led the analysis. “Our analysis can guide researchers whose work is integral to meeting our climate goals and help them scale up the technology for commercial use.”
    A hollow fiber membrane contactor (HFMC) is a bundle of fibers with exhaust gas flowing on one side and a liquid solvent on the other to trap the carbon dioxide. The paper reviews state-of-the-art methods for modeling carbon capture HFMCs in one, two and three dimensions, comparing them in depth and suggesting directions for future research.
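    As a flavour of what the simplest of these models looks like, here is a one-dimensional plug-flow sketch with assumed parameter values (not numbers from the review): a gas-side CO2 mass balance is integrated along the fiber length, assuming the solvent keeps the CO2 concentration at the membrane interface near zero, and the capture efficiency is read off at the outlet.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative parameter values, not taken from the paper.
      K_ov = 5e-4      # overall mass-transfer coefficient, m/s
      a    = 3000.0    # gas-liquid contact area per unit volume, m^2/m^3
      u    = 0.5       # gas superficial velocity, m/s
      L    = 0.5       # fiber length, m
      c_in = 8.0       # inlet CO2 concentration, mol/m^3

      # Gas-side mass balance along the fiber:  u * dc/dz = -K_ov * a * c
      def dcdz(z, c):
          return -K_ov * a * c / u

      sol = solve_ivp(dcdz, (0.0, L), [c_in])
      c_out = sol.y[0, -1]
      capture = 1.0 - c_out / c_in
      print(f"outlet CO2: {c_out:.2f} mol/m^3, capture efficiency: {capture:.1%}")
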
    “The ideal modeling technique varies depending on the project, but we found that 3D models are qualitatively different in the nature of information they can reveal,” said Joanna Rivero, graduate student working in the Hornbostel Lab and lead author. “Though cost limits their wide use, we identify 3D modeling and scale-up modeling as areas that will greatly accelerate the progress of this technology.”
    Grigorios Panagakos, research engineer and teaching faculty in Carnegie Mellon University’s Department of Chemical Engineering, also brought his expertise in modeling transport phenomena to the review paper.

    Story Source:
    Materials provided by University of Pittsburgh.

  • Information transport in antiferromagnets via pseudospin-magnons

    A team of researchers from the Technical University of Munich, the Walther-Meissner-Institute of the Bavarian Academy of Sciences and Humanities, and the Norwegian University of Science and Technology in Trondheim has discovered an exciting method for controlling spin carried by quantized spin wave excitations in antiferromagnetic insulators.
    Elementary particles carry an intrinsic angular momentum known as their spin. For an electron, the spin can take only two particular values relative to a quantization axis, allowing us to speak of spin-up and spin-down electrons. This intrinsic two-valuedness of the electron spin is at the core of many fascinating effects in physics.
    In today’s information technology, the spin of the electron and the associated magnetic moment are exploited for information storage and readout in magnetic media such as hard disks and magnetic tapes.
    Antiferromagnets: future stars in magnetic data storage?
    Both the storage media and the readout sensors use ferromagnetically ordered materials, in which all magnetic moments align in parallel. However, the moments may also orient in more complex ways. In antiferromagnets, the “antagonist to a ferromagnet,” neighboring moments align in an anti-parallel fashion. While these systems look “non-magnetic” from the outside, they have attracted broad attention because they promise robustness against external magnetic fields and faster control. They are therefore considered the new kids on the block for applications in magnetic storage and unconventional computing.
    One important question in this context is whether and how information can be transported and detected in antiferromagnets. Researchers at the Technical University of Munich, the Walther-Meissner-Institute and the Norwegian University of Science and Technology in Trondheim studied the antiferromagnetic insulator hematite in this respect.


    In this system, charge carriers are absent, which makes it a particularly interesting testbed for novel applications that aim to avoid the dissipation caused by a finite electrical resistance. The scientists discovered a new effect unique to the transport of antiferromagnetic excitations, which opens up new possibilities for information processing with antiferromagnets.
    Unleashing the pseudospin in antiferromagnets
    Dr Matthias Althammer, the lead researcher on the project, describes the effect as follows: “In the antiferromagnetic phase, neighboring spins are aligned in an anti-parallel fashion. However, there are quantized excitations called magnons. Those carry information encoded in their spin and can propagate in the system. Due to the two antiparallel-coupled spin species in the antiferromagnet, the excitation is of a complex nature; however, its properties can be cast into an effective spin, a pseudospin. We could experimentally demonstrate that we can manipulate this pseudospin and its propagation with a magnetic field.”
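    As a generic illustration of what manipulating a pseudospin with a field means, the sketch below integrates the textbook precession equation dS/dt = ω × S for an effective spin; the precession frequency is an arbitrary illustrative value, and this is not a model of the hematite transport experiment itself.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Precession of an effective (pseudo)spin S about a field-defined axis:
      #   dS/dt = omega x S
      # The frequency is an arbitrary illustrative value; this is a cartoon of
      # field-controlled pseudospin rotation, not the hematite experiment.
      omega = np.array([0.0, 0.0, 2.0 * np.pi])   # precession vector, rad per unit time

      def precession(t, S):
          return np.cross(omega, S)

      S0 = [1.0, 0.0, 0.0]                        # pseudospin initially along x
      sol = solve_ivp(precession, (0.0, 1.0), S0,
                      t_eval=np.linspace(0.0, 1.0, 9), rtol=1e-8)

      for t, Sx, Sy in zip(sol.t, sol.y[0], sol.y[1]):
          print(f"t = {t:5.3f}   Sx = {Sx:+.2f}   Sy = {Sy:+.2f}")
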
    Dr Akashdeep Kamra, the lead theoretician from NTNU in Trondheim, adds: “This mapping of the excitations of an antiferromagnet onto a pseudospin enables an understanding and a powerful approach that has been the crucial foundation for treating transport phenomena in electronic systems. In our case, this enables us to describe the dynamics of the system in a much simpler manner, while still maintaining a full quantitative description of the system. Most importantly, the experiments provide a proof-of-concept for the pseudospin, a concept which is closely related to fundamental quantum mechanics.”
    Unlocking the full potential of antiferromagnetic magnons
    This first experimental demonstration of magnon pseudospin dynamics in an antiferromagnetic insulator not only confirms the theoretical conjectures on magnon transport in antiferromagnets, but also provides an experimental platform for expanding towards rich electronics inspired phenomena.
    “We may be able to realize fascinating new stuff such as the magnon analogue of a topological insulator in antiferromagnetic materials,” points out Rudolf Gross, director of the Walther-Meissner-Institute, Professor for Technical Physics (E23) at the Technical University of Munich and co-speaker of the cluster of excellence Munich Center for Quantum Science and Technology (MCQST). “Our work provides an exciting perspective for quantum applications based on magnons in antiferromagnets.”
    The research was funded by the Deutsche Forschungsgemeinschaft (DFG) via the cluster of excellence Munich Center for Quantum Science and Technology (MCQST) and by the Research Council of Norway.