More stories

  • Characterizing super-semi sandwiches for quantum computing

    “There is an international race to identify the best platform for controlling and processing quantum information for quantum computers, where superconductors play a prominent role,” says Duc Phan, PhD student at the Institute of Science and Technology Austria (ISTA) and first author of a new paper now published in Physical Review Letters. “Microsoft is working on topological qubits using superconductor-semiconductor sandwiches. However, before we can use them, we must understand the fundamental physics behind them.”
    Phan and his ISTA colleagues Jorden Senior and Andrew Higginbotham from the Condensed Matter and Quantum Circuits group conducted this study in close collaboration with partners from New York University and with theory support from Areg Ghazaryan and Maksym Serbyn from ISTA’s Quantum Dynamics group. They developed a technique to probe the quantum interactions in super-semi sandwiches, paving the way for new applications such as topological quantum bits based on so-called Majorana zero modes.
    Cold Environment
    For their experiment, the researchers created a microscopic sandwich made of an aluminium (Al) superconductor on top of an indium arsenide (InAs) semiconductor. Superconductors are materials that have no electrical resistance; to achieve this state, they must be cooled to temperatures close to absolute zero. Semiconductors like InAs or silicon can either insulate or conduct electricity depending on their environment and the applied electric field.
    Just like a conventional sandwich that becomes more than the sum of its parts, the combined properties of Al and InAs become modified in super-semi sandwiches. At the interface between the Al superconductor and the InAs semiconductor, the proximity effect spills superconductivity into the semiconductor, creating new quantum states there. Until now, however, researchers have had a hard time studying these states: they could not be probed directly because the Al superconducting layer conceals them.
    “We found that by sending a current alternating billions of times a second through the vicinity of the sandwich, we could make the superconductor’s veil partially transparent and get feedback about the properties of the semiconductor,” explains Senior. “We also applied a magnetic field to create new quantum states we were looking for and developed a new model that explained our observations.”
    A new level of detail
    This first experimental result of the Higginbotham group since its establishment at ISTA lays the groundwork to study superconductor-semiconductor hybrid structures at a new level of detail. “The parameters we can infer from this could provide much-needed guidance to construct topological quantum bits based on Majorana zero modes,” says Senior. He also highlights that “ISTA is very well placed in this developing field because here experimental expertise, theoretical understanding, as well as excellent infrastructure provided by the state-of-the-art clean room — the kitchen for sandwich production — come together.”
    Phan and his colleagues are excited about what insights they will gain with their novel probing technique and what future applications may become possible once the fundamental physics of this exotic sandwich has been understood.
    Story Source:
    Materials provided by Institute of Science and Technology Austria. Note: Content may be edited for style and length.

  • New technology to make charging electric cars as fast as pumping gas

    Whether it’s photovoltaics or fusion, sooner or later, human civilization must turn to renewable energy. This is deemed inevitable considering the ever-growing energy demands of humanity and the finite nature of fossil fuels. As such, much research has been pursued to develop alternative sources of energy, most of which use electricity as the main energy carrier. The extensive R&D in renewables has been accompanied by gradual societal changes as the world adopted new products and devices running on renewable energy. The most striking recent change is the rapid adoption of electric vehicles. While they were hardly seen on the roads even 10 years ago, millions of electric cars are now being sold annually. The electric car market is one of the most rapidly growing sectors, and it helped propel Elon Musk to become the wealthiest man in the world.
    Unlike traditional cars, which derive energy from the combustion of hydrocarbon fuels, electric vehicles rely on batteries as the storage medium for their energy. For a long time, batteries had far lower energy densities than hydrocarbons, which resulted in the very low ranges of early electric vehicles. However, gradual improvements in battery technology eventually brought the driving ranges of electric cars to acceptable levels in comparison with gasoline-burning cars. It is no exaggeration to say that improved battery storage was one of the main technical bottlenecks that had to be solved in order to kickstart the current electric vehicle revolution.
    However, despite the vast improvements in battery technology, today consumers of electric vehicles face another difficulty — slow battery charging speed. Currently, cars take about 10 hours to fully recharge at home. Even the fastest superchargers at the charging stations require up to 20-40 minutes to fully recharge the vehicles. This creates additional costs and inconvenience to the customers.
    To address this problem, scientists looked for answers in the mysterious field of quantum physics. Their search led to the discovery that quantum technologies may offer new mechanisms for charging batteries at a faster rate. The concept of the “quantum battery” was first proposed in a seminal paper published by Alicki and Fannes in 2012. It was theorized that quantum resources, such as entanglement, could be used to vastly speed up the battery charging process by charging all cells within the battery simultaneously in a collective manner.
    This is particularly exciting as modern large-capacity batteries can contain numerous cells. Such collective charging is not possible in classical batteries, where the cells are charged in parallel, independently of one another. The advantage of collective over parallel charging can be measured by a ratio called the ‘quantum charging advantage’. Later, around 2017, it was noticed that there can be two possible sources behind this quantum advantage — namely ‘global operation’ (in which all the cells talk to all others simultaneously, i.e., “all sitting at one table”) and ‘all-to-all coupling’ (every cell can talk with every other, but only in pairs, i.e., “many discussions, but every discussion has only two participants”). However, it was unclear whether both of these sources are necessary and whether there are limits to the charging speed that can be achieved.
    Recently, scientists from the Center for Theoretical Physics of Complex Systems within the Institute for Basic Science (IBS) further explored these questions. The paper, which was chosen as an “Editor’s Suggestion” in the journal Physical Review Letters, showed that all-to-all coupling is irrelevant in quantum batteries and that the presence of global operations is the only ingredient in the quantum advantage. The group went further to pinpoint the exact source of this advantage while ruling out any other possibilities and even provided an explicit way of designing such batteries.
    In addition, the group was able to precisely quantify how large a charging speedup can be achieved in this scheme. While the maximum charging speed increases linearly with the number of cells in classical batteries, the study showed that quantum batteries employing global operations can achieve quadratic scaling in charging speed. To illustrate this, consider a typical electric vehicle with a battery that contains about 200 cells. Employing quantum charging would lead to a 200-fold speedup over classical batteries, which means that at-home charging time would be cut from 10 hours to about 3 minutes. At high-speed charging stations, the charge time would be cut from 30 minutes to mere seconds.
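    The scaling claim above can be checked with quick arithmetic. The sketch below is illustrative only: it assumes, as the article states, that classical charging speed grows linearly with the number of cells N while global-operation quantum charging grows quadratically, so the speedup factor works out to N itself.

```python
# Illustrative arithmetic for the quantum charging advantage described above.
# Assumption (from the article): classical charging scales as N, quantum
# charging with global operations scales as N**2, so the speedup of quantum
# over classical charging is N**2 / N = N.

def quantum_speedup(n_cells: int) -> int:
    """Speedup factor of quantum (quadratic) over classical (linear) charging."""
    return n_cells

N = 200                           # typical EV battery pack size cited above
speedup = quantum_speedup(N)      # 200x faster than classical charging
home_minutes = 10 * 60 / speedup  # a 10-hour home charge shrinks to...

print(speedup)       # 200
print(home_minutes)  # 3.0 (minutes)
```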
    Researchers say that consequences can be far-reaching and that the implications of quantum charging can go well beyond electric cars and consumer electronics. For example, it may find key uses in future fusion power plants, which require large amounts of energy to be charged and discharged in an instant. Of course, quantum technologies are still in their infancy and there is a long way to go before these methods can be implemented in practice. Research findings such as these, however, create a promising direction and can incentivize the funding agencies and businesses to further invest in these technologies. If employed, it is believed that quantum batteries would completely revolutionize the way we use energy and take us a step closer to our sustainable future.
    Story Source:
    Materials provided by Institute for Basic Science. Note: Content may be edited for style and length.

  • Artificial intelligence paves the way to discovering new rare-earth compounds

    Artificial intelligence advances how scientists explore materials. Researchers from Ames Laboratory and Texas A&M University trained a machine-learning (ML) model to assess the stability of rare-earth compounds. This work was supported by the Laboratory Directed Research and Development (LDRD) program at Ames Laboratory. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities.
    Ames Lab has been a leader in rare-earths research since the middle of the 20th century. Rare earth elements have a wide range of uses including clean energy technologies, energy storage, and permanent magnets. Discovery of new rare-earth compounds is part of a larger effort by scientists to expand access to these materials.
    The present approach is based on machine learning (ML), a form of artificial intelligence (AI), which is driven by computer algorithms that improve through data usage and experience. Researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.
    High-throughput screening is a computational scheme that allows a researcher to test hundreds of models quickly. DFT is a quantum mechanical method used to investigate the thermodynamic and electronic properties of many-body systems. Based on this collection of information, the developed ML model uses regression learning to assess the phase stability of compounds.
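    As a rough illustration of the regression-learning step, the sketch below fits a linear model to invented DFT-style data (a single descriptor versus formation energy) and flags compositions with negative predicted formation energy as candidate stable phases. The descriptor, the data values, and the stability threshold are all hypothetical; the actual model draws on many features from the RIC 2.0 database.

```python
# Minimal sketch of regression-based phase-stability assessment.
# All data and the descriptor here are synthetic and for illustration only.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training data: (descriptor value, formation energy in eV/atom)
descriptor = [0.0, 0.25, 0.5, 0.75, 1.0]
formation_energy = [0.20, 0.05, -0.10, -0.25, -0.40]

a, b = fit_linear(descriptor, formation_energy)

def predict_stable(x, threshold=0.0):
    """Flag a composition as stable if its predicted formation energy
    falls below the (hypothetical) stability threshold."""
    return a * x + b < threshold

print(predict_stable(0.9))  # True: strongly negative predicted energy
print(predict_stable(0.1))  # False: predicted energy is positive
```

A production model would replace the single descriptor with hundreds of structural and chemical features and the linear fit with a nonlinear regressor, but the assess-by-threshold logic is the same.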
    Tyler Del Rose, an Iowa State University graduate student, conducted much of the foundational research needed for the database, writing algorithms to search the web for information to supplement the database and the DFT calculations. He also worked on experimental validation of the AI predictions and helped improve the ML-based models by ensuring they are representative of reality.
    “Machine learning is really important here because when we are talking about new compositions, ordered materials are all very well known to everyone in the rare earth community,” said Ames Laboratory Scientist Prashant Singh, who led the DFT plus machine learning effort with Guillermo Vazquez and Raymundo Arroyave. “However, when you add disorder to known materials, it’s very different. The number of compositions becomes significantly larger, often thousands or millions, and you cannot investigate all the possible combinations using theory or experiments.”
    Singh explained that the material analysis is based on a discrete feedback loop in which the AI/ML model is updated using a new DFT database built on real-time structural and phase information obtained from experiments. This process ensures that information is carried from one step to the next and reduces the chance of making mistakes.
    Yaroslav Mudryk, the project supervisor, said that the framework was designed to explore rare-earth compounds because of their technological importance, but its application is not limited to rare-earths research. The same approach can be used to train ML models to predict the magnetic properties of compounds, develop process controls for transformative manufacturing, and optimize mechanical behaviors.
    “It’s not really meant to discover a particular compound,” Mudryk said. “It was, how do we design a new approach or a new tool for discovery and prediction of rare earth compounds? And that’s what we did.”
    Mudryk emphasized that this work is just the beginning. The team is exploring the full potential of this method, but they are optimistic that there will be a wide range of applications for the framework in the future.
    Story Source:
    Materials provided by DOE/Ames Laboratory. Note: Content may be edited for style and length.

  • Researchers develop the world's first power-free frequency tuner using nanomaterials

    In a paper published today in Nature Communications, researchers at the University of Oxford and the University of Pennsylvania have found a power-free and ultra-fast way of frequency tuning using functional nanowires.
    Think of an orchestra warming up before the performance. The oboe starts to play a perfect A note at a frequency of 440 Hz while all the other instruments adjust themselves to that frequency. Telecommunications technology relies on this very concept of matching the frequencies of transmitters and receivers. In practice, this is achieved when both ends of the communication link tune into the same frequency channel.
    In today’s colossal communications networks, the ability to reliably synthesise as many frequencies as possible and to rapidly switch from one to another is paramount for seamless connectivity.
    Researchers at the University of Oxford and the University of Pennsylvania have fabricated vibrating nanostrings of a chalcogenide glass (germanium telluride) that resonate at predetermined frequencies, just like guitar strings. To tune the frequency of these resonators, the researchers switch the atomic structure of the material, which in turn changes the mechanical stiffness of the material itself.
    This differs from existing approaches, which apply mechanical stress to the nanostrings, much like tuning a guitar with its tuning pegs. That approach translates into higher power consumption because the “pegs” are not permanent: a voltage must be maintained to hold the tension.
    Utku Emre Ali, who completed the research as part of his doctoral work at the University of Oxford, said:
    ‘By changing how atoms bond with each other in these glasses, we are able to change the Young’s modulus within a few nanoseconds. Young’s modulus is a measure of stiffness, and it directly affects the frequency at which the nanostrings vibrate.’
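    The relationship in the quote can be sketched with a back-of-the-envelope calculation. For a stiffness-dominated resonator, the resonant frequency scales as the square root of the Young's modulus E, so a change in E maps directly to a frequency shift. The baseline frequency and modulus ratio below are assumed values for illustration, not the paper's measurements.

```python
import math

# How a change in Young's modulus E shifts the resonant frequency of a
# stiffness-dominated nanostring, assuming f is proportional to sqrt(E).
# All numbers are illustrative assumptions, not measured values.

def frequency_ratio(E_new: float, E_old: float) -> float:
    """Ratio f_new / f_old for a resonator with f proportional to sqrt(E)."""
    return math.sqrt(E_new / E_old)

f0 = 100e6  # assumed 100 MHz baseline resonance
ratio = frequency_ratio(2.0, 1.0)  # suppose switching doubles the modulus

# Doubling E raises the frequency by a factor of sqrt(2), about 41%.
print(round(f0 * ratio / 1e6, 1))  # 141.4 (MHz)
```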

  • Making memory serve correctly: Fixing an inherent problem in next-generation magnetic RAM

    With the advent of the Internet of Things (IoT) era, many researchers are focused on making most of the technologies involved more sustainable. To reach this target of ‘green IoT,’ some of the building blocks of conventional electronics will have to be improved or radically changed to make them not only faster, but also more energy efficient. In line with this reasoning, many scientists worldwide are currently trying to develop and commercialize a new type of random-access memory (RAM) that will enable ultra-low-power electronics: magnetic RAMs.
    Each memory cell in a magnetic RAM stores either a ‘1’ or a ‘0’ depending on whether the magnetic orientations of its two magnetic layers are parallel or opposite to each other. Various types of magnetic RAM exist, and they mainly differ in how they modify the magnetic orientation of the layers when writing to a memory cell. In particular, spin-transfer torque RAM (STT-RAM) is one type of magnetic memory that is already being commercialized. However, to achieve even lower write currents and higher reliability, a new type of magnetic memory called spin-orbit torque RAM (SOT-RAM) is being actively researched.
    In SOT-RAM, by leveraging spin-orbit interactions, the write current can be immensely reduced, which lowers power consumption. Moreover, since the memory readout and write current paths are different, researchers initially thought that the potential disturbances on the stored values would also be small when either reading or writing. Unfortunately, this turned out not to be the case.
    In 2017, in a study led by Professor Takayuki Kawahara of Tokyo University of Science, Japan, researchers reported that SOT-RAMs face an additional source of disturbance when reading a stored value. In conventional SOT-RAMs, the readout current actually shares part of the path of the write current. When reading a value, the readout operation generates unbalanced spin currents due to the spin Hall effect. This can unintentionally flip the stored bit if the effect is large enough, making reading in SOT-RAMs less reliable.
    To address this problem, Prof. Kawahara and colleagues conducted another study, which was recently published in IEEE Transactions on Magnetics. The team came up with a new reading method for SOT-RAMs that can nullify this new source of readout disturbance. In short, their idea is to alter the original SOT-RAM structure to create a bi-directional read path. When reading a value, the read current flows out of the magnetic layers in two opposite directions simultaneously. In turn, the disturbances produced by the spin currents generated on each side end up cancelling each other out. An explainer video on the same topic can be watched here: https://youtu.be/Gbz4rDOs4yQ.
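    The cancellation idea can be illustrated with a toy model (not the paper's simulation): if the disturbance is proportional to the signed read current through the shared path, splitting the read current into two equal and opposite halves drives the net disturbance to zero. The proportionality constant and current values below are hypothetical.

```python
# Toy model of the bi-directional read path described above.
# Assumption: the readout disturbance is proportional to the signed
# read current flowing through the shared write path.

K = 0.5  # hypothetical disturbance per unit of current

def disturbance(currents):
    """Net disturbance from a list of signed read currents (arbitrary units)."""
    return K * sum(currents)

conventional = disturbance([1.0])         # single uni-directional read path
bidirectional = disturbance([0.5, -0.5])  # equal and opposite halves cancel

print(conventional)   # 0.5
print(bidirectional)  # 0.0
```

In a real array the two halves are never perfectly balanced, which is why the study also checked that the cancellation still suppresses the disturbance sufficiently across about 1,000 connected cells.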
    In addition to cementing the theory behind this new source of readout disturbance, the researchers conducted a series of simulations to verify the effectiveness of their proposed method. They tested three different types of ferromagnetic materials for the magnetic layers and various device shapes. The results were very favorable, as Prof. Kawahara remarks: “We confirmed that the proposed method reduces the readout disturbance by at least 10 times for all material parameters and device geometries compared with the conventional read path in SOT-RAM.”
    To top things off, the research team checked the performance of their method in the type of realistic array structure that would be used in an actual SOT-RAM. This test is important because the read paths in an array structure would not be perfectly balanced depending on each memory cell’s position. The results show that a sufficient readout disturbance reduction is possible even when connecting about 1,000 memory cells together. The team is now working towards improving their method to reach a higher number of integrated cells.
    This study could pave the way toward a new era in low-power electronics, from personal computers and portable devices to large-scale servers. Satisfied with what they have achieved, Prof. Kawahara remarks: “We expect next-generation SOT-RAMs to employ write currents an order of magnitude lower than current STT-RAMs, resulting in significant power savings. The results of our work will help solve one of the inherent problems of SOT-RAMs, which will be essential for their commercialization.” 
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Smoke from Australia’s intense fires in 2019 and 2020 damaged the ozone layer

    Towers of smoke that rose high into the stratosphere during Australia’s “black summer” fires in 2019 and 2020 destroyed some of Earth’s protective ozone layer, researchers report in the March 18 Science.

    Chemist Peter Bernath of Old Dominion University in Norfolk, Va., and his colleagues analyzed data collected in the lower stratosphere during 2020 by a satellite instrument called the Atmospheric Chemistry Experiment. It measures how different particles in the atmosphere absorb light at different wavelengths. Such absorption patterns are like fingerprints, identifying what molecules are present in the particles.

    The team’s analyses revealed that the particles of smoke, shot into the stratosphere by fire-fueled thunderstorms called pyrocumulonimbus clouds, contained a variety of mischief-making organic molecules (SN: 12/15/20). The molecules, the team reports, kicked off a series of chemical reactions that altered the balances of gases in Earth’s stratosphere to a degree never before observed in 15 years of satellite measurements. That shuffle included boosting levels of chlorine-containing molecules that ultimately ate away at the ozone.

    Ozone concentrations in the stratosphere initially increased from January to March 2020, due to similar chemical reactions — sometimes with the contribution of wildfire smoke — that produce ozone pollution at ground level (SN: 12/8/21). But from April to December 2020, the ozone levels not only fell, but sank below the average ozone concentration from 2005 to 2019.

    Earth’s ozone layer shields the planet from much of the sun’s ultraviolet radiation. Once depleted by human emissions of chlorofluorocarbons and other ozone-damaging substances, the layer has been showing signs of recovery thanks to the Montreal Protocol, an international agreement to reduce the atmospheric concentrations of those substances (SN: 2/10/21).

    But the increasing frequency of large wildfires due to climate change — and their ozone-destroying potential — could become a setback for that rare climate success story, the researchers say (SN: 3/4/20).

  • AI provides accurate breast density classification

    An artificial intelligence (AI) tool can accurately and consistently classify breast density on mammograms, according to a study in Radiology: Artificial Intelligence.
    Breast density reflects the amount of fibroglandular tissue in the breast commonly seen on mammograms. High breast density is an independent breast cancer risk factor, and its masking of underlying lesions reduces the sensitivity of mammography. Consequently, many U.S. states have laws requiring that women with dense breasts be notified after a mammogram, so that they can choose to undergo supplementary tests to improve cancer detection.
    In clinical practice, breast density is visually assessed on two-view mammograms, most commonly with the American College of Radiology Breast Imaging-Reporting and Data System (BI-RADS) four-category scale, ranging from Category A for almost entirely fatty breasts to Category D for extremely dense. The system has limitations, as visual classification is prone to inter-observer variability, or the differences in assessments between two or more people, and intra-observer variability, or the differences that appear in repeated assessments by the same person.
    To overcome this variability, researchers in Italy developed software for breast density classification based on deep learning with convolutional neural networks, a sophisticated type of AI capable of discerning subtle patterns in images beyond the capabilities of the human eye. The researchers trained the software, known as TRACE4BDensity, under the supervision of seven experienced radiologists who independently visually assessed 760 mammographic images.
    External validation of the tool was performed by the three radiologists closest to the consensus on a dataset of 384 mammographic images obtained from a different center.
    TRACE4BDensity showed 89% accuracy in distinguishing between low density (BI-RADS categories A and B) and high density (BI-RADS categories C and D) breast tissue, with an agreement of 90% between the tool and the three readers. All disagreements were in adjacent BI-RADS categories.
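    The binary grouping used in that validation can be written down directly: BI-RADS categories A and B count as low density, C and D as high density. A minimal sketch:

```python
# Map a BI-RADS breast density category (A-D) to the binary low/high
# grouping used in the validation study described above.

def density_class(birads: str) -> str:
    mapping = {"A": "low", "B": "low", "C": "high", "D": "high"}
    return mapping[birads.upper()]

print(density_class("b"))  # low
print(density_class("D"))  # high
```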
    “The particular value of this tool is the possibility to overcome the suboptimal reproducibility of visual human density classification that limits its practical usability,” said study co-author Sergio Papa, M.D., from the Centro Diagnostico Italiano in Milan, Italy. “To have a robust tool that proposes the density assignment in a standardized fashion may help a lot in decision-making.”
    Such a tool would be particularly valuable, the researchers said, as breast cancer screening becomes more personalized, with density assessment accounting for one important factor in risk stratification.
    “A tool such as TRACE4BDensity can help us advise women with dense breasts to have, after a negative mammogram, supplemental screening with ultrasound, MRI or contrast-enhanced mammography,” said study co-author Francesco Sardanelli, M.D., from the IRCCS Policlinico San Donato in San Donato, Italy.
    The researchers plan additional studies to better understand the full capabilities of the software.
    “We would like to further assess the AI tool TRACE4BDensity, particularly in countries where regulations on women density is not active, by evaluating the usefulness of such tool for radiologists and patients,” said study co-author Christian Salvatore, Ph.D., senior researcher, University School for Advanced Studies IUSS Pavia and co-founder and chief executive officer of DeepTrace Technologies.
    The study is titled “Development and Validation of an AI-driven Mammographic Breast Density Classification Tool Based on Radiologist Consensus.” Collaborating with Drs. Papa, Sardanelli and Salvatore were Veronica Magni, M.D., Matteo Interlenghi, M.Sc., Andrea Cozzi, M.D., Marco Alì, Ph.D., Alcide A. Azzena, M.D., Davide Capra, M.D., Serena Carriero, M.D., Gianmarco Della Pepa, M.D., Deborah Fazzini, M.D., Giuseppe Granata, M.D., Caterina B. Monti, M.D., Ph.D., Giulia Muscogiuri, M.D., Giuseppe Pellegrino, M.D., Simone Schiaffino, M.D., and Isabella Castiglioni, M.Sc., M.B.A.

  • Mathematical paradoxes demonstrate the limits of AI

    Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.
    Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realise when it’s making a mistake than to produce a correct result.
    Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
    The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain specific conditions. Their results are reported in the Proceedings of the National Academy of Sciences.
    Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.
    “Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority.”
    The paradox identified by the researchers traces back to two 20th-century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.