More stories

  • Scientists develop 'mini-brains' to help robots recognize pain and to self-repair

    Using a brain-inspired approach, scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a way to give robots the artificial intelligence (AI) to recognise pain and to self-repair when damaged.
    The system uses AI-enabled sensor nodes to process and respond to ‘pain’ arising from pressure exerted by a physical force. It also allows the robot to detect and repair its own damage when it sustains minor ‘injuries’, without the need for human intervention.
    Currently, robots use a network of sensors to generate information about their immediate environment. For example, a disaster rescue robot uses camera and microphone sensors to locate a survivor under debris and then pulls the person out with guidance from touch sensors on its arms. A factory robot working on an assembly line uses vision to guide its arm to the right location and touch sensors to determine whether the object is slipping when picked up.
    Today’s sensors typically do not process information themselves but send it to a single large, powerful, central processing unit where learning occurs. As a result, existing robots are usually heavily wired, which results in delayed response times. They are also susceptible to damage that requires maintenance and repair, which can be lengthy and costly.
    The new NTU approach embeds AI into the network of sensor nodes, connected to multiple small, less powerful processing units that act like ‘mini-brains’ distributed on the robotic skin. This means learning happens locally, and the wiring requirements and response time for the robot are reduced five to ten times compared to conventional robots, say the scientists.
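    To make that architectural contrast concrete, here is a minimal Python sketch (entirely illustrative and our own, not NTU's system; the node behaviour, threshold values and adaptation rule are invented): each sensor node decides locally whether a pressure reading counts as ‘pain’ and transmits only a compact event, instead of streaming every raw reading to one central processor.

    ```python
    # Illustrative sketch only (not NTU's implementation): contrasts node-local
    # processing with sending every raw reading to a central unit. All names,
    # thresholds and the adaptation rule below are invented for illustration.
    import random

    class LocalSensorNode:
        """A 'mini-brain' that processes its own pressure readings locally."""

        def __init__(self, node_id: int, pain_threshold: float = 0.7):
            self.node_id = node_id
            self.pain_threshold = pain_threshold  # assumed normalised pressure level

        def process(self, pressure: float):
            # Local decision: only a compact event ever leaves the node.
            if pressure > self.pain_threshold:
                # Toy local 'learning': become slightly more sensitive after pain.
                self.pain_threshold = max(0.5, self.pain_threshold - 0.01)
                return {"node": self.node_id, "event": "pain", "level": round(pressure, 2)}
            return None  # nothing transmitted, so far less wiring/bandwidth is needed

    nodes = [LocalSensorNode(i) for i in range(16)]
    readings = [random.random() for _ in nodes]
    events = [n.process(p) for n, p in zip(nodes, readings)]
    print([e for e in events if e is not None])  # only the 'painful' nodes report
    ```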
    Combining the system with a type of self-healing ion gel material means that the robots, when damaged, can recover their mechanical functions without human intervention.

    The breakthrough research by the NTU scientists was published in the peer-reviewed scientific journal Nature Communications in August.
    Co-lead author of the study, Associate Professor Arindam Basu from the School of Electrical & Electronic Engineering said, “For robots to work together with humans one day, one concern is how to ensure they will interact safely with us. For that reason, scientists around the world have been finding ways to bring a sense of awareness to robots, such as being able to ‘feel’ pain, to react to it, and to withstand harsh operating conditions. However, the complexity of putting together the multitude of sensors required and the resultant fragility of such a system is a major barrier for widespread adoption.”
    Assoc Prof Basu, a neuromorphic computing expert, added, “Our work has demonstrated the feasibility of a robotic system that is capable of processing information efficiently with minimal wiring and circuits. By reducing the number of electronic components required, our system should become affordable and scalable. This will help accelerate the adoption of a new generation of robots in the marketplace.”
    Robust system enables ‘injured’ robot to self-repair
    To teach the robot how to recognise pain and learn damaging stimuli, the research team fashioned memtransistors, which are ‘brain-like’ electronic devices capable of memory and information processing, as artificial pain receptors and synapses.

    Through lab experiments, the research team demonstrated how the robot was able to learn to respond to injury in real time. They also showed that the robot continued to respond to pressure even after damage, proving the robustness of the system.
    When ‘injured’ with a cut from a sharp object, the robot quickly loses mechanical function. But the molecules in the self-healing ion gel begin to interact, causing the robot to ‘stitch’ its ‘wound’ together and to restore its function while maintaining high responsiveness.
    First author of the study, Rohit Abraham John, who is also a Research Fellow at the School of Materials Science & Engineering at NTU, said, “The self-healing properties of these novel devices help the robotic system to repeatedly stitch itself together when ‘injured’ with a cut or scratch, even at room temperature. This mimics how our biological system works, much like the way human skin heals on its own after a cut.
    “In our tests, our robot can ‘survive’ and respond to unintentional mechanical damage arising from minor injuries such as scratches and bumps, while continuing to work effectively. If such a system were used with robots in real world settings, it could contribute to savings in maintenance.”
    Associate Professor Nripan Mathews, who is co-lead author and from the School of Materials Science & Engineering at NTU, said, “Conventional robots carry out tasks in a structured programmable manner, but ours can perceive their environment, learning and adapting behaviour accordingly. Most researchers focus on making more and more sensitive sensors, but do not focus on the challenges of how they can make decisions effectively. Such research is necessary for the next generation of robots to interact effectively with humans.
    “In this work, our team has taken an approach that is off-the-beaten path, by applying new learning materials, devices and fabrication methods for robots to mimic the human neuro-biological functions. While still at a prototype stage, our findings have laid down important frameworks for the field, pointing the way forward for researchers to tackle these challenges.”
    Building on their previous body of work on neuromorphic electronics, such as using light-activated devices to recognise objects, the NTU research team is now looking to collaborate with industry partners and government research labs to enhance their system for larger-scale applications.

  • What laser color do you like?

    Researchers at the National Institute of Standards and Technology (NIST) and the University of Maryland have developed a microchip technology that can convert invisible near-infrared laser light into any one of a panoply of visible laser colors, including red, orange, yellow and green. Their work provides a new approach to generating laser light on integrated microchips.
    The technique has applications in precision timekeeping and quantum information science, which often rely on atomic or solid-state systems that must be driven with visible laser light at precisely specified wavelengths. The approach suggests that a wide range of such wavelengths can be accessed using a single, small-scale platform, instead of requiring bulky, tabletop lasers or a series of different semiconductor materials. Constructing such lasers on microchips also provides a low-cost way to integrate lasers with miniature optical circuits needed for optical clocks and quantum communication systems.
    The study, reported in the October 20 issue of Optica, contributes to NIST on a Chip, a program that miniaturizes NIST’s state-of-the-art measurement-science technology, enabling it to be distributed directly to users in industry, medicine, defense and academia.
    Atomic systems that form the heart of the most precise and accurate experimental clocks and new tools for quantum information science typically rely on high-frequency visible (optical) laser light to operate, as opposed to the much lower frequency microwaves that are used to set official time worldwide.
    Scientists are now developing atomic optical system technologies that are compact and operate at low power so that they can be used outside the laboratory. While many different elements are required to realize such a vision, one key ingredient is access to visible-light laser systems that are small, lightweight and operate at low power.
    Although researchers have made great progress in creating compact, high-performance lasers at the near-infrared wavelengths used in telecommunications, it has been challenging to achieve equivalent performance at visible wavelengths. Some scientists have made strides by employing semiconductor materials to generate compact visible-light lasers. In contrast, Xiyuan Lu, Kartik Srinivasan and their colleagues at NIST and the University of Maryland in College Park adopted a different approach, focusing on a material called silicon nitride, which has a pronounced nonlinear response to light.

    Materials such as silicon nitride have a special property: If incoming light has high enough intensity, the color of the exiting light does not necessarily match the color of the light that entered. That is because when bound electrons in a nonlinear optical material interact with high-intensity incident light, the electrons re-radiate that light at frequencies, or colors, that differ from those of the incident light.
    (This effect stands in contrast to the everyday experience of seeing light bounce off a mirror or refract through a lens. In those cases, the color of the light always remains the same.)
    Lu and his colleagues employed a process known as third-order optical parametric oscillation (OPO), in which the nonlinear material converts incident light in the near-infrared into two different frequencies. One of the frequencies is higher than that of the incident light, placing it in the visible range, and the other is lower in frequency, extending deeper into the infrared. Although researchers have employed OPO for years to create different colors of light in large, table-top optical instruments, the new NIST-led study is the first to apply this effect to produce particular visible-light wavelengths on a microchip that has the potential for mass production.
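    In a third-order (Kerr-type) OPO, two pump photons convert into one signal photon and one idler photon, so energy conservation fixes how the output colors pair up. The standard textbook relation (not a formula quoted from the paper itself) is

    $$ 2\,\omega_{\text{pump}} = \omega_{\text{signal}} + \omega_{\text{idler}} \quad\Longleftrightarrow\quad \frac{2}{\lambda_{\text{pump}}} = \frac{1}{\lambda_{\text{signal}}} + \frac{1}{\lambda_{\text{idler}}}, $$

    with the signal at a higher frequency (shorter, visible wavelength) than the pump and the idler at a lower frequency (longer, infrared wavelength).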
    To miniaturize the OPO method, the researchers directed the near-infrared laser light into a microresonator, a ring-shaped device less than a millionth of a square meter in area and fabricated on a silicon chip. The light inside this microresonator circulates some 5,000 times before it dissipates, building a high enough intensity to access the nonlinear regime where it gets converted to the two different output frequencies.
    To create a multitude of visible and infrared colors, the team fabricated dozens of microresonators, each with slightly different dimensions, on each microchip. The researchers carefully chose these dimensions so that the different microresonators would produce output light of different colors. The team showed that this strategy enabled a single near-infrared laser that varied in wavelength by a relatively small amount to generate a wide range of specific visible-light and infrared colors.
    In particular, although the input laser operates over a narrow range of near-infrared wavelengths (from 780 nanometers to 790 nm), the microchip system generated visible-light colors ranging from green to red (560 nm to 760 nm) and infrared wavelengths ranging from 800 nm to 1,200 nm.
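    As a rough consistency check on those numbers (our own back-of-the-envelope calculation, assuming only the energy-conservation relation above), one can compute the idler wavelength implied by each pump-signal pair:

    ```python
    # Back-of-the-envelope check (not code from the study): third-order OPO
    # energy conservation, 2/lambda_pump = 1/lambda_signal + 1/lambda_idler,
    # with all wavelengths in nanometres. Numbers are picked from the ranges
    # quoted in the article purely for illustration.

    def idler_wavelength(pump_nm: float, signal_nm: float) -> float:
        """Idler wavelength (nm) implied by energy conservation."""
        return 1.0 / (2.0 / pump_nm - 1.0 / signal_nm)

    for signal_nm in (560, 620, 700, 760):  # visible outputs, green to red
        idler_nm = idler_wavelength(785.0, signal_nm)
        print(f"{signal_nm} nm signal -> {idler_nm:.0f} nm idler")

    # With a ~785 nm pump, visible signals from 560-760 nm pair with infrared
    # idlers of roughly 810-1310 nm, in line with the ~800-1200 nm infrared
    # outputs reported (the exact pairing depends on which pump wavelength in
    # the 780-790 nm range drives a given microresonator).
    ```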
    “The benefit of our approach is that any one of these wavelengths can be accessed just by adjusting the dimensions of our microresonators,” said Srinivasan.
    “Though a first demonstration,” Lu said, “we are excited at the possibility of combining this nonlinear optics technique with well established near-infrared laser technology to create new types of on-chip light sources that can be used in a variety of applications.”

  • Assessing state of the art in AI for brain disease treatment

    Artificial intelligence is lauded for its ability to solve problems humans cannot, thanks to novel computing architectures that process large amounts of complex data quickly. As a result, AI methods, such as machine learning, computer vision, and neural networks, are applied to some of the most difficult problems in science and society.
    One tough problem is the diagnosis, surgical treatment, and monitoring of brain diseases. The range of AI technologies available for dealing with brain disease is growing fast, and exciting new methods are being applied to brain problems as computer scientists gain a deeper understanding of the capabilities of advanced algorithms.
    In a paper published this week in APL Bioengineering, by AIP Publishing, Italian researchers conducted a systematic literature review to understand the state of the art in the use of AI for brain disease. Their search yielded 2,696 results, and they narrowed their focus to the 154 most cited papers for closer analysis.
    Their qualitative review sheds light on the most interesting corners of AI development. For example, a generative adversarial network was used to synthetically create an aged brain in order to see how disease advances over time.
    “The use of artificial intelligence techniques is gradually bringing efficient theoretical solutions to a large number of real-world clinical problems related to the brain,” author Alice Segato said. “Especially in recent years, thanks to the accumulation of relevant data and the development of increasingly effective algorithms, it has been possible to significantly increase the understanding of complex brain mechanisms.”
    The authors’ analysis covers eight paradigms of brain care, examining AI methods used to process information about the structure and connectivity of the brain, assess surgical candidacy, identify problem areas, predict disease trajectory, and provide intraoperative assistance. Image data used to study brain disease, including 3D data such as magnetic resonance imaging, diffusion tensor imaging, positron emission tomography, and computed tomography imaging, can be analyzed using computer vision AI techniques.
    But the authors urge caution, noting the importance of “explainable algorithms” with paths to solutions that are clearly delineated, not a “black box” — the term for AI that reaches an accurate solution but relies on inner workings that are little understood or invisible.
    “If humans are to accept algorithmic prescriptions or diagnosis, they need to trust them,” Segato said. “Researchers’ efforts are leading to the creation of increasingly sophisticated and interpretable algorithms, which could favor a more intensive use of ‘intelligent’ technologies in practical clinical contexts.”

    Story Source:
    Materials provided by American Institute of Physics.

  • How planting 70 million eelgrass seeds led to an ecosystem’s rapid recovery

    In the world’s largest seagrass restoration project, scientists have observed an ecosystem from birth to full flowering.
    As part of a 20-plus-year project, researchers and volunteers spread more than 70 million eelgrass seeds over plots covering more than 200 hectares, just beyond the wide expanses of salt marsh off the southern end of Virginia’s Eastern Shore. Long-term monitoring of the restored seagrass beds reveals a remarkably hardy ecosystem that is trapping carbon and nitrogen that would otherwise contribute to global warming and pollution, the team reports October 7 in Science Advances. That success provides a glimmer of hope for the climate and for ecosystems, the researchers say.
    The project, led by the Virginia Institute of Marine Science and The Nature Conservancy, has now grown to cover 3,612 hectares — and counting — in new seagrass beds. By comparison, the largest such project in Australia aims to restore 10 hectares of seagrass.
    The results are “a game changer,” says Carlos Duarte, a marine ecologist at King Abdullah University of Science and Technology in Thuwal, Saudi Arabia, and a leader in recognizing the carbon-storing capacity of mangroves, tidal marshes and seagrasses. “It’s an exemplar of how nature-based solutions can help mitigate climate change,” he says.
    The team in Virginia started with a blank slate, says Robert Orth, a marine biologist at the Virginia Institute of Marine Science in Gloucester Point. The seagrass in these inshore lagoons had been wiped out by disease and a hurricane in the early 1930s, but the water was still clear enough to transmit the sunlight plants require.
    A researcher collects seeds from a restored seagrass meadow in a coastal Virginia bay. Credit: Jay Fleming
    Within the first 10 years of restoration, Orth and colleagues witnessed an ecosystem rebounding rapidly across almost every indicator of ecosystem health — seagrass coverage, water quality, carbon and nitrogen storage, and invertebrate and fish biomass (SN: 2/16/17).
    For instance, the team monitored how much carbon and nitrogen the meadows were capturing from the environment and storing in the sediment as seagrass coverage expanded. It found that meadows in place for nine or more years stored, on average, 1.3 times more carbon and 2.2 times more nitrogen than younger plots, suggesting that storage capacity increases as meadows mature. Within 20 years, the restored plots were accumulating carbon and nitrogen at rates similar to what natural, undisturbed seagrass beds in the same location would have stored. The restored seagrass beds are now sequestering on average about 3,000 metric tons of carbon per year and more than 600 metric tons of nitrogen, the researchers report.
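    Put on a per-area basis (a rough calculation of ours, which assumes those annual totals apply across the full 3,612 restored hectares), that works out to a little under one metric ton of carbon per hectare per year:

    ```python
    # Rough per-hectare rates implied by the figures above (our own arithmetic;
    # assumes the annual totals apply across the full restored area).
    area_ha = 3612            # restored seagrass area, hectares
    carbon_t_per_yr = 3000    # metric tons of carbon sequestered per year
    nitrogen_t_per_yr = 600   # metric tons of nitrogen per year

    print(f"~{carbon_t_per_yr / area_ha:.2f} t C per hectare per year")    # ~0.83
    print(f"~{nitrogen_t_per_yr / area_ha:.2f} t N per hectare per year")  # ~0.17
    ```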

    Seagrasses can take a hit. When a sudden marine heat wave killed off a portion of the seagrass, it took just three years for the meadow to fully recover its plant density. “It surprised us how resilient these seagrass meadows were,” says Karen McGlathery, a coastal ecologist at the University of Virginia in Charlottesville.
    She believes the team’s work is more than just a great case study in restoration. It “offers a blueprint for restoring and maintaining healthy seagrass ecosystems” that others can adapt elsewhere in the world, she says.
    Reestablished eelgrass beds off Virginia not only store carbon efficiently, they also support rich biodiversity, such as the seahorse seen here. Credit: VIMS
    Seagrasses are among the world’s most valuable and most threatened ecosystems, and are important globally as reservoirs of what’s known as blue carbon, the carbon stored in ocean and coastal ecosystems. Seagrasses store more carbon, for far longer, than any other land or ocean habitat, preventing it from escaping to the atmosphere as heat-trapping carbon dioxide. These underwater prairies also support near-shore and offshore fisheries, and protect coastlines as well as other marine habitats. Despite their importance, seagrasses have declined globally by some 30 percent since 1879, according to an Aug. 14 study in Frontiers in Marine Science.
    “The study helps fill some large gaps in our understanding of how blue carbon can contribute to climate restoration,” says McGlathery. “It’s the first to put a number on how much carbon restored meadows take out of the atmosphere and store,” for decades and potentially for centuries.
    The restoration is far from finished. But already, it may point the way for struggling ecosystems such as Florida’s Biscayne Bay, once rich in seagrass but now suffering from water quality degradation and widespread fish kills.  Once the water is cleaned up, says Orth, “our work suggests that seagrasses can recover rapidly” (SN: 3/5/18).
    McGlathery also believes the scale of the team’s success should be uplifting for coastal communities. “In my first years here, there was no seagrass and there hadn’t been for decades. Today, as far as I can swim, I see lush meadows, rays, the occasional seahorse. It’s beautiful.”

  • Temperature evolution of impurities in a quantum gas

    A new Monash-led theoretical study advances our understanding of the role of temperature in the quantum impurity problem.
    Quantum impurity theory studies the behaviour of deliberately introduced atoms (i.e., ‘impurities’) that behave as particularly ‘clean’ quasiparticles within a background atomic gas, allowing a controllable ‘perfect test bed’ for studying quantum correlations.
    The study extends quantum impurity theory, which is of significant interest to the quantum-matter research community, into a new dimension — the thermal effect.
    “We have discovered a general relationship between two distinct experimental protocols, namely ejection and injection radio-frequency spectroscopy, where prior to our work no such relationship was known,” explains lead author Dr Weizhe Liu (Monash University School of Physics and Astronomy).
    QUANTUM IMPURITY THEORY
    Quantum impurity theory studies the effects of introducing atoms of one element (i.e., ‘impurities’) into an ultracold atomic gas of another element.

    For example, a small number of potassium atoms can be introduced into a ‘background’ quantum gas of lithium atoms.
    The introduced impurities (in this case, the potassium atoms) behave as a particularly ‘clean’ quasiparticle within the atomic gas.
    Interactions between the introduced impurity atoms and the background atomic gas can be ‘tuned’ via an external magnetic field, allowing investigation of quantum correlations.
    In recent years there has been an explosion of studies on the subject of quantum impurities immersed in different background mediums, thanks to their controllable realization in ultracold atomic gases.
    MODELLING ‘PUSH’ AND ‘PULL’ WITH RADIO-FREQUENCY PULSES
    “Our study is based on radio-frequency spectroscopy, modelling two different scenarios: ejection and injection,” says Dr Weizhe Liu, a Research Fellow with FLEET working in the group of A/Prof Meera Parish and Dr Jesper Levinsen.

    The team modelled the effect of radio-frequency pulses that would force impurity atoms from one spin state into another, unoccupied spin state.
    Under the ‘ejection’ scenario, radio-frequency pulses act on impurities in a spin state that strongly interact with the background medium, ‘pushing’ those impurities into a non-interacting spin state.
    The inverse ‘injection’ scenario ‘pulls’ impurities from a non-interacting state into an interacting state.
    These two spectroscopies are commonly used separately, to study distinctive aspects of the quantum impurity problem.
    The new Monash study, however, shows that the ejection and injection protocols in fact probe the same information.
    “We found that the two scenarios — ejection and injection — are related to each other by an exponential function of the free-energy difference between the interacting and noninteracting impurity states,” says Dr Liu.
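    Written out explicitly, a relation of the kind described above takes a Boltzmann-like form (a schematic rendering based on the description in this article, not the paper's exact expression or sign convention):

    $$ I_{\text{ej}}(\omega) \;=\; e^{\,(\Delta F - \hbar\omega)/k_{B}T}\; I_{\text{inj}}(-\omega), $$

    where $I_{\text{ej}}$ and $I_{\text{inj}}$ are the ejection and injection spectra, $\Delta F$ is the free-energy difference between the interacting and non-interacting impurity states, and $T$ is the temperature.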

  • Bringing a power tool from math into quantum computing

    The Fourier transform is an important mathematical tool that decomposes a function or dataset into its constituent frequencies, much like one could decompose a musical chord into a combination of its notes. It is used across all fields of engineering in some form or another and, accordingly, algorithms to compute it efficiently have been developed — that is, at least for conventional computers. But what about quantum computers?
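    As a purely classical illustration of that chord analogy (a minimal NumPy sketch of ours, not the quantum circuit discussed below), the discrete Fourier transform of a sampled waveform recovers the note frequencies it contains:

    ```python
    # Classical illustration of the chord analogy (NumPy's FFT, not the
    # quantum circuit discussed in the article).
    import numpy as np

    fs = 8000                                    # sample rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)              # one second of samples
    # A "chord": C4, E4 and G4 sine tones summed together.
    chord = sum(np.sin(2 * np.pi * f * t) for f in (261.63, 329.63, 392.00))

    spectrum = np.abs(np.fft.rfft(chord))
    freqs = np.fft.rfftfreq(len(chord), d=1.0 / fs)

    # The three strongest peaks sit (to the nearest 1 Hz bin) at the note frequencies.
    top3 = np.sort(freqs[np.argsort(spectrum)[-3:]])
    print(top3)                                  # -> approximately [262., 330., 392.]
    ```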
    Though quantum computing remains an enormous technical and intellectual challenge, it has the potential to speed up many programs and algorithms immensely provided that appropriate quantum circuits are designed. In particular, the Fourier transform already has a quantum version called the quantum Fourier transform (QFT), but its applicability is quite limited because its results cannot be used in subsequent quantum arithmetic operations.
    To address this issue, in a recent study published in Quantum Information Processing, scientists from Tokyo University of Science developed a new quantum circuit that executes the “quantum fast Fourier transform (QFFT)” and fully benefits from the peculiarities of the quantum world. The idea for the study came to Mr. Ryo Asaka, a first-year Master’s student and one of the scientists on the study, when he first learned about the QFT and its limitations. He thought it would be useful to create a better alternative based on a variant of the standard Fourier transform called the “fast Fourier transform (FFT),” an indispensable algorithm in conventional computing that greatly speeds things up if the input data meets some basic conditions.
    To design the quantum circuit for the QFFT, the scientists had to first devise quantum arithmetic circuits to perform the basic operations of the FFT, such as addition, subtraction, and digit shifting. A notable advantage of their algorithm is that no “garbage bits” are generated; the calculation process does not waste any qubits, the basic unit of quantum information. Considering that increasing the number of qubits of quantum computers has been an uphill battle over the last few years, the fact that this novel quantum circuit for the QFFT can use qubits efficiently is very promising.
    Another merit of their quantum circuit over the traditional QFT is that their implementation exploits a unique property of the quantum world to greatly increase computational speed. Associate Professor Kazumitsu Sakai, who led the study, explains: “In quantum computing, we can process a large amount of information at the same time by taking advantage of a phenomenon known as ‘superposition of states.’ This allows us to convert a lot of data, such as multiple images and sounds, into the frequency domain in one go.” Processing speed is regularly cited as the main advantage of quantum computing, and this novel QFFT circuit represents a step in the right direction.
    Moreover, the QFFT circuit is much more versatile than the QFT, as Assistant Professor Ryoko Yahagi, who also participated in the study, remarks: “One of the main advantages of the QFFT is that it is applicable to any problem that can be solved by the conventional FFT, such as the filtering of digital images in the medical field or analyzing sounds for engineering applications.” With quantum computers (hopefully) right around the corner, the outcomes of this study will make it easier to adopt quantum algorithms to solve the many engineering problems that rely on the FFT.

    Story Source:
    Materials provided by Tokyo University of Science.

  • Scientists voice concerns, call for transparency and reproducibility in AI research

    International scientists are challenging their colleagues to make Artificial Intelligence (AI) research more transparent and reproducible to accelerate the impact of their findings for cancer patients.
    In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard School of Public Health, Massachusetts Institute of Technology, and others, challenge scientific journals to hold computational researchers to higher standards of transparency, and call for their colleagues to share their code, models and computational environments in publications.
    “Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from,” says Dr. Benjamin Haibe-Kains, Senior Scientist at Princess Margaret Cancer Centre and first author of the article. “But in computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress.”
    The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed an AI system could outperform human radiologists in both robustness and speed for breast cancer screening. The study made waves in the scientific community and created a buzz with the public, with headlines appearing in BBC News, CBC and CNBC.
    A closer examination raised some concerns: the study lacked a sufficient description of the methods used, including its code and models. This lack of transparency prevented researchers from learning exactly how the model works and how they could apply it at their own institutions.
    “On paper and in theory, the McKinney et al. study is beautiful,” says Dr. Haibe-Kains, “But if we can’t learn from it then it has little to no scientific value.”
    According to Dr. Haibe-Kains, who is jointly appointed as Associate Professor in Medical Biophysics at the University of Toronto and affiliate at the Vector Institute for Artificial Intelligence, this is just one example of a problematic pattern in computational research.
    “Researchers are more incentivized to publish their findings rather than spend time and resources ensuring their study can be replicated,” explains Dr. Haibe-Kains. “Journals are vulnerable to the ‘hype’ of AI and may lower the standards for accepting papers that don’t include all the materials required to make the study reproducible — often in contradiction to their own guidelines.”
    This can actually slow down the translation of AI models into clinical settings. Researchers are not able to learn how the model works and replicate it in a thoughtful way. In some cases, it could lead to unwarranted clinical trials, because a model that works on one group of patients or in one institution, may not be appropriate for another.
    In the article, titled “Transparency and reproducibility in artificial intelligence,” the authors offer numerous frameworks and platforms that allow safe and effective sharing, and call on researchers to uphold the three pillars of open science that make AI research more transparent and reproducible: sharing data, sharing computer code and sharing predictive models.
    “We have high hopes for the utility of AI for our cancer patients,” says Dr. Haibe-Kains. “Sharing and building upon our discoveries — that’s real scientific impact.”

    Story Source:
    Materials provided by University Health Network.

  • The first room-temperature superconductor has finally been found

    It’s here: Scientists have reported the discovery of the first room-temperature superconductor, after more than a century of waiting.
    The discovery evokes daydreams of futuristic technologies that could reshape electronics and transportation. Superconductors transmit electricity without resistance, allowing current to flow without any energy loss. But all superconductors previously discovered must be cooled, many of them to very low temperatures, making them impractical for most uses.
    Now, scientists have found the first superconductor that operates at room temperature — at least given a fairly chilly room. The material is superconducting below temperatures of about 15° Celsius (59° Fahrenheit), physicist Ranga Dias of the University of Rochester in New York and colleagues report October 14 in Nature.
    The team’s results “are nothing short of beautiful,” says materials chemist Russell Hemley of the University of Illinois at Chicago, who was not involved with the research.
    However, the new material’s superconducting superpowers appear only at extremely high pressures, limiting its practical usefulness.

    Dias and colleagues formed the superconductor by squeezing carbon, hydrogen and sulfur between the tips of two diamonds and hitting the material with laser light to induce chemical reactions. At a pressure about 2.6 million times that of Earth’s atmosphere, and temperatures below about 15° C, the electrical resistance vanished.
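    For scale, that pressure works out to roughly 260 gigapascals (a back-of-the-envelope unit conversion of ours, using 1 atm ≈ 101.3 kPa; the paper itself may quote a slightly different figure):

    $$ 2.6\times10^{6}\ \text{atm} \times 1.013\times10^{5}\ \text{Pa/atm} \;\approx\; 2.6\times10^{11}\ \text{Pa} \;\approx\; 260\ \text{GPa}. $$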
    That alone wasn’t enough to convince Dias. “I didn’t believe it the first time,” he says. So the team studied additional samples of the material and investigated its magnetic properties.
    Superconductors and magnetic fields are known to clash — strong magnetic fields inhibit superconductivity. Sure enough, when the material was placed in a magnetic field, lower temperatures were needed to make it superconducting. The team also applied an oscillating magnetic field to the material and showed that, when the material became a superconductor, it expelled that magnetic field from its interior, a hallmark of superconductivity known as the Meissner effect.
    The scientists were not able to determine the exact composition of the material or how its atoms are arranged, making it difficult to explain how it can be superconducting at such relatively high temperatures. Future work will focus on describing the material more completely, Dias says.
    When superconductivity was discovered in 1911, it was found only at temperatures close to absolute zero (−273.15° C). But since then, researchers have steadily uncovered materials that superconduct at higher temperatures. In recent years, scientists have accelerated that progress by focusing on hydrogen-rich materials at high pressure.
    In 2015, physicist Mikhail Eremets of the Max Planck Institute for Chemistry in Mainz, Germany, and colleagues squeezed hydrogen and sulfur to create a superconductor at temperatures up to −70° C (SN: 12/15/15). A few years later, two groups, one led by Eremets and another involving Hemley and physicist Maddury Somayazulu, studied a high-pressure compound of lanthanum and hydrogen. The two teams found evidence of superconductivity at even higher temperatures of −23° C and −13° C, respectively, and in some samples possibly as high as 7° C (SN: 9/10/18).
    The discovery of a room-temperature superconductor isn’t a surprise. “We’ve been obviously heading toward this,” says theoretical chemist Eva Zurek of the University at Buffalo in New York, who was not involved with the research. But breaking the symbolic room-temperature barrier is “a really big deal.”
    If a room-temperature superconductor could be used at atmospheric pressure, it could save vast amounts of energy lost to resistance in the electrical grid. And it could improve current technologies, from MRI machines to quantum computers to magnetically levitated trains. Dias envisions that humanity could become a “superconducting society.”
    But so far scientists have created only tiny specks of the material at high pressure, so practical applications are still a long way off.
    Still, “the temperature is not a limit anymore,” says Somayazulu, of Argonne National Laboratory in Lemont, Ill., who was not involved with the new research. Instead, physicists now have a new aim: to create a room-temperature superconductor that works without putting on the squeeze, Somayazulu says. “That’s the next big step we have to do.”
