More stories

  • Quantum marbles in a bowl of light

    Which factors determine how fast a quantum computer can perform its calculations? Physicists at the University of Bonn and the Technion — Israel Institute of Technology have devised an elegant experiment to answer this question. The results of the study are published in the journal Science Advances.
    Quantum computers are highly sophisticated machines that rely on the principles of quantum mechanics to process information. This should enable them to handle certain problems in the future that are completely unsolvable for conventional computers. But even for quantum computers, fundamental limits apply to the amount of data they can process in a given time.
    Quantum gates require a minimum time
    The information stored in conventional computers can be thought of as a long sequence of zeros and ones, the bits. In quantum mechanics it is different: The information is stored in quantum bits (qubits), which resemble a wave rather than a series of discrete values. Physicists also speak of wave functions when they want to precisely represent the information contained in qubits.
    In a traditional computer, information is linked together by so-called gates. Combining several gates allows elementary calculations, such as the addition of two bits. Information is processed in a very similar way in quantum computers, where quantum gates change the wave function according to certain rules.
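    To make the gate picture concrete, here is a toy numpy sketch (illustrative only, not tied to the experiment described here): a single-qubit gate is a unitary matrix that transforms the qubit's state vector, i.e., its wave function.

    ```python
    # A quantum gate as a unitary matrix acting on a qubit's state vector.
    import numpy as np

    ket0 = np.array([1.0, 0.0])                    # qubit prepared in |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)

    psi = H @ ket0                                 # gate transforms the wave function
    print(psi)                # [0.70710678 0.70710678]
    print(np.abs(psi) ** 2)   # measurement probabilities: [0.5 0.5]
    ```
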
    Quantum gates resemble their traditional relatives in another respect: “Even in the quantum world, gates do not work infinitely fast,” explains Dr. Andrea Alberti of the Institute of Applied Physics at the University of Bonn. “They require a minimum amount of time to transform the wave function and the information this contains.”
    More than 70 years ago, Soviet physicists Leonid Mandelstam and Igor Tamm theoretically deduced this minimum time for transforming the wave function. Physicists at the University of Bonn and the Technion have now investigated this Mandelstam-Tamm limit for the first time with an experiment on a complex quantum system. To do this, they used cesium atoms that moved in a highly controlled manner. “In the experiment, we let individual atoms roll down like marbles in a light bowl and observe their motion,” explains Alberti, who led the experimental study.
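
    A useful reference point: the Mandelstam-Tamm bound ties this minimum time to the energy uncertainty of the state. In compact form (a standard statement of the bound, not a formula quoted from the study):

    ```latex
    % Mandelstam-Tamm quantum speed limit: the minimum time \tau_\perp for a
    % state to evolve into an orthogonal (fully distinguishable) state is set
    % by its energy uncertainty \Delta E.
    \tau_{\perp} \geq \frac{\pi \hbar}{2\,\Delta E}
    ```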

  • Machine learning models quantum devices

    Technologies that take advantage of novel quantum mechanical behaviors are likely to become commonplace in the near future. These may include devices that use quantum information as input and output data, which require careful verification due to inherent uncertainties. Verification is more challenging when the device is time dependent, that is, when its output depends on past inputs. For the first time, researchers have used machine learning to dramatically improve the efficiency of verification for time-dependent quantum devices by incorporating a certain memory effect present in these systems.
    Quantum computers make headlines in the scientific press, but these machines are considered by most experts to still be in their infancy. A quantum internet, however, may be a little closer to the present. This would offer significant security advantages over our current internet, amongst other things. But even this will rely on technologies that have yet to see the light of day outside the lab. While many fundamentals of the devices that can create our quantum internet may have been worked out, there are many engineering challenges in order to realize these as products. But much research is underway to create tools for the design of quantum devices.
    Postdoctoral researcher Quoc Hoan Tran and Associate Professor Kohei Nakajima from the Graduate School of Information Science and Technology at the University of Tokyo have pioneered just such a tool, which they think could make verifying the behavior of quantum devices more efficient and precise than it is at present. Their contribution is an algorithm that can reconstruct the workings of a time-dependent quantum device by simply learning the relationship between the quantum inputs and outputs. This approach is commonplace when exploring a classical physical system, but quantum information is generally tricky to store, which usually makes such an approach impossible.
    “The technique to describe a quantum system based on its inputs and outputs is called quantum process tomography,” said Tran. “However, many researchers now report that their quantum systems exhibit some kind of memory effect where present states are affected by previous ones. This means that a simple inspection of input and output states cannot describe the time-dependent nature of the system. You could model the system repeatedly after every change in time, but this would be extremely computationally inefficient. Our aim was to embrace this memory effect and use it to our advantage rather than use brute force to overcome it.”
    Tran and Nakajima turned to machine learning and a technique called quantum reservoir computing to build their novel algorithm. This learns patterns of inputs and outputs that change over time in a quantum system and effectively guesses how these patterns will change, even in situations the algorithm has not yet witnessed. As it does not need to know the inner workings of a quantum system as a more empirical method might, but only the inputs and outputs, the team’s algorithm can be simpler and produce results faster as well.
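    To make this concrete, here is a minimal classical echo-state (reservoir) sketch: it learns a time-dependent input-output map whose outputs depend on past inputs, with the reservoir's fading memory playing the role of the memory effect described above. This is an illustrative classical analogue, not the authors' quantum reservoir algorithm; all parameters are assumptions.

    ```python
    # Classical reservoir computing sketch: learn a map with memory from
    # input/output examples alone (illustrative analogue, not the paper's method).
    import numpy as np

    rng = np.random.default_rng(0)

    def device(u):
        # Toy "device" whose output depends on past inputs (a memory effect).
        y = np.zeros_like(u)
        for t in range(2, len(u)):
            y[t] = 0.6 * u[t] + 0.3 * u[t - 1] - 0.2 * u[t - 2] ** 2
        return y

    N = 200                                   # reservoir size (assumed)
    W_in = rng.uniform(-0.5, 0.5, N)          # input weights
    W = rng.normal(0.0, 1.0, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

    def run_reservoir(u):
        # Drive the reservoir with the input sequence; collect its states.
        x, states = np.zeros(N), []
        for ut in u:
            x = np.tanh(W @ x + W_in * ut)    # fading-memory update
            states.append(x.copy())
        return np.array(states)

    u_train = rng.uniform(-1, 1, 2000)
    W_out = np.linalg.lstsq(run_reservoir(u_train), device(u_train), rcond=None)[0]

    u_test = rng.uniform(-1, 1, 500)
    y_pred = run_reservoir(u_test) @ W_out    # generalizes to unseen inputs
    print("test RMSE:", np.sqrt(np.mean((y_pred - device(u_test)) ** 2)))
    ```
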
    “At present, our algorithm can emulate a certain kind of quantum system, but hypothetical devices may vary widely in their processing ability and have different memory effects. So the next stage of research will be to broaden the capabilities of our algorithms, essentially making something more general purpose and thus more useful,” said Tran. “I am excited by what quantum machine learning methods could do, by the hypothetical devices they might lead to.”
    This work is supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Nos. JPMXS0118067394 and JPMXS0120319794.
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • How electric vehicles offered hope as climate challenges grew

    This was another year of bleak climate news. Record heat waves baked the Pacific Northwest. Wildfires raged in California, Oregon, Washington and neighboring states. Tropical cyclones rapidly intensified in the Pacific Ocean. And devastating flash floods inundated Western Europe and China. Human-caused climate change is sending the world hurtling down a road to more extreme weather events, and we’re running out of time to pump the brakes, the Intergovernmental Panel on Climate Change warned in August (SN: 9/11/21, p. 8).

    The world needs to dramatically reduce its greenhouse gas emissions, and fast, if there’s any hope of preventing worse and more frequent extreme weather events. That means shifting to renewable sources of energy — and, importantly, decarbonizing transportation, a sector that is now responsible for about a quarter of the world’s carbon dioxide emissions.

    But the path to that cleaner future is daunting, clogged with political and societal roadblocks, as well as scientific obstacles. Perhaps that’s one reason why the electric vehicle — already on the road, already navigating many of these roadblocks — swerved so dramatically into the climate solutions spotlight in 2021.

    Just a few years ago, many automakers thought electric vehicles, or EVs, might be a passing fad, says Gil Tal, director of the Plug-in Hybrid & Electric Vehicle Research Center at the University of California, Davis. “It’s now clear to everyone that [EVs are] here to stay.”

    Globally, EV sales surged in the first half of 2021, increasing by 160 percent compared with the previous year. Even in 2020 — when most car sales were down due to the COVID-19 pandemic — EV sales were up 46 percent relative to 2019. Meanwhile, automakers from General Motors to Volkswagen to Nissan have outlined plans to launch new EV models over the next decade: GM pledged to go all-electric by 2035, Honda by 2040. Ford introduced electric versions of its iconic Mustang and F-150 pickup truck.

    Consumer demand for EVs isn’t actually driving the surge in sales, Tal says. The real engine is a change in supply due to government policies pushing automakers to boost their EV production. The European Union’s toughened CO2 emissions laws for the auto industry went into effect in 2021, and automakers have already bumped up new EV production in the region. China mandated in 2020 that EVs make up 40 percent of new car sales by 2030. Costa Rica has set official phase-out targets for internal combustion engines.

    In the United States, where transportation has officially supplanted power generation as the top greenhouse gas–emitting sector, President Joe Biden’s administration set a goal this year of having 50 percent of new U.S. vehicle sales be electric — both plug-in hybrid and all-electric — by 2030. That’s a steep rise over EVs’ roughly 2.5 percent share of new cars sold in the United States today. In September, California announced that by 2035 all new cars and passenger trucks sold in the state must be zero-emission.

    There are concrete signs that automakers are truly committing to EVs. In September, Ford announced plans to build two new complexes in Tennessee and Kentucky to produce electric trucks and batteries. Climate change–related energy crises, such as the February failure of Texas’ power system, may also boost interest in EVs, Ford CEO Jim Farley said September 28 on the podcast Columbia Energy Exchange.

    “We’re seeing more extreme weather events with global warming, and so people are looking at these vehicles not just for propulsion but for … other benefits,” Farley said. “One of the most popular features of the F-150 Lightning is the fact that you can power your house for three days” with the truck’s battery.

    More to navigate

    Although the EV market is growing fast, it’s still not fast enough to meet the Paris Agreement goals, the International Energy Agency reported this year. For the world to reach net-zero emissions by 2050 — when carbon emissions added to the atmosphere are balanced by carbon removal — EVs would need to climb from the current 5 percent of global car sales to 60 percent by 2030, the agency found.

    As for the United States, even if the Biden administration’s plan for EVs comes to fruition, the country’s transportation sector will still fall short of its emissions targets, researchers reported in 2020 in Nature Climate Change. To hit those targets, electric cars would need to make up 90 percent of new U.S. car sales by 2050 — or people would need to drive a lot less.

    And to truly supplant fossil fuel vehicles, electric options need to meet several benchmarks. Prices for new and used EVs must come down. Charging stations must be available and affordable to all, including people who don’t live in homes where they can plug in. And battery ranges must be extended. Average ranges have been improving. Just five or so years ago, cars needed a recharge after about 100 miles; today the average is about 250 miles, roughly the distance from Washington, D.C., to New York City. But limited ranges and too few charging stations remain a sticking point.

    Today’s batteries also require metals that are scarce, difficult to access or produced in mining operations rife with serious human rights issues. There, too, solutions may be on the horizon, including finding ways to recycle batteries to alleviate materials shortages (SN: 12/4/21, p. 4).

    EVs on their own are nowhere near enough to forestall the worst effects of climate change. But it won’t be possible to slow global warming without them.

    And in a year with a lot of grim climate news — both devastating extreme events and maddeningly stalled political action — EVs offered one glimmer of hope.

    “We have the technology. It’s not dependent on some technology that’s not developed yet,” Tal says. “The hope is that now we are way more willing to [transition to EVs] than at any time before.”

  • Could EKGs help doctors use AI to detect pulmonary embolisms?

    Pulmonary embolisms are dangerous, lung-clogging blood clots. In a pilot study, scientists at the Icahn School of Medicine at Mount Sinai showed for the first time that artificial intelligence (AI) algorithms can detect signs of these clots in electrocardiograms (EKGs), a finding that may one day help doctors with screening.
    The results, published in the European Heart Journal — Digital Health, suggested that new machine learning algorithms, which are designed to exploit a combination of EKG and electronic health record (EHR) data, may be more effective than currently used screening tests at determining whether moderate- to high-risk patients actually have pulmonary embolisms.
    The study was led by Sulaiman S. Somani, MD, a former medical student in the lab of Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences and a member of the Hasso Plattner Institute for Digital Health at Mount Sinai.
    Pulmonary embolisms happen when deep vein blood clots, usually formed in the legs or arms, break away and clog lung arteries. These clots can be lethal or cause long-term lung damage. Although some patients may experience shortness of breath or chest pain, these symptoms may also signal other problems that have nothing to do with blood clots, making it difficult for doctors to properly diagnose and treat cases. Moreover, current official diagnoses rely on computed tomography pulmonary angiograms (CTPAs), which are time-consuming chest scans that can only be performed at select hospitals and require patients to be exposed to potentially dangerous levels of radiation.
    To make diagnoses easier and more accessible, researchers have spent more than 20 years developing advanced computer programs, or algorithms, designed to help doctors determine whether at-risk patients are actually experiencing pulmonary embolisms. The results have been mixed. For example, algorithms that used EHRs have produced a wide range of success rates for accurately detecting clots and can be labor-intensive. Meanwhile, the more accurate ones depend heavily on data from the CTPAs.
    In this study, the researchers found that fusing algorithms that rely on EKG and EHR data may be an effective alternative, because EKGs are widely available and relatively easy to administer.
    The researchers created and tested various algorithms on data from 21,183 Mount Sinai Health System patients who showed moderate to highly suspicious signs of having pulmonary embolisms. While some algorithms were designed to use EKG data to screen for pulmonary embolisms, others were designed to use EHR data. In each case, the algorithm learned to identify a pulmonary embolism by comparing either EKG or EHR data with the corresponding results from CTPAs. Finally, a third, fusion algorithm was created by combining the best-performing EKG algorithm with the best-performing EHR one.
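    As a concrete illustration of this fusion idea, here is a minimal late-fusion sketch on synthetic data: an EKG-only model and an EHR-only model are trained separately, and a second-stage classifier combines their predicted probabilities. The model choices, feature shapes, and labels are illustrative assumptions, not the architecture used in the study.

    ```python
    # Late fusion of an EKG-based and an EHR-based classifier (illustrative).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    ekg = rng.normal(size=(n, 12))   # stand-in for EKG-derived features
    ehr = rng.normal(size=(n, 8))    # stand-in for EHR features
    logit = 0.8 * ekg[:, 0] + 0.6 * ehr[:, 0] + 0.4 * ehr[:, 1]
    y = (logit + rng.normal(size=n) > 0).astype(int)  # synthetic PE label

    Xe_tr, Xe_te, Xh_tr, Xh_te, y_tr, y_te = train_test_split(
        ekg, ehr, y, test_size=0.3, random_state=0)

    ekg_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xe_tr, y_tr)
    ehr_model = LogisticRegression(max_iter=1000).fit(Xh_tr, y_tr)

    def fuse(Xe, Xh):
        # Each parent model's predicted probability becomes a meta-feature.
        return np.column_stack([ekg_model.predict_proba(Xe)[:, 1],
                                ehr_model.predict_proba(Xh)[:, 1]])

    # In a real study the fusion stage would be fit on held-out predictions.
    fusion = LogisticRegression().fit(fuse(Xe_tr, Xh_tr), y_tr)
    auc = roc_auc_score(y_te, fusion.predict_proba(fuse(Xe_te, Xh_te))[:, 1])
    print(f"fusion AUC: {auc:.3f}")
    ```
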
    The results showed that the fusion model not only outperformed its parent algorithms but was also better at identifying specific pulmonary embolism cases than the Wells’ Criteria, the Revised Geneva Score, and three other currently used screening tests. The researchers estimated that the fusion model was anywhere from 15 to 30 percent more effective at accurately screening acute embolism cases, and the model performed best at predicting the most severe cases. Furthermore, the fusion model’s accuracy remained consistent regardless of whether race or sex was tested as a factor, suggesting it may be useful for screening a variety of patients.
    According to the authors, these results support the theory that EKG data may be effectively incorporated into new pulmonary embolism screening algorithms. They plan to further develop and test these algorithms out for potential utility in the clinic.
    This study was supported by the National Institutes of Health (TR001433).

  • A new platform for controlled design of printed electronics with 2D materials

    Scientists have shown how electricity is transported in printed 2D materials, paving the way for the design of flexible devices for healthcare and beyond.
    A study published today in Nature Electronics, led by Imperial College London and Politecnico di Torino researchers, reveals the physical mechanisms responsible for the transport of electricity in printed two-dimensional (2D) materials.
    The work identifies which properties of 2D material films need to be tweaked to make electronic devices to order, allowing rational design of a new class of high-performance printed and flexible electronics.
    Silicon chips are the components that power most of our electronics, from fitness trackers to smartphones. However, their rigid nature limits their use in flexible electronics. Made of single-atom-thick layers, 2D materials can be dispersed in solution and formulated into printable inks, producing ultra-thin films that are extremely flexible, semi-transparent, and endowed with novel electronic properties.
    This opens up the possibility of new types of devices, such as those that can be integrated into flexible and stretchable materials, like clothes, paper, or even tissue in the human body.
    Previously, researchers have built several flexible electronic devices from printed 2D material inks, but these have been one-off ‘proof-of-concept’ components, built to show how one particular property, such as high electron mobility, light detection, or charge storage, can be realised.

  • Computer simulation models potential asteroid collisions

    An asteroid impact can be enough to ruin anyone’s day, but several small factors can make the difference between an out-of-this-world story and total annihilation. In AIP Advances, by AIP Publishing, a researcher from the National Institute of Natural Hazards in China developed a computer simulation of asteroid collisions to better understand these factors.
    The computer simulation initially sought to replicate model asteroid strikes performed in a laboratory. Having verified the accuracy of the simulation, Duoxing Yang believes it could be used to predict the results of future asteroid impacts or to learn more about past impacts by studying their craters.
    “From these models, we learn generally a destructive impact process, and its crater formation,” said Yang. “And from crater morphologies, we could learn impact environment temperatures and its velocity.”
    Yang’s simulation was built using the space-time conservation element and solution element method, which was designed by NASA and is used by many universities and government agencies to model shock waves and other acoustic problems.
    The goal was to simulate a small rocky asteroid striking a larger metal asteroid at several thousand meters per second. Using his simulation, Yang was able to calculate the effects this would have on the metal asteroid, such as the size and shape of the crater.
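    For a sense of the scales involved, here is a back-of-the-envelope sketch of the impact parameters in such a scenario; the impactor size, density, and speed below are illustrative assumptions, not values from Yang's simulations.

    ```python
    # Kinetic energy and momentum of a small rocky impactor (illustrative numbers).
    import math

    radius = 5.0       # impactor radius, m (assumed)
    density = 3000.0   # rocky impactor density, kg/m^3 (typical for stony bodies)
    velocity = 5000.0  # impact speed, m/s ("several thousand meters per second")

    mass = density * (4.0 / 3.0) * math.pi * radius**3
    energy = 0.5 * mass * velocity**2   # kinetic energy delivered, J
    momentum = mass * velocity

    print(f"mass     = {mass:.3e} kg")
    print(f"energy   = {energy:.3e} J (~{energy / 4.184e12:.1f} kt TNT)")
    print(f"momentum = {momentum:.3e} kg m/s")
    ```
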
    The simulation results were compared against mock asteroid impacts created experimentally in a laboratory. The simulation held up against these experimental tests, which means the next step in the research is to use the simulation to generate more data that can’t be produced in the laboratory.
    This data is being created in preparation for NASA’s Psyche mission, which aims to be the first spacecraft to explore an asteroid made entirely of metal. Unlike more familiar rocky asteroids, which are made of roughly the same materials as the Earth’s crust, metal asteroids are made of materials found in the Earth’s inner core. NASA believes studying such an asteroid can reveal more about the conditions found in the center of our own planet.
    Yang believes computer simulation models can generalize his results to all metal asteroid impacts and, in the process, answer several existing questions about asteroid interactions.
    “What kind of geochemistry components will be generated after impacts?” said Yang. “What kinds of impacts result in good or bad consequences to local climate? Can we change the trajectory of asteroids heading to us?”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Researchers develop new measurements for designing cooler electronics

    When cell phones, electric vehicle chargers, or other electronic devices get too hot, performance degrades, and eventually overheating can cause them to shut down or fail. To prevent that from happening, researchers are working to solve the problem of dissipating the heat produced during operation. Heat generated in a device has to flow out, ideally with little hindrance, to limit the temperature rise. Often this thermal energy must cross several dissimilar materials along the way, and the interfaces between these materials can impede heat flow.
    A new study from researchers at the Georgia Institute of Technology, Notre Dame, University of California Los Angeles, University of California Irvine, Oak Ridge National Laboratory, and the Naval Research Laboratory observed interfacial phonon modes that exist only at the interface between silicon (Si) and germanium (Ge). This discovery, published in the journal Nature Communications, shows experimentally that decades-old conventional theories of interfacial heat transfer are incomplete and that including these phonon modes is warranted.
    “The discovery of interfacial phonon modes suggests that the conventional models of heat transfer at interfaces, which only use bulk phonon properties, are not accurate,” said Zhe Cheng, a Ph.D. graduate from Georgia Tech’s George W. Woodruff School of Mechanical Engineering who is now a postdoc at the University of Illinois at Urbana-Champaign (UIUC). “There is more space for research at the interfaces. Even though these modes are localized, they can contribute to thermal conductance across interfaces.”
    The discovery opens a new pathway for consideration when engineering thermal conductance at interfaces for electronics cooling and other applications where phonons are majority heat carriers at material interfaces.
    “These results will lead to great progress in real-world engineering applications for thermal management of power electronics,” said co-author Samuel Graham, a professor in the Woodruff School of Mechanical Engineering at Georgia Tech and new dean of engineering at University of Maryland. “Interfacial phonon modes should exist widely at solid interfaces. The understanding and manipulation of these interface modes will give us the opportunity to enhance thermal conductance across technologically-important interfaces, for example, GaN-SiC, GaN-diamond, β-Ga2O3-SiC, and β-Ga2O3-diamond interfaces.”
    Presence of Interfacial Phonon Modes Confirmed in Lab
    The researchers observed the interfacial phonon modes experimentally at a high-quality Si-Ge epitaxial interface by using Raman spectroscopy and high-energy-resolution electron energy-loss spectroscopy (EELS). To figure out the role of interfacial phonon modes in heat transfer at interfaces, they used a technique called time-domain thermoreflectance in labs at Georgia Tech and UIUC to determine the temperature-dependent thermal conductance across these interfaces.
    They also observed a clean additional peak in the Raman spectroscopy measurements of the sample with the Si-Ge interface, which did not appear when they measured a Si wafer and a Ge wafer with the same system. Both the observed interfacial modes and the thermal boundary conductance were fully captured by molecular dynamics (MD) simulations and were confined to the interfacial region, as predicted by theory.
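    As a sketch of how a thermal boundary conductance is extracted from a nonequilibrium simulation such as the MD runs mentioned above: impose a heat flux across the interface, measure the temperature discontinuity there, and take their ratio. The flux, positions, and temperatures below are illustrative assumptions, not data from the study.

    ```python
    # Thermal boundary conductance G = q / dT from a simulated temperature profile.
    import numpy as np

    q = 5.0e9                                # imposed heat flux, W/m^2 (assumed)
    x_si = np.array([-3.0, -2.0, -1.0])      # slab positions, nm from interface
    T_si = np.array([312.0, 310.5, 309.2])   # Si-side temperatures, K (assumed)
    x_ge = np.array([1.0, 2.0, 3.0])
    T_ge = np.array([301.8, 300.9, 300.1])   # Ge-side temperatures, K (assumed)

    # Extrapolate each side's linear profile to the interface (x = 0).
    T_si_iface = np.polyval(np.polyfit(x_si, T_si, 1), 0.0)
    T_ge_iface = np.polyval(np.polyfit(x_ge, T_ge, 1), 0.0)

    dT = T_si_iface - T_ge_iface             # temperature drop at the interface
    G = q / dT                               # boundary conductance, W/(m^2 K)
    print(f"dT = {dT:.2f} K, G = {G:.3e} W/m^2/K")
    ```
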
    “This research is the result of great team work with all the collaborators,” said Graham. “Without this team and the unique tools that were available to us, this work would not have been possible.”
    Moving forward the researchers plan to continue to pursue the measurement and prediction of interfacial modes, increase the understanding of their contribution to heat transfer, and determine ways to manipulate these phonon modes to increase thermal transport. Breakthroughs in this area could lead to better performance in semiconductors used in satellites, 5G devices, and advanced radar systems, among other devices.
    The epitaxial Si-Ge samples used in this research were grown at the U.S. Naval Research Lab. The TEM and EELS measurements were done at University of California, Irvine and Oak Ridge National Labs. The MD simulations were performed by the University of Notre Dame. The XRD study was done at UCLA.
    This work is financially supported by the U.S. Office of Naval Research under a MURI project. The EELS study at UC Irvine is supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Georgia Institute of Technology. Note: Content may be edited for style and length.

  • Study finds artificial intelligence accurately detects fractures on x-rays, alerts human readers

    Emergency room and urgent care clinics are typically busy and patients often have to wait many hours before they can be seen, evaluated and receive treatment. Waiting for x-rays to be interpreted by radiologists can contribute to this long wait time because radiologists often read x-rays for a large number of patients.
    A new study has found that artificial intelligence (AI) can help physicians in interpreting x-rays after an injury and suspected fracture.
    “Our AI algorithm can quickly and automatically detect x-rays that are positive for fractures and flag those studies in the system so that radiologists can prioritize reading x-rays with positive fractures. The system also highlights regions of interest with bounding boxes around areas where fractures are suspected. This can potentially contribute to less waiting time at the time of hospital or clinic visit before patients can get a positive diagnosis of fracture,” explained corresponding author Ali Guermazi, MD, PhD, chief of radiology at VA Boston Healthcare System and Professor of Radiology & Medicine at Boston University School of Medicine (BUSM).
    Fracture interpretation errors represent up to 24 percent of harmful diagnostic errors seen in the emergency department. Furthermore, inconsistencies in radiographic diagnosis of fractures are more common during the evening and overnight hours (5 p.m. to 3 a.m.), likely related to non-expert reading and fatigue.
    The AI algorithm (AI BoneView) was trained on a very large number of x-rays from multiple institutions to detect fractures of the limbs, pelvis, torso, lumbar spine, and rib cage. Expert human readers (musculoskeletal radiologists, subspecialized radiology doctors with focused training in reading bone x-rays) defined the gold standard in this study, against which the performance of human readers with and without AI assistance was compared.
    A variety of readers were used to simulate a real-life scenario, including radiologists, orthopedic surgeons, emergency physicians, physician assistants, rheumatologists, and family physicians, all of whom read x-rays in real clinical practice to diagnose fractures in their patients. Each reader’s diagnostic accuracy for fractures, with and without AI assistance, was compared against the gold standard. The researchers also assessed the diagnostic performance of the AI alone against the gold standard. AI assistance helped reduce missed fractures by 29 percent and increased readers’ sensitivity by 16 percent (30 percent for exams with more than one fracture), while improving specificity by 5 percent.
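    For reference, the sensitivity and specificity figures quoted above come from comparing reader calls against the gold standard; here is a minimal sketch with toy labels (not data from the study):

    ```python
    # Sensitivity and specificity of fracture calls vs. gold-standard labels.
    import numpy as np

    gold = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])    # 1 = fracture present
    calls = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])   # one reader's calls (toy)

    tp = np.sum((calls == 1) & (gold == 1))   # fractures correctly flagged
    fn = np.sum((calls == 0) & (gold == 1))   # fractures missed
    tn = np.sum((calls == 0) & (gold == 0))   # normals correctly cleared
    fp = np.sum((calls == 1) & (gold == 0))   # false alarms

    sensitivity = tp / (tp + fn)   # fraction of true fractures detected
    specificity = tn / (tn + fp)   # fraction of fracture-free exams cleared
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
    ```
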
    Guermazi believes that AI can be a powerful tool to help radiologists and other physicians improve diagnostic performance and increase efficiency, while potentially improving the patient experience at the time of a hospital or clinic visit. “Our study was focused on fracture diagnosis, but a similar concept can be applied to other diseases and disorders. Our ongoing research interest is how best to utilize AI to help human healthcare providers improve patient care, rather than having AI replace them. Our study showed one such example,” he added.
    These findings appear online in the journal Radiology.
    Funding for this study was provided by GLEAMER Inc.
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.