More stories

  • Quantum marbles in a bowl of light

    Which factors determine how fast a quantum computer can perform its calculations? Physicists at the University of Bonn and the Technion — Israel Institute of Technology have devised an elegant experiment to answer this question. The results of the study are published in the journal Science Advances.
    Quantum computers are highly sophisticated machines that rely on the principles of quantum mechanics to process information. This should enable them to handle certain problems in the future that are completely unsolvable for conventional computers. But even for quantum computers, fundamental limits apply to the amount of data they can process in a given time.
    Quantum gates require a minimum time
    The information stored in conventional computers can be thought of as a long sequence of zeros and ones, the bits. In quantum mechanics it is different: The information is stored in quantum bits (qubits), which resemble a wave rather than a series of discrete values. Physicists also speak of wave functions when they want to precisely represent the information contained in qubits.
    In a traditional computer, information is linked together by so-called gates. Combining several gates allows elementary calculations, such as the addition of two bits. Information is processed in a very similar way in quantum computers, where quantum gates change the wave function according to certain rules.
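    As a concrete, elementary illustration of a gate acting on a qubit's wave function (a sketch for readers, not taken from the study), a single-qubit state can be written as a two-component complex vector and a gate as a 2x2 unitary matrix:

    ```python
    import numpy as np

    # A qubit state |psi> = a|0> + b|1> as a complex 2-vector (here |0>).
    psi = np.array([1.0, 0.0], dtype=complex)

    # Two standard single-qubit gates as 2x2 unitary matrices.
    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)                # NOT gate: swaps |0> and |1>
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard: creates a superposition

    print(X @ psi)  # [0.+0.j 1.+0.j]      -> the qubit is now |1>
    print(H @ psi)  # [0.707.. 0.707..]    -> equal superposition of |0> and |1>
    ```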
    Quantum gates resemble their traditional relatives in another respect: “Even in the quantum world, gates do not work infinitely fast,” explains Dr. Andrea Alberti of the Institute of Applied Physics at the University of Bonn. “They require a minimum amount of time to transform the wave function and the information it contains.”
    More than 70 years ago, Soviet physicists Leonid Mandelstam and Igor Tamm theoretically deduced this minimum time for transforming the wave function. Physicists at the University of Bonn and the Technion have now investigated this Mandelstam-Tamm limit for the first time in an experiment on a complex quantum system. To do this, they used cesium atoms that moved in a highly controlled manner. “In the experiment, we let individual atoms roll down like marbles in a light bowl and observe their motion,” explains Alberti, who led the experimental study.
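    The article does not quote the bound itself, but the Mandelstam-Tamm limit it refers to is usually written in the standard textbook form below: the minimum time for a state to evolve into an orthogonal (fully distinguishable) state is set by the energy uncertainty of that state.

    ```latex
    % Mandelstam-Tamm quantum speed limit: minimum time to reach an orthogonal state
    \tau \;\ge\; \frac{\pi \hbar}{2\,\Delta E},
    \qquad
    \Delta E = \sqrt{\langle \hat{H}^2 \rangle - \langle \hat{H} \rangle^2}
    ```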

  • Machine learning models quantum devices

    Technologies that take advantage of novel quantum mechanical behaviors are likely to become commonplace in the near future. These may include devices that use quantum information as input and output data, which require careful verification due to inherent uncertainties. Verification is more challenging if the device is time dependent, that is, if its output depends on past inputs. For the first time, researchers have used machine learning to dramatically improve the efficiency of verification for time-dependent quantum devices by incorporating a memory effect present in these systems.
    Quantum computers make headlines in the scientific press, but these machines are considered by most experts to still be in their infancy. A quantum internet, however, may be a little closer at hand. Among other things, it would offer significant security advantages over our current internet. But even this will rely on technologies that have yet to see the light of day outside the lab. While many fundamentals of the devices that could create a quantum internet may have been worked out, many engineering challenges remain before they can be realized as products. Much research is therefore underway to create tools for the design of quantum devices.
    Postdoctoral researcher Quoc Hoan Tran and Associate Professor Kohei Nakajima from the Graduate School of Information Science and Technology at the University of Tokyo have pioneered just such a tool, which they think could make verifying the behavior of quantum devices a more efficient and precise undertaking than it is at present. Their contribution is an algorithm that can reconstruct the workings of a time-dependent quantum device by simply learning the relationship between the quantum inputs and outputs. This approach is commonplace when exploring a classical physical system, but quantum information is generally tricky to store, which usually makes this approach impossible.
    “The technique to describe a quantum system based on its inputs and outputs is called quantum process tomography,” said Tran. “However, many researchers now report that their quantum systems exhibit some kind of memory effect where present states are affected by previous ones. This means that a simple inspection of input and output states cannot describe the time-dependent nature of the system. You could model the system repeatedly after every change in time, but this would be extremely computationally inefficient. Our aim was to embrace this memory effect and use it to our advantage rather than use brute force to overcome it.”
    Tran and Nakajima turned to machine learning and a technique called quantum reservoir computing to build their novel algorithm. It learns patterns of inputs and outputs that change over time in a quantum system and effectively guesses how these patterns will change, even in situations the algorithm has not yet witnessed. As it needs to know only the inputs and outputs of a quantum system, not its inner workings, the team’s algorithm can be simpler and produce results faster as well.
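    The paper's quantum algorithm is not reproduced here. As a rough illustration of the reservoir-computing idea it builds on (a fixed random dynamical system whose state retains a fading memory of past inputs, plus a trained linear readout), the sketch below uses a small classical echo state network; all sizes, signals, and the memory-dependent target are made up for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Classical echo-state-network sketch: learn a time-dependent input->output map
    # with a fixed random reservoir and a trained linear readout. This is NOT the
    # authors' quantum reservoir algorithm, only an analogue of the general idea.
    n_res, n_steps = 100, 500
    W_in = rng.normal(scale=0.5, size=n_res)           # fixed random input weights
    W = rng.normal(scale=1.0, size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1: memory fades

    u = rng.uniform(-1, 1, size=n_steps)               # random input sequence
    y_target = 0.5 * np.roll(u, 2)                     # target depends on PAST inputs

    x = np.zeros(n_res)
    states = []
    for t in range(n_steps):
        x = np.tanh(W @ x + W_in * u[t])               # reservoir state carries memory
        states.append(x.copy())
    S = np.array(states)

    # Only the linear readout is trained (plain least squares here for brevity).
    W_out, *_ = np.linalg.lstsq(S[50:], y_target[50:], rcond=None)
    pred = S[50:] @ W_out
    print("readout mean squared error:", np.mean((pred - y_target[50:]) ** 2))
    ```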
    “At present, our algorithm can emulate a certain kind of quantum system, but hypothetical devices may vary widely in their processing ability and have different memory effects. So the next stage of research will be to broaden the capabilities of our algorithms, essentially making something more general purpose and thus more useful,” said Tran. “I am excited by what quantum machine learning methods could do, by the hypothetical devices they might lead to.”
    This work is supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Nos. JPMXS0118067394 and JPMXS0120319794.
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Could EKGs help doctors use AI to detect pulmonary embolisms?

    Pulmonary embolisms are dangerous, lung-clogging blood clots. In a pilot study, scientists at the Icahn School of Medicine at Mount Sinai showed for the first time that artificial intelligence (AI) algorithms can detect signs of these clots in electrocardiograms (EKGs), a finding that may one day help doctors with screening.
    The results published in the European Heart Journal — Digital Health suggested that new machine learning algorithms, which are designed to exploit a combination of EKG and electronic health record (EHR) data, may be more effective than currently used screening tests at determining whether moderate- to high-risk patients actually have pulmonary embolisms.
    The study was led by Sulaiman S. Somani, MD, a former medical student in the lab of Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences and a member of the Hasso Plattner Institute for Digital Health at Mount Sinai.
    Pulmonary embolisms happen when deep vein blood clots, usually formed in the legs or arms, break away and clog lung arteries. These clots can be lethal or cause long-term lung damage. Although some patients may experience shortness of breath or chest pain, these symptoms may also signal other problems that have nothing to do with blood clots, making it difficult for doctors to properly diagnose and treat cases. Moreover, current official diagnoses rely on computed tomography pulmonary angiograms (CTPAs), which are time-consuming chest scans that can only be performed at select hospitals and require patients to be exposed to potentially dangerous levels of radiation.
    To make diagnoses easier and more accessible, researchers have spent more than 20 years developing advanced computer programs, or algorithms, designed to help doctors determine whether at-risk patients are actually experiencing pulmonary embolisms. The results have been mixed. For example, algorithms that used EHRs have produced a wide range of success rates for accurately detecting clots and can be labor-intensive. Meanwhile, the more accurate ones depend heavily on data from the CTPAs.
    In this study the researchers found that fusing algorithms that rely on EKG and EHR data may be an effective alternative, because EKGs are widely available and relatively easy to administer.
    The researchers created and tested out various algorithms on data from 21,183 Mount Sinai Health System patients who showed moderate to highly suspicious signs of having pulmonary embolisms. While some algorithms were designed to use EKG data to screen for pulmonary embolisms, others were designed to use EHR data. In each situation, the algorithm learned to identify a pulmonary embolism case by comparing either EKG or EHR data with corresponding results from CTPAs. Finally, a third, fusion algorithm was created by combining the best-performing EKG algorithm with the best-performing EHR one.
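    The study's models and data are not published here, so the following is only a generic sketch of the kind of "late fusion" described above: two separately trained models, one on EKG-derived features and one on EHR-derived features, whose predicted probabilities feed a small second-stage classifier. The feature arrays, labels, and use of scikit-learn logistic regression are placeholders, not the study's actual architecture.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical stand-ins for the real data: EKG-derived features, EHR-derived
    # features, and CTPA-confirmed labels (1 = pulmonary embolism present).
    n = 1000
    X_ekg = rng.normal(size=(n, 20))
    X_ehr = rng.normal(size=(n, 50))
    y = rng.integers(0, 2, size=n)

    ekg_model = LogisticRegression(max_iter=1000).fit(X_ekg, y)  # best EKG-only model (placeholder)
    ehr_model = LogisticRegression(max_iter=1000).fit(X_ehr, y)  # best EHR-only model (placeholder)

    # Fuse the two parents: stack their predicted probabilities and train a small
    # second-stage classifier on top of them.
    P = np.column_stack([ekg_model.predict_proba(X_ekg)[:, 1],
                         ehr_model.predict_proba(X_ehr)[:, 1]])
    fusion_model = LogisticRegression().fit(P, y)
    print("fused PE probability for first patient:", fusion_model.predict_proba(P[:1])[0, 1])
    ```

    In practice the fusion stage would be trained on held-out predictions from the parent models rather than on their training data, to avoid overly optimistic estimates.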
    The results showed that the fusion model not only outperformed its parent algorithms but was also better at identifying specific pulmonary embolism cases than the Wells’ Criteria, the Revised Geneva Score, and three other currently used screening tests. The researchers estimated that the fusion model was anywhere from 15 to 30 percent more effective at accurately screening acute embolism cases, and the model performed best at predicting the most severe cases. Furthermore, the fusion model’s accuracy remained consistent regardless of whether race or sex was tested as a factor, suggesting it may be useful for screening a variety of patients.
    According to the authors, these results support the theory that EKG data may be effectively incorporated into new pulmonary embolism screening algorithms. They plan to further develop and test these algorithms out for potential utility in the clinic.
    This study was supported by the National Institutes of Health (TR001433).

  • A new platform for controlled design of printed electronics with 2D materials

    Scientists have shown how electricity is transported in printed 2D materials, paving the way for design of flexible devices for healthcare and beyond.
    A study, published today in Nature Electronics, led by Imperial College London and Politecnico di Torino researchers reveals the physical mechanisms responsible for the transport of electricity in printed two-dimensional (2D) materials.
    The work identifies what properties of 2D material films need to be tweaked to make electronic devices to order, allowing rational design of a new class of high-performance printed and flexible electronics.
    Silicon chips are the components that power most of our electronics, from fitness trackers to smartphones. However, their rigid nature limits their use in flexible electronics. Made of single-atom-thick layers, 2D materials can be dispersed in solution and formulated into printable inks, producing ultra-thin films that are extremely flexible, semi-transparent and with novel electronic properties.
    This opens up the possibility of new types of devices, such as those that can be integrated into flexible and stretchable materials, like clothes, paper, or even tissues in the human body.
    Previously, researchers have built several flexible electronic devices from printed 2D material inks, but these have been one-off ‘proof-of-concept’ components, built to show how one particular property, such as high electron mobility, light detection, or charge storage, can be realised.

  • Computer simulation models potential asteroid collisions

    An asteroid impact can be enough to ruin anyone’s day, but several small factors can make the difference between an out-of-this-world story and total annihilation. In AIP Advances, by AIP Publishing, a researcher from the National Institute of Natural Hazards in China developed a computer simulation of asteroid collisions to better understand these factors.
    The computer simulation initially sought to replicate model asteroid strikes performed in a laboratory. After verifying the accuracy of the simulation, Duoxing Yang believes it could be used to predict the result of future asteroid impacts or to learn more about past impacts by studying their craters.
    “From these models, we learn generally a destructive impact process, and its crater formation,” said Yang. “And from crater morphologies, we could learn impact environment temperatures and its velocity.”
    Yang’s simulation was built using the space-time conservation element and solution element method, designed by NASA and used by many universities and government agencies, to model shock waves and other acoustic problems.
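    Yang's code is not available, and the CE/SE scheme itself is fairly involved, so the sketch below deliberately swaps in a much simpler shock-capturing scheme, a first-order Lax-Friedrichs finite-volume solver for the 1D inviscid Burgers equation, purely to illustrate how a discontinuity (shock) is propagated on a computational grid. It is not the method or the physics used in the study.

    ```python
    import numpy as np

    # Simple stand-in for shock-capturing simulation: Lax-Friedrichs finite-volume
    # solver for the 1D inviscid Burgers equation u_t + (u^2/2)_x = 0.
    nx, nt = 200, 150
    dx = 1.0 / nx
    dt = 0.4 * dx                      # CFL-limited time step (|u| <= 1 here)

    x = np.linspace(0, 1, nx, endpoint=False)
    u = np.where(x < 0.5, 1.0, 0.0)    # step initial condition -> right-moving shock

    def flux(u):
        return 0.5 * u ** 2

    for _ in range(nt):
        up = np.roll(u, -1)            # u[i+1] (periodic boundaries for simplicity)
        um = np.roll(u, 1)             # u[i-1]
        # Lax-Friedrichs update: average of neighbours minus centred flux difference.
        u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))

    print("shock located near x =", x[np.argmax(np.abs(np.diff(u)))])
    ```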
    The goal was to simulate a small rocky asteroid striking a larger metal asteroid at several thousand meters per second. Using his simulation, Yang was able to calculate the effects this would have on the metal asteroid, such as the size and shape of the crater.
    The simulation results were compared against mock asteroid impacts created experimentally in a laboratory. The simulation held up against these experimental tests, which means the next step in the research is to use the simulation to generate more data that can’t be produced in the laboratory.
    This data is being created in preparation for NASA’s Psyche mission, which aims to be the first spacecraft to explore an asteroid made entirely of metal. Unlike more familiar rocky asteroids, which are made of roughly the same materials as the Earth’s crust, metal asteroids are made of materials found in the Earth’s inner core. NASA believes studying such an asteroid can reveal more about the conditions found in the center of our own planet.
    Yang believes computer simulation models can generalize his results to all metal asteroid impacts and, in the process, answer several existing questions about asteroid interactions.
    “What kind of geochemistry components will be generated after impacts?” said Yang. “What kinds of impacts result in good or bad consequences to local climate? Can we change trajectory of asteroids heading to us?”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Researchers develop new measurements for designing cooler electronics

    When cell phones, electric vehicle chargers, or other electronic devices get too hot, performance degrades, and eventually overheating can cause them to shut down or fail. To prevent that from happening, researchers are working to solve the problem of dissipating the heat produced during operation. Heat generated in the device has to flow out, ideally with little hindrance, to reduce the temperature rise. Often this thermal energy must cross several dissimilar materials along the way, and the interfaces between these materials can impede heat flow.
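    To make the role of an interface concrete, here is a textbook one-dimensional series-resistance estimate, with made-up numbers rather than values from the study: each layer contributes a thermal resistance L/k, and the interface contributes 1/G, where G is the thermal boundary conductance.

    ```python
    # One-dimensional series-resistance picture of heat leaving a device through two
    # dissimilar layers and the interface between them. Illustrative numbers only.
    k1, L1 = 150.0, 100e-9      # layer 1: thermal conductivity (W/m/K), thickness (m)
    k2, L2 = 60.0, 500e-9       # layer 2
    G_int = 200e6               # thermal boundary conductance of the interface (W/m^2/K)
    q = 1e8                     # heat flux through the stack (W/m^2)

    R1 = L1 / k1                # thermal resistance of layer 1 (m^2 K / W)
    R2 = L2 / k2
    R_int = 1.0 / G_int         # the interface adds its own resistance
    R_total = R1 + R_int + R2

    dT = q * R_total            # total temperature rise across the stack
    print(f"interface share of the temperature rise: {R_int / R_total:.1%}")
    print(f"total temperature rise: {dT:.1f} K")
    ```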
    A new study from researchers at the Georgia Institute of Technology, Notre Dame, University of California Los Angeles, University of California Irvine, Oak Ridge National Laboratory, and the Naval Research Laboratory observed interfacial phonon modes that exist only at the interface between silicon (Si) and germanium (Ge). This discovery, published in the journal Nature Communications, shows experimentally that decades-old conventional theories of interfacial heat transfer are incomplete and that the inclusion of these phonon modes is warranted.
    “The discovery of interfacial phonon modes suggests that the conventional models of heat transfer at interfaces, which only use bulk phonon properties, are not accurate,” said Zhe Cheng, a Ph.D. graduate from Georgia Tech’s George W. Woodruff School of Mechanical Engineering who is now a postdoc at the University of Illinois at Urbana-Champaign (UIUC). “There is more space for research at the interfaces. Even though these modes are localized, they can contribute to thermal conductance across interfaces.”
    The discovery opens a new pathway for consideration when engineering thermal conductance at interfaces for electronics cooling and other applications where phonons are majority heat carriers at material interfaces.
    “These results will lead to great progress in real-world engineering applications for thermal management of power electronics,” said co-author Samuel Graham, a professor in the Woodruff School of Mechanical Engineering at Georgia Tech and new dean of engineering at University of Maryland. “Interfacial phonon modes should exist widely at solid interfaces. The understanding and manipulation of these interface modes will give us the opportunity to enhance thermal conductance across technologically-important interfaces, for example, GaN-SiC, GaN-diamond, β-Ga2O3-SiC, and β-Ga2O3-diamond interfaces.”
    Presence of Interfacial Phonon Modes Confirmed in Lab
    The researchers observed the interfacial phonon modes experimentally at a high-quality Si-Ge epitaxial interface using Raman spectroscopy and high-energy-resolution electron energy-loss spectroscopy (EELS). To figure out the role of interfacial phonon modes in heat transfer at interfaces, they used a technique called time-domain thermoreflectance in labs at Georgia Tech and UIUC to determine the temperature-dependent thermal conductance across these interfaces.
    They also observed a clean additional peak in the Raman spectroscopy measurements of the sample with the Si-Ge interface, which was not observed when they measured a Si wafer and a Ge wafer with the same system. Both the observed interfacial modes and the thermal boundary conductance were fully captured by molecular dynamics (MD) simulations and were confined to the interfacial region, as predicted by theory.
    “This research is the result of great team work with all the collaborators,” said Graham. “Without this team and the unique tools that were available to us, this work would not have been possible.”
    Moving forward the researchers plan to continue to pursue the measurement and prediction of interfacial modes, increase the understanding of their contribution to heat transfer, and determine ways to manipulate these phonon modes to increase thermal transport. Breakthroughs in this area could lead to better performance in semiconductors used in satellites, 5G devices, and advanced radar systems, among other devices.
    The epitaxial Si-Ge samples used in this research were grown at the U.S. Naval Research Lab. The TEM and EELS measurements were done at University of California, Irvine and Oak Ridge National Labs. The MD simulations were performed by the University of Notre Dame. The XRD study was done at UCLA.
    This work is financially supported by the U.S. Office of Naval Research under a MURI project. The EELS study at UC Irvine is supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Georgia Institute of Technology. Note: Content may be edited for style and length.

  • Study finds artificial intelligence accurately detects fractures on x-rays, alerts human readers

    Emergency room and urgent care clinics are typically busy and patients often have to wait many hours before they can be seen, evaluated and receive treatment. Waiting for x-rays to be interpreted by radiologists can contribute to this long wait time because radiologists often read x-rays for a large number of patients.
    A new study has found that artificial intelligence (AI) can help physicians in interpreting x-rays after an injury and suspected fracture.
    “Our AI algorithm can quickly and automatically detect x-rays that are positive for fractures and flag those studies in the system so that radiologists can prioritize reading x-rays with positive fractures. The system also highlights regions of interest with bounding boxes around areas where fractures are suspected. This can potentially contribute to less waiting time at the time of hospital or clinic visit before patients can get a positive diagnosis of fracture,” explained corresponding author Ali Guermazi, MD, PhD, chief of radiology at VA Boston Healthcare System and Professor of Radiology & Medicine at Boston University School of Medicine (BUSM).
    Fracture interpretation errors represent up to 24 percent of harmful diagnostic errors seen in the emergency department. Furthermore, inconsistencies in the radiographic diagnosis of fractures are more common during the evening and overnight hours (5 p.m. to 3 a.m.), likely related to non-expert reading and fatigue.
    The AI algorithm (AI BoneView) was trained on a very large number of x-rays from multiple institutions to detect fractures of the limbs, pelvis, torso, lumbar spine, and rib cage. Expert human readers (musculoskeletal radiologists, subspecialized radiologists with focused training in reading bone x-rays) defined the gold standard in this study, against which the performance of human readers with and without AI assistance was compared.
    A variety of readers were used to simulate a real-life scenario, including radiologists, orthopedic surgeons, emergency physicians and physician assistants, rheumatologists, and family physicians, all of whom read x-rays in real clinical practice to diagnose fractures in their patients. Each reader’s diagnostic accuracy for fractures, with and without AI assistance, was compared against the gold standard. They also assessed the diagnostic performance of AI alone against the gold standard. AI assistance helped reduce missed fractures by 29% and increased readers’ sensitivity by 16%, and by 30% for exams with more than one fracture, while improving specificity by 5%.
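    For readers unfamiliar with the metrics quoted above, the sketch below shows how sensitivity and specificity are computed against a gold standard; the reads and labels are hypothetical, not data from the study.

    ```python
    import numpy as np

    # Sensitivity and specificity from a confusion matrix against a gold standard.
    gold  = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # 1 = fracture per expert consensus
    reads = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])   # one reader's calls (with or without AI)

    tp = np.sum((reads == 1) & (gold == 1))   # true positives
    fn = np.sum((reads == 0) & (gold == 1))   # missed fractures
    tn = np.sum((reads == 0) & (gold == 0))   # correctly cleared exams
    fp = np.sum((reads == 1) & (gold == 0))   # false alarms

    sensitivity = tp / (tp + fn)   # share of true fractures the reader catches
    specificity = tn / (tn + fp)   # share of fracture-free exams correctly cleared
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
    ```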
    Guermazi believes that AI can be a powerful tool to help radiologists and other physicians improve diagnostic performance and increase efficiency, while potentially improving the patient experience at the time of a hospital or clinic visit. “Our study was focused on fracture diagnosis, but a similar concept can be applied to other diseases and disorders. Our ongoing research interest is how best to utilize AI to help human healthcare providers improve patient care, rather than having AI replace human healthcare providers. Our study showed one such example,” he added.
    These findings appear online in the journal Radiology.
    Funding for this study was provided by GLEAMER Inc.
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.

  • Face detection in untrained deep neural networks?

    Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity of facial images can arise even in completely untrained deep neural networks.
    This new finding has provided revelatory insights into mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, also making a significant impact on our understanding of the origin of early brain functions before sensory experiences.
    The study published in Nature Communications on December 16 demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that they show the characteristics of those observed in biological brains.
    The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuron level. Neurons that selectively respond to faces are observed in young animals of various species, and this has raised intense debate about whether face-selective neurons can arise innately in the brain or whether they require visual experience.
    Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wirings in untrained deep neural networks. The team showed that the character of this innate face-selectivity is comparable to that observed with face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face detection tasks.
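    The study's network and stimuli are not reproduced here; the sketch below only illustrates the general procedure of probing single-unit face selectivity in an untrained network, using a small randomly initialized PyTorch CNN, random tensors standing in for face and non-face image sets, and a commonly used selectivity index that is not necessarily the exact measure used in the paper.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Probe face selectivity of single units in an UNTRAINED network by comparing
    # each unit's mean response to "face" vs. "non-face" images. Random tensors
    # stand in for real image sets; this is not the model used in the study.
    net = nn.Sequential(
        nn.Conv2d(3, 16, 7, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # one response per unit (channel)
    )
    net.eval()  # no training at any point

    faces    = torch.rand(64, 3, 64, 64)            # placeholder "face" images
    nonfaces = torch.rand(64, 3, 64, 64)            # placeholder "non-face" images

    with torch.no_grad():
        r_face = net(faces).mean(dim=0)             # mean response of each unit to faces
        r_obj  = net(nonfaces).mean(dim=0)          # mean response to non-face images

    # Face-selectivity index per unit: positive values mean stronger responses to faces.
    fsi = (r_face - r_obj) / (r_face + r_obj + 1e-8)
    print("units with FSI > 0.1:", int((fsi > 0.1).sum().item()))
    ```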
    These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.
    Professor Paik said, “Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning.”
    He continued, “Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence.”
    This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.