More stories

  • Could EKGs help doctors use AI to detect pulmonary embolisms?

    Pulmonary embolisms are dangerous, lung-clogging blood clots. In a pilot study, scientists at the Icahn School of Medicine at Mount Sinai showed for the first time that artificial intelligence (AI) algorithms can detect signs of these clots in electrocardiograms (EKGs), a finding that may one day help doctors with screening.
    The results, published in the European Heart Journal — Digital Health, suggested that new machine learning algorithms designed to exploit a combination of EKG and electronic health record (EHR) data may be more effective than currently used screening tests at determining whether moderate- to high-risk patients actually have pulmonary embolisms.
    The study was led by Sulaiman S. Somani, MD, a former medical student in the lab of Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences and a member of the Hasso Plattner Institute for Digital Health at Mount Sinai.
    Pulmonary embolisms happen when deep vein blood clots, usually formed in the legs or arms, break away and clog lung arteries. These clots can be lethal or cause long-term lung damage. Although some patients may experience shortness of breath or chest pain, these symptoms may also signal other problems that have nothing to do with blood clots, making it difficult for doctors to properly diagnose and treat cases. Moreover, current official diagnoses rely on computed tomography pulmonary angiograms (CTPAs), which are time-consuming chest scans that can only be performed at select hospitals and require patients to be exposed to potentially dangerous levels of radiation.
    To make diagnoses easier and more accessible, researchers have spent more than 20 years developing advanced computer programs, or algorithms, designed to help doctors determine whether at-risk patients are actually experiencing pulmonary embolisms. The results have been mixed. For example, algorithms that used EHRs have produced a wide range of success rates for accurately detecting clots and can be labor-intensive. Meanwhile, the more accurate ones depend heavily on data from the CTPAs.
    In this study the researchers found that fusing algorithms that rely on EKG and EHR data may be an effective alternative, because EKGs are widely available and relatively easy to administer.
    The researchers created and tested out various algorithms on data from 21,183 Mount Sinai Health System patients who showed moderate to highly suspicious signs of having pulmonary embolisms. While some algorithms were designed to use EKG data to screen for pulmonary embolisms, others were designed to use EHR data. In each situation, the algorithm learned to identify a pulmonary embolism case by comparing either EKG or EHR data with corresponding results from CTPAs. Finally, a third, fusion algorithm was created by combining the best-performing EKG algorithm with the best-performing EHR one.
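    The press release does not detail the fusion architecture, but one common way to combine two such models is late fusion: each modality-specific model outputs a probability of pulmonary embolism, and a small classifier is trained on those outputs against the CTPA-confirmed labels. The sketch below illustrates that general idea with hypothetical stand-in arrays (ekg_features, ehr_features, ctpa_labels); it is not the authors’ model.
    ```python
    # Illustrative late-fusion sketch; stand-in data, not the Mount Sinai model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    ekg_features = rng.normal(size=(n, 32))   # stand-in for learned EKG representations
    ehr_features = rng.normal(size=(n, 16))   # stand-in for EHR variables
    ctpa_labels = rng.integers(0, 2, size=n)  # stand-in for CTPA-confirmed outcomes

    train, test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

    # One model per modality, each trained against the CTPA-derived labels.
    ekg_model = LogisticRegression(max_iter=1000).fit(ekg_features[train], ctpa_labels[train])
    ehr_model = LogisticRegression(max_iter=1000).fit(ehr_features[train], ctpa_labels[train])

    def fused_inputs(idx):
        """Stack the two models' predicted probabilities for the given patients."""
        return np.column_stack([
            ekg_model.predict_proba(ekg_features[idx])[:, 1],
            ehr_model.predict_proba(ehr_features[idx])[:, 1],
        ])

    # Fusion model: a small classifier on top of the two modality-specific outputs.
    fusion_model = LogisticRegression().fit(fused_inputs(train), ctpa_labels[train])
    auc = roc_auc_score(ctpa_labels[test], fusion_model.predict_proba(fused_inputs(test))[:, 1])
    print(f"fusion model AUROC on held-out stand-in data: {auc:.2f}")
    ```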
    The results showed that the fusion model not only outperformed its parent algorithms but was also better at identifying specific pulmonary embolism cases than the Wells’ Criteria, the Revised Geneva Score, and three other currently used screening tests. The researchers estimated that the fusion model was anywhere from 15 to 30 percent more effective at accurately screening acute embolism cases, and the model performed best at predicting the most severe cases. Furthermore, the fusion model’s accuracy remained consistent regardless of whether race or sex was tested as a factor, suggesting it may be useful for screening a variety of patients.
    According to the authors, these results support the theory that EKG data may be effectively incorporated into new pulmonary embolism screening algorithms. They plan to further develop and test these algorithms out for potential utility in the clinic.
    This study was supported by the National Institutes of Health (TR001433).

  • A new platform for controlled design of printed electronics with 2D materials

    Scientists have shown how electricity is transported in printed 2D materials, paving the way for design of flexible devices for healthcare and beyond.
    A study, published today in Nature Electronics, led by Imperial College London and Politecnico di Torino researchers reveals the physical mechanisms responsible for the transport of electricity in printed two-dimensional (2D) materials.
    The work identifies what properties of 2D material films need to be tweaked to make electronic devices to order, allowing rational design of a new class of high-performance printed and flexible electronics.
    Silicon chips are the components that power most of our electronics, from fitness trackers to smartphones. However, their rigid nature limits their use in flexible electronics. 2D materials, made of single-atom-thick layers, can be dispersed in solution and formulated into printable inks, producing ultra-thin films that are extremely flexible, semi-transparent, and have novel electronic properties.
    This opens up the possibility of new types of devices, such as those that can be integrated into flexible and stretchable materials, like clothes or paper, or even into tissues in the human body.
    Previously, researchers have built several flexible electronic devices from printed 2D material inks, but these have been one-off ‘proof-of-concept’ components, built to show how one particular property, such as high electron mobility, light detection, or charge storage, can be realised.

  • Computer simulation models potential asteroid collisions

    An asteroid impact can be enough to ruin anyone’s day, but several small factors can make the difference between an out-of-this-world story and total annihilation. In AIP Advances, published by AIP Publishing, a researcher from the National Institute of Natural Hazards in China developed a computer simulation of asteroid collisions to better understand these factors.
    The computer simulation initially sought to replicate model asteroid strikes performed in a laboratory. After verifying the accuracy of the simulation, Duoxing Yang believes it could be used to predict the result of future asteroid impacts or to learn more about past impacts by studying their craters.
    “From these models, we learn generally a destructive impact process, and its crater formation,” said Yang. “And from crater morphologies, we could learn impact environment temperatures and its velocity.”
    Yang’s simulation was built using the space-time conservation element and solution element method, a numerical scheme developed at NASA and used by many universities and government agencies to model shock waves and other acoustic problems.
    The goal was to simulate a small rocky asteroid striking a larger metal asteroid at several thousand meters per second. Using his simulation, Yang was able to calculate the effects this would have on the metal asteroid, such as the size and shape of the crater.
    The simulation results were compared against mock asteroid impacts created experimentally in a laboratory. The simulation held up against these experimental tests, which means the next step in the research is to use the simulation to generate more data that can’t be produced in the laboratory.
    This data is being created in preparation for NASA’s Psyche mission, which aims to be the first spacecraft to explore an asteroid made entirely of metal. Unlike more familiar rocky asteroids, which are made of roughly the same materials as the Earth’s crust, metal asteroids are made of materials found in the Earth’s inner core. NASA believes studying such an asteroid can reveal more about the conditions found in the center of our own planet.
    Yang believes computer simulation models can generalize his results to all metal asteroid impacts and, in the process, answer several existing questions about asteroid interactions.
    “What kind of geochemistry components will be generated after impacts?” said Yang. “What kinds of impacts result in good or bad consequences to local climate? Can we change trajectory of asteroids heading to us?”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Researchers develop new measurements for designing cooler electronics

    When cell phones, electric vehicle chargers, or other electronic devices get too hot, performance degrades, and eventually overheating can cause them to shut down or fail. To prevent that from happening, researchers are working to solve the problem of dissipating the heat produced during operation. Heat generated in a device has to flow out, ideally with little hindrance, to limit the temperature rise. Often this thermal energy must cross several dissimilar materials on its way out, and the interfaces between those materials can impede heat flow.
    A new study from researchers at the Georgia Institute of Technology, Notre Dame, the University of California Los Angeles, the University of California Irvine, Oak Ridge National Laboratory, and the Naval Research Laboratory observed interfacial phonon modes that exist only at the interface between silicon (Si) and germanium (Ge). This discovery, published in the journal Nature Communications, shows experimentally that decades-old conventional theories of interfacial heat transfer are incomplete and that the inclusion of these phonon modes is warranted.
    “The discovery of interfacial phonon modes suggests that the conventional models of heat transfer at interfaces, which only use bulk phonon properties, are not accurate,” said Zhe Cheng, a Ph.D. graduate from Georgia Tech’s George W. Woodruff School of Mechanical Engineering who is now a postdoc at the University of Illinois at Urbana-Champaign (UIUC). “There is more space for research at the interfaces. Even though these modes are localized, they can contribute to thermal conductance across interfaces.”
    The discovery opens a new pathway for consideration when engineering thermal conductance at interfaces for electronics cooling and other applications where phonons are majority heat carriers at material interfaces.
    “These results will lead to great progress in real-world engineering applications for thermal management of power electronics,” said co-author Samuel Graham, a professor in the Woodruff School of Mechanical Engineering at Georgia Tech and new dean of engineering at University of Maryland. “Interfacial phonon modes should exist widely at solid interfaces. The understanding and manipulation of these interface modes will give us the opportunity to enhance thermal conductance across technologically-important interfaces, for example, GaN-SiC, GaN-diamond, β-Ga2O3-SiC, and β-Ga2O3-diamond interfaces.”
    Presence of Interfacial Phonon Modes Confirmed in Lab
    The researchers observed the interfacial phonon modes experimentally at a high-quality Si-Ge epitaxial interface using Raman spectroscopy and high-energy-resolution electron energy-loss spectroscopy (EELS). To determine the role of interfacial phonon modes in heat transfer across interfaces, they used a technique called time-domain thermoreflectance in labs at Georgia Tech and UIUC to measure the temperature-dependent thermal conductance across these interfaces.
    They also observed a clean additional peak in Raman spectroscopy measurements when they measured the sample with the Si-Ge interface, a peak that did not appear when they measured a Si wafer or a Ge wafer with the same system. Both the observed interfacial modes and the thermal boundary conductance were fully captured by molecular dynamics (MD) simulations and were confined to the interfacial region, as predicted by theory.
    “This research is the result of great team work with all the collaborators,” said Graham. “Without this team and the unique tools that were available to us, this work would not have been possible.”
    Moving forward the researchers plan to continue to pursue the measurement and prediction of interfacial modes, increase the understanding of their contribution to heat transfer, and determine ways to manipulate these phonon modes to increase thermal transport. Breakthroughs in this area could lead to better performance in semiconductors used in satellites, 5G devices, and advanced radar systems, among other devices.
    The epitaxial Si-Ge samples used in this research were grown at the U.S. Naval Research Laboratory. The TEM and EELS measurements were done at the University of California, Irvine and at Oak Ridge National Laboratory. The MD simulations were performed by the University of Notre Dame. The XRD study was done at UCLA.
    This work was financially supported by the U.S. Office of Naval Research under a MURI project. The EELS study at UC Irvine was supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Georgia Institute of Technology. Note: Content may be edited for style and length.

  • Study finds artificial intelligence accurately detects fractures on x-rays, alerts human readers

    Emergency room and urgent care clinics are typically busy and patients often have to wait many hours before they can be seen, evaluated and receive treatment. Waiting for x-rays to be interpreted by radiologists can contribute to this long wait time because radiologists often read x-rays for a large number of patients.
    A new study has found that artificial intelligence (AI) can help physicians in interpreting x-rays after an injury and suspected fracture.
    “Our AI algorithm can quickly and automatically detect x-rays that are positive for fractures and flag those studies in the system so that radiologists can prioritize reading x-rays with positive fractures. The system also highlights regions of interest with bounding boxes around areas where fractures are suspected. This can potentially contribute to less waiting time at the time of hospital or clinic visit before patients can get a positive diagnosis of fracture,” explained corresponding author Ali Guermazi, MD, PhD, chief of radiology at VA Boston Healthcare System and Professor of Radiology & Medicine at Boston University School of Medicine (BUSM).
    Fracture interpretation errors represent up to 24 percent of harmful diagnostic errors seen in the emergency department. Furthermore, inconsistencies in the radiographic diagnosis of fractures are more common during the evening and overnight hours (5 p.m. to 3 a.m.), likely related to non-expert reading and fatigue.
    The AI algorithm (AI BoneView) was trained on a very large number of x-rays from multiple institutions to detect fractures of the limbs, pelvis, torso, lumbar spine, and rib cage. Expert human readers (musculoskeletal radiologists, subspecialized radiologists with focused training in reading bone x-rays) defined the gold standard in this study, against which the performance of human readers with and without AI assistance was compared.
    A variety of readers were included to simulate real-life scenarios, including radiologists, orthopedic surgeons, emergency physicians and physician assistants, rheumatologists, and family physicians, all of whom read x-rays in real clinical practice to diagnose fractures in their patients. Each reader’s diagnostic accuracy for fractures, with and without AI assistance, was compared against the gold standard. The researchers also assessed the diagnostic performance of AI alone against the gold standard. AI assistance helped reduce missed fractures by 29% and increased readers’ sensitivity by 16%, and by 30% for exams with more than one fracture, while improving specificity by 5%.
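    For readers less familiar with these metrics: sensitivity is the fraction of exams with a true fracture that a reader flags, and specificity is the fraction of fracture-free exams a reader correctly clears. The short sketch below, using made-up counts rather than the study’s data, shows how such figures are computed and how a roughly 29% reduction in missed fractures translates into a sensitivity gain.
    ```python
    # Worked example with hypothetical counts (not data from the study).
    def sensitivity(tp, fn):
        """Fraction of exams with a true fracture that the reader flags."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """Fraction of fracture-free exams that the reader correctly clears."""
        return tn / (tn + fp)

    # Hypothetical reader, 100 fractured and 100 normal exams, without AI assistance.
    sens_unaided = sensitivity(tp=76, fn=24)   # 24 fractures missed
    spec_unaided = specificity(tn=85, fp=15)

    # The same hypothetical reader with AI assistance misses fewer fractures.
    sens_aided = sensitivity(tp=83, fn=17)     # 17 fractures missed
    spec_aided = specificity(tn=89, fp=11)

    print(f"sensitivity: {sens_unaided:.2f} -> {sens_aided:.2f}")
    print(f"specificity: {spec_unaided:.2f} -> {spec_aided:.2f}")
    print(f"missed fractures reduced by {(24 - 17) / 24:.0%}")   # ~29% relative reduction
    ```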
    Guermazi believes that AI can be a powerful tool to help radiologists and other physicians improve diagnostic performance and increase efficiency, while potentially improving the patient experience at the time of a hospital or clinic visit. “Our study was focused on fracture diagnosis, but a similar concept can be applied to other diseases and disorders. Our ongoing research interest is how best to utilize AI to help human healthcare providers improve patient care, rather than having AI replace human healthcare providers. Our study showed one such example,” he added.
    These findings appear online in the journal Radiology.
    Funding for this study was provided by GLEAMER Inc.
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.

  • Face detection in untrained deep neural networks?

    Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity of facial images can arise even in completely untrained deep neural networks.
    This new finding has provided revelatory insights into mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, also making a significant impact on our understanding of the origin of early brain functions before sensory experiences.
    The study published in Nature Communications on December 16 demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that they show the characteristics of those observed in biological brains.
    The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuron level. Neurons that selectively respond to faces are observed in young animals of various species, and this has raised intense debate over whether face-selective neurons can arise innately in the brain or whether they require visual experience.
    Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wirings in untrained deep neural networks. The team showed that the character of this innate face-selectivity is comparable to that observed with face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face detection tasks.
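    The paper’s analysis is considerably more involved, but the core measurement can be sketched as follows: present face and non-face images to a randomly initialized, completely untrained convolutional network and compute a selectivity index per unit from the responses. The snippet below is a minimal, hypothetical illustration using PyTorch, with random tensors standing in for the image sets; it shows the type of measurement, not the authors’ actual network or stimuli.
    ```python
    # Minimal sketch: face-selectivity index of units in an untrained CNN.
    # Random tensors stand in for face / non-face image sets (illustration only).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # A small, randomly initialized feedforward network (no training whatsoever).
    net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=7, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # one response per unit (channel)
    )

    faces = torch.rand(200, 3, 96, 96)      # placeholder for a face image set
    objects = torch.rand(200, 3, 96, 96)    # placeholder for non-face object images

    with torch.no_grad():
        r_face = net(faces)                 # (200, 64) unit responses to faces
        r_obj = net(objects)                # (200, 64) unit responses to non-face images

    # Face-selectivity index per unit: normalized difference of mean responses.
    mean_f, mean_o = r_face.mean(0), r_obj.mean(0)
    fsi = (mean_f - mean_o) / (mean_f + mean_o + 1e-8)

    print("units with FSI > 0.33 (a common face-selectivity cut-off):",
          int((fsi > 0.33).sum()))
    ```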
    These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.
    Professor Paik said, “Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning.”
    He continued, “Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence.”
    This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  • Swinging on the quantum level

    After the “first quantum revolution” — the development of devices such as lasers and the atomic clock — the “second quantum revolution” is currently in full swing. Experts from all over the world are developing fundamentally new technologies based on quantum physics. One key application is quantum communication, where information is written and sent in light. For many applications making use of quantum effects, the light has to be in a certain state — namely a single-photon state. But what is the best way of generating such single-photon states? In the journal PRX Quantum, researchers from Münster, Bayreuth, and Berlin (Germany) have now proposed an entirely new way of preparing quantum systems in order to develop components for quantum technology.
    In the experts’ view it is highly promising to use quantum systems for generating single photon states. One well-known example of such a quantum system is a quantum dot. This is a semiconductor structure, just a few nanometres in size. Quantum dots can be controlled using laser pulses. Although quantum dots have properties similar to those of atoms, they are embedded in a crystal matrix, which is often more practical for applications. “Quantum dots are excellent for generating single photons, and that is something we are already doing in our labs almost every day,” says Dr. Tobias Heindel, who runs an experimental lab for quantum communication at the Technical University of Berlin. “But there is still much room for improvement, especially in transferring this technology from the lab to real applications,” he adds.
    One difficulty that has to be overcome is to separate the generated single photons from the exciting laser pulse. In their work, the researchers propose an entirely new method of solving this problem. “The excitation exploits a swing-up process in the quantum system,” explains Münster University’s Thomas Bracht, the lead author of the study. “For this, we use one or more laser pulses which have frequencies which differ greatly from those in the system. This makes spectral filtering very easy.”
    Scientists use the term “swing-up process” to describe a particular behaviour of the particles excited by the laser light in the quantum system — the electrons or, to be more precise, electron-hole pairs (excitons). Here, light from two lasers that emit pulses almost simultaneously is used. The interaction of the two pulses with one another produces a rapid modulation, and in each modulation cycle the particle is excited a little and then dips back towards the ground state. It does not, however, fall all the way back to its previous level, but is excited more strongly with each swing-up until it reaches the maximum state. The advantage of this method is that the laser light does not have the same frequency as the light emitted by the excited particles, which means that photons generated by the quantum dot can be clearly distinguished.
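    The dynamics described above can be reproduced numerically by integrating a driven two-level system (ground state and exciton) in a rotating frame, with the two detuned pulse envelopes carrying their detunings as phase factors. The sketch below is such a toy integration; the pulse areas and detunings are placeholder values chosen for illustration, not the parameters reported in the paper, so the occupation it prints should not be read as the paper’s result.
    ```python
    # Toy simulation: two-level emitter driven by two red-detuned Gaussian pulses,
    # integrated in the rotating frame under the rotating-wave approximation.
    # All parameter values below are placeholders, not those from the paper.
    import numpy as np
    from scipy.linalg import expm

    HBAR = 0.6582  # meV * ps

    def envelope(t, area, t0, tau):
        """Gaussian Rabi-frequency envelope (1/ps) with the given pulse area (rad)."""
        return area / (np.sqrt(2.0 * np.pi) * tau) * np.exp(-((t - t0) ** 2) / (2.0 * tau ** 2))

    # Two pulses detuned below the transition (detunings in meV, converted to rad/ps).
    det1, det2 = -5.0 / HBAR, -8.0 / HBAR
    area1, area2 = 20.0 * np.pi, 15.0 * np.pi
    tau = 3.0        # pulse duration (ps)

    dt = 0.002
    times = np.arange(-15.0, 15.0, dt)
    psi = np.array([1.0, 0.0], dtype=complex)   # start in the ground state
    occupation = []

    for t in times:
        # Combined drive in the frame rotating at the emitter's transition frequency.
        omega = (envelope(t, area1, 0.0, tau) * np.exp(1j * det1 * t)
                 + envelope(t, area2, 0.0, tau) * np.exp(1j * det2 * t))
        H = 0.5 * np.array([[0.0, np.conj(omega)],
                            [omega, 0.0]])       # Hamiltonian in units of 1/ps (hbar = 1)
        psi = expm(-1j * H * dt) @ psi           # propagate one small time step
        occupation.append(abs(psi[1]) ** 2)

    print(f"maximum exciton occupation reached: {max(occupation):.3f}")
    ```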
    The team simulated this process in the quantum system, thus providing guidelines for experimental implementation. “We also explain the physics of the swing-up process, which helps us to gain a better understanding of the dynamics in the quantum system,” says associate professor Dr. Doris Reiter, who led the study.
    In order to be able to use the photons in quantum communication, they have to possess certain properties. In addition, any preparation of the quantum system should not be negatively influenced by environmental processes or disruptive influences. In quantum dots, the interaction with the surrounding semiconductor material in particular is often a big problem for such preparation schemes. “Our numerical simulations show that the properties of the photons generated after the swing-up process are comparable with the results of established methods for generating single photons, which are less practical,” adds Prof. Martin Axt, who heads the team of researchers from Bayreuth.
    The study constitutes theoretical work. As a result of the collaboration between theoretical and experimental groups, however, the proposal is very close to realistic experimental laboratory conditions, and the authors are confident that an experimental implementation of the scheme will soon be possible. With their results, the researchers are taking a further step towards developing the quantum technologies of tomorrow.
    Story Source:
    Materials provided by University of Münster. Note: Content may be edited for style and length.

  • IT security: Computer attacks with laser light

    Computer systems that are physically isolated from the outside world (air-gapped) can still be attacked. This is demonstrated by IT security experts of the Karlsruhe Institute of Technology (KIT) in the LaserShark project. They show that data can be transmitted to light-emitting diodes of regular office devices using a directed laser. With this, attackers can secretly communicate with air-gapped computer systems over distances of several meters. In addition to conventional information and communication technology security, critical IT systems need to be protected optically as well.
    Hackers attack computers with lasers. This sounds like a scene from the latest James Bond movie, but it is actually possible. In early December 2021, researchers from KIT, TU Braunschweig, and TU Berlin presented the LaserShark attack at the 37th Annual Computer Security Applications Conference (ACSAC). This research project focuses on hidden communication via optical channels. Computers or networks in critical infrastructures are often physically isolated to prevent external access. “Air-gapping” means that these systems have neither wired nor wireless connections to the outside world. Previous attempts to bypass such protection via electromagnetic, acoustic, or optical channels only work at short distances or low data rates. Moreover, they frequently allow for data exfiltration only, that is, receiving data.
    Hidden Optical Channel Uses LEDs in Commercially Available Office Devices
    The Intelligent System Security Group of KASTEL (Institute of Information Security and Dependability) at KIT, in cooperation with researchers from TU Braunschweig and TU Berlin, has now demonstrated a new attack: with a directed laser beam, an adversary can introduce data into air-gapped systems and retrieve data from them without additional hardware on-site at the attacked device. “This hidden optical communication uses light-emitting diodes already built into office devices, for instance, to display status messages on printers or telephones,” explains Professor Christian Wressnegger, Head of the Intelligent System Security Group of KASTEL. Light-emitting diodes (LEDs) can receive light, although they are not designed to do so.
    Data Are Transmitted in Both Directions
    By directing laser light to already installed LEDs and recording their response, the researchers establish a hidden communication channel over a distance of up to 25 m that can be used bidirectionally (in both directions). It reaches data rates of 18.2 kilobits per second inwards and 100 kilobits per second outwards. This optical attack is possible in commercially available office devices used at companies, universities, and authorities. “The LaserShark project demonstrates how important it is to additionally protect critical IT systems optically next to conventional information and communication technology security measures,” Christian Wressnegger says.
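    As a conceptual illustration of this kind of optical channel (not the actual LaserShark protocol or its data rates), the sketch below encodes bytes as simple on-off keying: each bit switches the laser on or off for one symbol period, and the receiving side recovers the bits by thresholding the small voltage the incoming light induces on an ordinary status LED. The channel model, voltage levels, and threshold are all hypothetical.
    ```python
    # Conceptual sketch of an on-off-keying optical covert channel
    # (not the actual LaserShark protocol): bits modulate the laser, and the
    # receiver thresholds the voltage induced on an ordinary status LED.
    import numpy as np

    def encode(data: bytes) -> np.ndarray:
        """Turn bytes into a 0/1 symbol stream (laser off/on per symbol period)."""
        bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
        return bits.astype(float)

    def channel(symbols: np.ndarray, noise=0.02, rng=np.random.default_rng(0)):
        """Hypothetical physical channel: LED photovoltage plus measurement noise."""
        return 0.3 * symbols + rng.normal(0.0, noise, size=symbols.size)

    def decode(samples: np.ndarray) -> bytes:
        """Threshold the sampled LED voltage and repack the bits into bytes."""
        bits = (samples > 0.15).astype(np.uint8)
        return np.packbits(bits).tobytes()

    message = b"air-gapped"
    received = decode(channel(encode(message)))
    assert received == message
    print(received)
    ```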
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.