More stories

  • Artificial intelligence may help predict — possibly prevent — sudden cardiac death

    Predicting sudden cardiac death, and perhaps even addressing a person’s risk to prevent future death, may be possible through artificial intelligence (AI), offering a new approach to prevention and global health strategies, according to preliminary research to be presented at the American Heart Association’s Resuscitation Science Symposium 2023. The meeting, Nov. 11-12 in Philadelphia, is a premier global exchange of the most recent advances related to treating cardiopulmonary arrest and life-threatening traumatic injury.
    “Sudden cardiac death, a public health burden, represents 10% to 20% of overall deaths. Predicting it is difficult, and the usual approaches fail to identify high-risk people, particularly at an individual level,” said Xavier Jouven, M.D., Ph.D., the lead author of the study and professor of cardiology and epidemiology at the Paris Cardiovascular Research Center, Inserm U970-University of Paris. “We proposed a new approach not restricted to the usual cardiovascular risk factors but encompassing all medical information available in electronic health records.”
    The research team used AI to analyze medical information from registries and databases in Paris, France, and Seattle for 25,000 people who had died from sudden cardiac arrest and 70,000 people from the general population, with the two groups matched by age, sex and residential area. The data, representing more than 1 million hospital diagnoses and 10 million medication prescriptions, was gathered from medical records covering up to ten years before each death. From this data, the researchers built nearly 25,000 equations with personalized health factors to identify people at very high risk of sudden cardiac death. Additionally, they developed a customized risk profile for each individual in the study.
    The personalized risk equations included a person’s medical details, such as treatment for high blood pressure and history of heart disease, as well as mental and behavioral disorders, including alcohol abuse. The analysis identified the factors most likely to decrease or increase the risk of sudden cardiac death, expressed as a percentage over a given time frame (for example, an 89% risk of sudden cardiac death within the next three months).
    The AI analysis was able to identify people with more than a 90% risk of dying suddenly; they accounted for more than a quarter of all cases of sudden cardiac death.
    “We have been working for almost 30 years in the field of sudden cardiac death prediction; however, we did not expect to reach such a high level of accuracy. We also discovered that the personalized risk factors are very different between participants and often come from different medical fields (a mix of neurological, psychiatric, metabolic and cardiovascular data) — a picture difficult to catch for the medical eyes and brain of a specialist in one given field,” said Jouven, who is also founder of the Paris Sudden Death Expertise Center. “While doctors have efficient treatments such as correction of risk factors, specific medications and implantable defibrillators, the use of AI is necessary to detect, in a given subject, a succession of medical information registered over the years that forms a trajectory associated with an increased risk of sudden cardiac death. We hope that with a personalized list of risk factors, patients will be able to work with their clinicians to reduce those risk factors and ultimately decrease the potential for sudden cardiac death.”
    Among the study’s limitations is that the prediction models may not transfer directly beyond this research. In addition, the medical data collected in electronic health records sometimes include proxies instead of raw data, and the data collected may differ among countries, requiring adaptation of the prediction models.
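    The study itself does not publish its models, but the general shape of the technique, fitting a risk model to coded EHR features for a case-control sample, can be sketched in a few lines. Everything below (the toy cohort, the binary feature layout, the logistic model) is an illustrative assumption, not the authors’ actual pipeline:

    ```python
    # Illustrative sketch only: a risk model fit on binary EHR features
    # (diagnoses, prescriptions) for a case-control sample.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy cohort: rows are people, columns are EHR codes observed in the
    # ten years before the index date (1 = present, 0 = absent).
    n_cases, n_controls, n_codes = 250, 700, 40
    X = rng.integers(0, 2, size=(n_cases + n_controls, n_codes))
    y = np.array([1] * n_cases + [0] * n_controls)  # 1 = sudden cardiac death

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Personalized risk estimate for one individual's record.
    risk = model.predict_proba(X[:1])[0, 1]
    print(f"estimated risk: {risk:.1%}")

    # The largest coefficients point to the codes driving risk in this cohort.
    top = np.argsort(np.abs(model.coef_[0]))[::-1][:5]
    print("most influential EHR codes (column indices):", top)
    ```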

  • Brain implant may enable communication from thoughts alone

    A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.
    Appearing Nov. 6 in the journal Nature Communications, the new technology might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.
    “There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”
    Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.
    The lag between spoken and decoded speech rates is partially due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.
    To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.
    For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.
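    As a rough illustration of what “decoding” means here: a model maps a short window of multi-channel neural activity to the speech sound the person is attempting to produce. The sketch below uses random stand-in data and a plain linear classifier; the shapes, labels and model are assumptions for illustration, not the Duke team’s decoder:

    ```python
    # Illustrative sketch: classify attempted phonemes from windows of
    # multi-channel neural activity. Random data stands in for recordings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    n_trials, n_channels, n_timepoints = 400, 256, 50
    phonemes = np.array(["a", "k", "s", "t"])

    # One 256-channel activity window per attempted phoneme.
    X = rng.normal(size=(n_trials, n_channels, n_timepoints))
    y = rng.choice(phonemes, size=n_trials)

    # Flatten each window into a feature vector; real pipelines typically
    # extract per-channel band-power features (e.g., high-gamma) instead.
    X_flat = X.reshape(n_trials, -1)

    clf = LogisticRegression(max_iter=1000).fit(X_flat[:300], y[:300])
    print("held-out accuracy:", clf.score(X_flat[300:], y[300:]))
    ```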

  • Collective intelligence can help reduce medical misdiagnoses

    Researchers from the Max Planck Institute for Human Development, the Institute for Cognitive Sciences and Technologies (ISTC), and the Norwegian University of Science and Technology developed a collective intelligence approach to increase the accuracy of medical diagnoses. Their work was recently published in the journal PNAS.
    An estimated 250,000 people die from preventable medical errors in the U.S. each year. Many of these errors originate during the diagnostic process. A powerful way to increase diagnostic accuracy is to combine the diagnoses of multiple diagnosticians into a collective solution, yet there has been a dearth of methods for aggregating independent diagnoses in general medical diagnostics. The researchers therefore introduced a fully automated solution using knowledge engineering methods.
    The researchers tested their solution on 1,333 medical cases provided by The Human Diagnosis Project (Human Dx), each of which was independently diagnosed by 10 diagnosticians. The collective solution substantially increased diagnostic accuracy: Single diagnosticians achieved 46% accuracy, whereas pooling the decisions of 10 diagnosticians increased accuracy to 76%. Improvements occurred across medical specialties, chief complaints, and diagnosticians’ tenure levels. “Our results show the life-saving potential of tapping into the collective intelligence,” says first author Ralf Kurvers. He is a senior research scientist at the Center for Adaptive Rationality of the Max Planck Institute for Human Development and his research focuses on social and collective decision making in humans and animals.
    Collective intelligence has been proven to boost decision accuracy across many domains, such as geopolitical forecasting, investment, and diagnostics in radiology and dermatology (e.g., Kurvers et al., PNAS, 2016). However, collective intelligence has been mostly applied to relatively simple decision tasks. Applications in more open-ended tasks, such as emergency management or general medical diagnostics, are largely lacking due to the challenge of integrating unstandardized inputs from different people. To overcome this hurdle, the researchers used semantic knowledge graphs, natural language processing, and the SNOMED CT medical ontology, a comprehensive multilingual clinical terminology, for standardization.
    “A key contribution of our work is that, while the human-provided diagnoses maintain their primacy, our aggregation and evaluation procedures are fully automated, avoiding possible biases in the generation of the final diagnosis and allowing the process to be more time- and cost-efficient,” adds co-author Vito Trianni from the Institute for Cognitive Sciences and Technologies (ISTC) in Rome.
    The researchers are currently collaborating — along with other partners — within the HACID project to bring their application one step closer to the market. The EU-funded project will explore a new approach that brings together human experts and AI-supported knowledge representation and reasoning in order to create new tools for decision making in various domains. The application of the HACID technology to medical diagnostics showcases one of the many opportunities to benefit from a digitally based health system and accessible data.
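    The paper’s pipeline standardizes free-text diagnoses with NLP and the SNOMED CT ontology before pooling them. A toy version of the underlying idea, normalizing each diagnostician’s ranked differential to shared terms and pooling with rank-based scoring, might look like the sketch below; the synonym map and Borda-style scoring are illustrative assumptions, not the authors’ exact method:

    ```python
    # Toy rank-based aggregation of independent diagnoses. A hand-written
    # synonym map stands in for SNOMED CT / NLP-based standardization.
    from collections import defaultdict

    SYNONYMS = {  # hypothetical stand-in for ontology lookup
        "heart attack": "myocardial infarction",
        "mi": "myocardial infarction",
        "pe": "pulmonary embolism",
    }

    def normalize(term: str) -> str:
        t = term.strip().lower()
        return SYNONYMS.get(t, t)

    def aggregate(differentials: list[list[str]]) -> list[tuple[str, float]]:
        """Borda-style pooling: higher-ranked diagnoses earn more points."""
        scores = defaultdict(float)
        for ranking in differentials:
            for rank, term in enumerate(ranking):
                scores[normalize(term)] += len(ranking) - rank
        return sorted(scores.items(), key=lambda kv: -kv[1])

    differentials = [
        ["MI", "pulmonary embolism", "pericarditis"],
        ["heart attack", "aortic dissection"],
        ["PE", "myocardial infarction"],
    ]
    print(aggregate(differentials)[0])  # ('myocardial infarction', 6.0)
    ```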

  • Photo battery achieves competitive voltage

    Researchers from the Universities of Freiburg and Ulm have developed a monolithically integrated photo battery using organic materials.
    Networked intelligent devices and sensors can improve the energy efficiency of consumer products and buildings by monitoring their consumption in real time. Miniature devices like these, developed under the concept of the Internet of Things, require energy sources that are as compact as possible in order to function autonomously. Monolithically integrated batteries that simultaneously generate, convert, and store energy in a single system could serve this purpose.
    A team of scientists at the University of Freiburg’s Cluster of Excellence Living, Adaptive, and Energy-Autonomous Materials Systems (livMatS) has developed a monolithically integrated photo battery consisting of an organic polymer-based battery and a multi-junction organic solar cell. The battery, presented by Rodrigo Delgado Andrés and Dr. Uli Würfel of the University of Freiburg and by Robin Wessling and Prof. Dr. Birgit Esser of the University of Ulm, is the first monolithically integrated photo battery made of organic materials to achieve a discharge potential of 3.6 volts. It is thus among the first systems of this kind capable of powering miniature devices. The team published their results in the journal Energy & Environmental Science.
    Combination of a multi-junction solar cell and a dual-ion battery
    The researchers developed a scalable method for the photo battery that allows them to manufacture organic solar cells with five active layers. “The system achieves relatively high voltages of 4.2 volts with this solar cell,” explains Wessling. The team combined this multi-junction solar cell with a so-called dual-ion battery, which, unlike the cathodes of conventional lithium batteries, can be charged at high currents. With careful control of illumination intensity and discharge rates, a photo battery constructed in this way is capable of rapid charging in less than 15 minutes at discharge capacities of up to 22 milliampere-hours per gram (mAh g⁻¹). Combined with the averaged discharge potential of 3.6 volts, the devices can provide an energy density of 69 milliwatt-hours per gram (mWh g⁻¹) and a power density of 95 milliwatts per gram (mW g⁻¹). “Our system thus lays the foundation for more in-depth research and further developments in the area of organic photo batteries,” says Wessling.
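    As a quick plausibility check (our arithmetic, not a calculation from the paper), gravimetric energy density is roughly the average discharge capacity times the average discharge voltage, so the reported 69 mWh g⁻¹ at 3.6 V implies an average capacity just under the reported 22 mAh g⁻¹ peak:

    ```python
    # Back-of-envelope check of the reported figures.
    avg_voltage_V = 3.6        # averaged discharge potential (reported)
    energy_mWh_per_g = 69.0    # energy density (reported)

    # Implied average discharge capacity:
    capacity_mAh_per_g = energy_mWh_per_g / avg_voltage_V
    print(f"implied average capacity: {capacity_mAh_per_g:.1f} mAh/g")  # ~19.2
    ```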

  • Researchers develop solid-state thermal transistor for better heat management

    A team of researchers from UCLA has unveiled a first-of-its-kind stable and fully solid-state thermal transistor that uses an electric field to control a semiconductor device’s heat movement.
    The group’s study, which will be published in the Nov. 3 issue of Science, details how the device works and its potential applications. With its speed and performance, the transistor could open new frontiers in heat management of computer chips through atomic-level design and molecular engineering. The advance could also further the understanding of how heat is regulated in the human body.
    “The precision control of how heat flows through materials has been a long-held but elusive dream for physicists and engineers,” said the study’s co-author Yongjie Hu, a professor of mechanical and aerospace engineering at the UCLA Samueli School of Engineering. “This new design principle takes a big leap toward that, as it manages the heat movement with the on-off switching of an electric field, just like how it has been done with electrical transistors for decades.”
    Electrical transistors are the foundational building blocks of modern information technology. First developed by Bell Labs in the 1940s, they have three terminals: a gate, a source and a drain. When an electric field is applied through the gate, it regulates how electricity (in the form of electrons) moves through the chip. These semiconductor devices can amplify or switch electrical signals and power. As transistors have continued to shrink over the years, billions now fit on a single chip, and the heat generated by the movement of electrons increasingly affects chip performance. Conventional heat sinks passively draw heat away from hotspots, but finding a more dynamic way to actively regulate heat has remained a challenge.
    While there have been efforts to tune thermal conductivity, performance has suffered from reliance on moving parts, ionic motion, or liquid components. The result has been slow switching speeds for heat movement, on the order of minutes or far slower, creating issues in performance reliability as well as incompatibility with semiconductor manufacturing.
    The new thermal transistor, which combines a field effect (the modulation of a material’s thermal conductivity by an external electric field) with a fully solid-state design (no moving parts), offers high performance and compatibility with integrated circuits in semiconductor manufacturing processes. The team’s design uses the field effect on charge dynamics at an atomic interface to switch and amplify a heat flux continuously while consuming negligible power.
    The UCLA team demonstrated electrically gated thermal transistors that achieved record-high performance, with a switching speed of more than 1 megahertz (1 million cycles per second), a 1,300% tunability in thermal conductance and reliable performance over more than 1 million switching cycles.

  • AI trained to identify least green homes

    ‘Hard-to-decarbonize’ (HtD) houses are responsible for over a quarter of all direct housing emissions — a major obstacle to achieving net zero — but are rarely identified or targeted for improvement.
    Now a new ‘deep learning’ model trained by researchers from Cambridge University’s Department of Architecture promises to make it far easier, faster and cheaper to identify these high priority problem properties and develop strategies to improve their green credentials.
    Houses can be ‘hard to decarbonize’ for various reasons, including their age, structure, location, socio-economic barriers and availability of data. Policymakers have tended to focus mostly on generic buildings or on specific hard-to-decarbonize technologies, but the study, published in the journal Sustainable Cities and Society, could help to change this.
    Maoran Sun, an urban researcher and data scientist, and his PhD supervisor Dr Ronita Bardhan, who leads Cambridge’s Sustainable Design Group, show that their AI model can classify HtD houses with 90% precision and expect this to rise as they add more data, work which is already underway.
    Dr Bardhan said: “This is the first time that AI has been trained to identify hard-to-decarbonize buildings using open source data.
    “Policymakers need to know how many houses they have to decarbonize, but they often lack the resources to perform detailed audits on every house. Our model can direct them to high-priority houses, saving them precious time and resources.”
    The model also helps authorities to understand the geographical distribution of HtD houses, enabling them to target and deploy interventions efficiently.
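    Note that 90% precision is a specific claim: of the houses the model flags as hard to decarbonize, about nine in ten truly are. A minimal illustration with made-up confusion counts (not figures from the paper):

    ```python
    # Precision vs. recall for an HtD classifier; counts are illustrative.
    true_positives = 90   # flagged HtD and actually HtD
    false_positives = 10  # flagged HtD but not HtD
    false_negatives = 30  # HtD houses the model missed

    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    print(f"precision={precision:.0%}  recall={recall:.0%}")  # 90%, 75%
    ```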

  • What a ‘2D’ quantum superfluid feels like to the touch

    Researchers from Lancaster University in the UK have discovered how superfluid helium-3 (³He) would feel if you could put your hand into it.
    The interface between the exotic world of quantum physics and the classical physics of human experience is one of the major open problems in modern physics.
    Dr Samuli Autti is the lead author of the research published in Nature Communications.
    Dr Autti said: “In practical terms, we don’t know the answer to the question ‘how does it feel to touch quantum physics?’
    “These experimental conditions are extreme and the techniques complicated, but I can now tell you how it would feel if you could put your hand into this quantum system.
    “Nobody has been able to answer this question during the 100-year history of quantum physics. We now show that, at least in superfluid 3He, this question can be answered.”
    The experiments were carried out at about one ten-thousandth of a degree above absolute zero in a special refrigerator and made use of a mechanical resonator the size of a finger to probe the very cold superfluid.

  • Optical-fiber based single-photon light source at room temperature for next-generation quantum processing

    Quantum-based systems promise faster computing and stronger encryption for computation and communication systems. These systems can be built on fiber networks involving interconnected nodes which consist of qubits and single-photon generators that create entangled photon pairs.
    In this regard, rare-earth (RE) atoms and ions in solid-state materials are highly promising as single-photon generators. These materials are compatible with fiber networks and emit photons across a broad range of wavelengths. Due to their wide spectral range, optical fibers doped with these RE elements could find use in various applications, such as free-space telecommunication, fiber-based telecommunications, quantum random number generation, and high-resolution image analysis. However, so far, single-photon light sources have been developed using RE-doped crystalline materials at cryogenic temperatures, which limits the practical applications of quantum networks based on them.
    In a study published in Volume 20, Issue 4 of the journal Physical Review Applied on 16 October 2023, a team of researchers from Japan led by Associate Professor Kaoru Sanaka of Tokyo University of Science (TUS) successfully developed a single-photon light source consisting of ytterbium ions (Yb³⁺) doped into an amorphous silica optical fiber, operating at room temperature. Associate Professor Mark Sadgrove and Mr. Kaito Shimizu from TUS and Professor Kae Nemoto from the Okinawa Institute of Science and Technology Graduate University were also part of this study. This newly developed single-photon light source eliminates the need for expensive cooling systems and has the potential to make quantum networks more cost-effective and accessible.
    “Single-photon light sources are devices that control the statistical properties of photons, which represent the smallest energy units of light,” explains Dr. Sanaka. “In this study, we have developed a single-photon light source using an optical fiber material doped with optically active RE elements. Our experiments also reveal that such a source can be generated directly from an optical fiber at room temperature.”
    Ytterbium is an RE element with favorable optical and electronic properties, making it a suitable candidate for doping the fiber. It has a simple energy-level structure, and the ytterbium ion in its excited state has a long fluorescence lifetime of around one millisecond.
    To fabricate the ytterbium-doped optical fiber, the researchers tapered a commercially available ytterbium-doped fiber using a heat-and-pull technique, where a section of the fiber is heated and then pulled with tension to gradually reduce its diameter.
    Within the tapered fiber, individual RE atoms emit photons when excited with a laser. The separation between these RE atoms plays a crucial role in defining the fiber’s optical properties. For instance, if the average separation between the individual RE atoms exceeds the optical diffraction limit, which is determined by the wavelength of the emitted photons, the emitted light from these atoms appears as though it is coming from clusters rather than distinct individual sources.
    To confirm the nature of these emitted photons, the researchers employed an analytical method known as autocorrelation, which assesses the similarity between a signal and a delayed version of itself. By analyzing the emitted photon pattern with autocorrelation, the researchers observed non-resonant emission and obtained evidence of photon emission from a single ytterbium ion in the doped fiber.
    While the quality and quantity of emitted photons can be enhanced further, the developed optical fiber with ytterbium atoms can be manufactured without the need for expensive cooling systems. This overcomes a significant hurdle and opens doors to various next-generation quantum information technologies. “We have demonstrated a low-cost single-photon light source with a selectable wavelength and no need for a cooling system. Going ahead, it can enable various next-generation quantum information technologies, such as true random number generators, quantum communication, quantum logic operations, and high-resolution image analysis beyond the diffraction limit,” concludes Dr. Sanaka.
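    The autocorrelation measurement described above is conventionally the second-order correlation g²(τ): for a true single-photon emitter, coincidences between two detectors vanish at zero delay (g²(0) < 0.5 is the usual single-emitter criterion). The sketch below shows how such a histogram is built from photon arrival timestamps; it is a simplified illustration, not the authors’ analysis code:

    ```python
    # Simplified g2(tau) from photon timestamps in a Hanbury Brown-Twiss
    # setup. Uncorrelated toy data gives g2 ~ 1 at all delays; a single
    # emitter would show a dip toward 0 at tau = 0 (antibunching).
    import numpy as np

    def g2_histogram(t1, t2, bin_ns=1.0, max_tau_ns=50.0):
        """Histogram of arrival-time differences (t2 - t1) between detectors."""
        edges = np.arange(-max_tau_ns, max_tau_ns + bin_ns, bin_ns)
        counts = np.zeros(len(edges) - 1)
        for t in t1:
            # detector-2 clicks within the correlation window around t
            nearby = t2[(t2 > t - max_tau_ns) & (t2 < t + max_tau_ns)]
            counts += np.histogram(nearby - t, bins=edges)[0]
        return edges[:-1] + bin_ns / 2, counts

    rng = np.random.default_rng(2)
    t1 = np.sort(rng.uniform(0, 1e5, 5_000))  # detector 1 clicks (ns)
    t2 = np.sort(rng.uniform(0, 1e5, 5_000))  # detector 2 clicks (ns)

    taus, counts = g2_histogram(t1, t2)
    g2 = counts / counts[np.abs(taus) > 25].mean()  # normalize tails to 1
    print("g2 near zero delay:", g2[np.abs(taus) < 2].mean())
    ```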