More stories

  • New sensing checks for 3D printed products could overhaul manufacturing sector

    In the study, published today in the journal Waves in Random and Complex Media, researchers from the University of Bristol have derived a formula that can inform the design boundaries for a given component’s geometry and material microstructure.
    A commercially viable sensing technology and associated imaging algorithm to assess the quality of such components does not currently exist. If the additive manufacturing (3D printing) of metallic components could satisfy industrial safety and quality standards, there could be significant commercial advantages for the manufacturing sector.
    The key breakthrough is the use of ultrasonic array sensors, essentially the same as those used in medical imaging, for example to create images of babies in the womb. However, these new laser-based versions would not require the sensor to be in contact with the material.
    Author Professor Anthony Mulholland, head of the School of Engineering Maths and Technology, explained: “There is a potential sensing method using a laser-based ultrasonic array, and we are using mathematical modelling to inform the design of this equipment ahead of its in situ deployment.”
    The team built a mathematical model that incorporated the physics of ultrasonic waves propagating through a layered (as additively manufactured) metallic material, taking into account the variability between individual manufactured components.
    The mathematical formula is made up of the design parameters associated with the ultrasonic laser and the nature of the particular material. The output is a measure of how much information will be produced by the sensor to enable the mechanical integrity of the component to be assessed. The input parameters can then be varied to maximise this information content.
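    To illustrate the workflow rather than the paper’s actual formula: a minimal Python sketch that varies two hypothetical design parameters and keeps the combination maximising a stand-in information measure.

```python
# Hypothetical sketch of the design-optimisation loop described above.
# The objective function is a placeholder, not the paper's formula.
import itertools

def information_content(frequency_mhz: float, pitch_mm: float) -> float:
    """Stand-in objective: trades resolution (higher frequency) against
    attenuation in layered metal, and penalises coarse element spacing."""
    resolution_gain = frequency_mhz
    attenuation_loss = 0.02 * frequency_mhz ** 2
    grating_penalty = 5.0 * max(0.0, pitch_mm - 0.5)
    return resolution_gain - attenuation_loss - grating_penalty

frequencies_mhz = [2.0, 5.0, 10.0, 15.0]  # candidate ultrasound frequencies
pitches_mm = [0.2, 0.4, 0.6, 0.8]         # candidate array element pitches

best = max(itertools.product(frequencies_mhz, pitches_mm),
           key=lambda params: information_content(*params))
print(f"best design: {best[0]} MHz at {best[1]} mm pitch")
```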
    It is hoped their discovery will accelerate the design and deployment of this proposed sensing solution.

    Professor Mulholland added: “We can then work with our industry partners to produce a means of assessing the mechanical integrity of these safety critical components at the manufacturing stage.
    “This could then lead to radically new designs (by taking full advantage of 3D printing), quicker and more cost effective production processes, and significant commercial and economic advantage to UK manufacturing.”
    Now the team plan to use the findings to help their experimental collaborators, who are designing and building the laser-based ultrasonic arrays.
    These sensors will then be deployed in situ by robotic arms in a controlled additive manufacturing environment. They will maximise the information content in the data produced by the sensor and create bespoke imaging algorithms to generate tomographic images of the interior of components supplied by their industry partners. Destructive means will then be employed to assess the quality of the tomographic images produced.
    Professor Mulholland concluded: “Opening up 3D printing in the manufacture of safety critical components, such as those found in the aerospace industry, would provide significant commercial advantage to UK industry.
    “The lack of a means of assessing the mechanical integrity of such components is the major blockage to taking this exciting opportunity forward. This study has built a mathematical model that simulates the use of a new laser-based sensor that could provide the solution to this problem, and it will accelerate the sensor’s design and deployment.”

  • 2D materials rotate light polarization

    German and Indian physicists have shown that ultra-thin two-dimensional materials such as tungsten diselenide can rotate the polarisation of visible light by several degrees at certain wavelengths under small magnetic fields suitable for use on chips.
    It has been known for centuries that light exhibits wave-like behaviour in certain situations. Some materials are able to rotate the polarisation, i.e. the direction of oscillation, of a light wave as it passes through the material. This property is utilised in a central component of optical communication networks known as an “optical isolator” or “optical diode,” which allows light to propagate in one direction but blocks all light in the other direction. The scientists from the University of Münster, Germany, and the Indian Institute of Science Education and Research (IISER) in Pune, India, have published their findings in the journal Nature Communications.
    One of the problems with conventional optical isolators is that they are quite large with sizes ranging between several millimetres and several centimetres. As a result, researchers have not yet been able to create miniaturised integrated optical systems on a chip that are comparable to everyday silicon-based electronic technologies. Current integrated optical chips consist of only a few hundred elements on a chip. By comparison, a computer processor chip contains many billions of switching elements. The work of the German-Indian team is therefore a step forward in the development of miniaturised optical isolators. The 2D materials used by the researchers are only a few atomic layers thick and therefore a hundred thousand times thinner than a human hair.
    “In the future, two-dimensional materials could become the core of optical isolators and enable on-chip integration for today’s optical and future quantum optical computing and communication technologies,” says Prof Rudolf Bratschitsch from the University of Münster. Prof Ashish Arora from IISER adds: “Even the bulky magnets, which are also required for optical isolators, could be replaced by atomically thin 2D magnets.” This would drastically reduce the size of photonic integrated circuits.
    The team deciphered the mechanism responsible for the effect they found: Bound electron-hole pairs, so-called excitons, in 2D semiconductors rotate the polarisation of the light very strongly when the ultra-thin material is placed in a small magnetic field. According to Ashish Arora, “conducting such sensitive experiments on two-dimensional materials is not easy because the sample areas are very small.” The scientists had to develop a new measuring technique that is around 1,000 times faster than previous methods.
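    To picture what a rotation of “several degrees” means for a light wave, here is a minimal Jones-calculus sketch in Python; the 5-degree angle is an arbitrary illustrative value, not a measurement from the study.

```python
# Jones-calculus illustration of polarisation rotation. The angle is an
# arbitrary example; it is not a value reported in the study.
import numpy as np

def rotate_polarisation(jones_in: np.ndarray, theta_deg: float) -> np.ndarray:
    """Rotate a Jones vector, as a magneto-optical medium would."""
    t = np.deg2rad(theta_deg)
    rotation = np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])
    return rotation @ jones_in

horizontal = np.array([1.0, 0.0])           # light polarised along x
out = rotate_polarisation(horizontal, 5.0)  # "several degrees" of rotation

# Recover the angle the way an analyser measurement would.
measured_deg = np.rad2deg(np.arctan2(out[1], out[0]))
print(f"measured rotation: {measured_deg:.1f} degrees")
```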

  • Predicting cardiac arrhythmia 30 minutes before it happens

    Atrial fibrillation is the most common cardiac arrhythmia worldwide, affecting around 59 million people in 2019. This irregular heartbeat is associated with increased risks of heart failure, dementia and stroke. It constitutes a significant burden on healthcare systems, making its early detection and treatment a major goal. Researchers from the Luxembourg Centre for Systems Biomedicine (LCSB) of the University of Luxembourg have recently developed a deep-learning model capable of predicting the transition from a normal cardiac rhythm to atrial fibrillation. It gives early warnings on average 30 minutes before onset, with an accuracy of around 80%. These results, published in the scientific journal Patterns, pave the way for integration into wearable technologies, allowing early interventions and better patient outcomes.
    During atrial fibrillation, the heart’s upper chambers beat irregularly and are out of sync with the ventricles. Reverting to a regular rhythm can require intensive interventions, from shocking the heart back to normal sinus rhythm to the removal of a specific area responsible for faulty signals. Being able to predict an episode of atrial fibrillation early enough would allow patients to take preventive measures to keep their cardiac rhythm stable. However, current methods based on the analysis of heart rate and electrocardiogram (ECG) data are only able to detect atrial fibrillation right before its onset and do not provide an early warning.
    “In contrast, our work departs from this approach to a more prospective prediction model,” explains Prof. Jorge Goncalves, head of the Systems Control group at the LCSB. “We used heart rate data to train a deep learning model that can recognise different phases — sinus rhythm, pre-atrial fibrillation and atrial fibrillation — and calculate a ‘probability of danger’ that the patient will have an imminent episode.” As the heart approaches atrial fibrillation, this probability increases until it crosses a specific threshold, providing an early warning.
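    The thresholding logic is simple to sketch. Assuming a model that emits one danger probability per analysis window, a minimal Python version (threshold and values invented) could look like this:

```python
# Schematic early-warning logic: warn once the model's "probability of
# danger" crosses a threshold. Probabilities and threshold are invented.
from typing import Optional, Sequence

def first_warning(probabilities: Sequence[float],
                  threshold: float = 0.7) -> Optional[int]:
    """Index of the first window whose danger probability crosses the
    threshold, or None if no warning is raised."""
    for i, p in enumerate(probabilities):
        if p >= threshold:
            return i
    return None

danger = [0.10, 0.15, 0.20, 0.40, 0.55, 0.72, 0.85]  # one value per window
idx = first_warning(danger)
if idx is not None:
    print(f"early warning raised at window {idx}")
```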
    This artificial intelligence model, called WARN (Warning of Atrial fibRillatioN), was trained and tested on 24-hour recordings collected from 350 patients at Tongji Hospital (Wuhan, China) and gave early warnings, on average 30 minutes before the start of atrial fibrillation, with high accuracy. Compared to previous work on arrhythmia prediction, WARN is the first method to provide a warning well in advance of onset.
    “Another interesting aspect is that our model achieves high performance using only R-to-R intervals, basically just heart rate data, which can be acquired from easy-to-wear and affordable pulse signal recorders such as smartwatches,” highlights Dr Marino Gavidia, first author of the publication, who worked on this project during his PhD within the Systems Control group and the Doctoral Training Unit CriTiCS. “These devices can be used by patients on a daily basis, so our results open possibilities for the development of real-time monitoring and early warnings from comfortable wearable devices,” adds Dr Arthur Montanari, an LCSB researcher involved in the project.
    Additionally, the deep-learning model has a low computational cost and could be implemented in smartphones to process the data from a smartwatch, making it ideal for integration into wearable technologies. The long-term objective is for patients to be able to continuously monitor their cardiac rhythm and receive early warnings that give them sufficient time to take antiarrhythmic medication or use targeted treatments to prevent the onset of atrial fibrillation. This in turn would reduce emergency interventions and improve patient outcomes.
    “Moving forward, we will focus on developing personalised models. The daily use of a simple smartwatch constantly provides new information on personal heart dynamics, enabling us to continuously refine and retrain our model for that patient to achieve enhanced performance with even earlier warnings,” concludes Prof. Goncalves. “Eventually, this approach could even lead to new clinical trials and innovative therapeutic interventions.”

  • AI weather forecasts captured Ciarán’s destructive path

    Artificial intelligence (AI) can quickly and accurately predict the path and intensity of major storms, a new study has demonstrated.
    The research, based on an analysis of November 2023’s Storm Ciarán, suggests weather forecasts that use machine learning can produce predictions of similar accuracy to traditional forecasts, but faster, more cheaply, and with less computational power.
    Published in npj Climate and Atmospheric Science, the University of Reading study highlights the rapid progress and transformative potential of AI in weather prediction.
    Professor Andrew Charlton-Perez, who led the study, said: “AI is transforming weather forecasting before our eyes. Two years ago, modern machine learning techniques were rarely being applied to make weather forecasts. Now we have multiple models that can produce 10-day global forecasts in minutes.
    “There is a great deal we can learn about AI weather forecasts by stress-testing them on extreme events like Storm Ciarán. We can identify their strengths and weaknesses and guide the development of even better AI forecasting technology to help protect people and property. This is an exciting and important time for weather forecasting.”
    Promise and pitfalls
    To understand the effectiveness of AI-based weather models, scientists from the University of Reading compared AI and physics-based forecasts of Storm Ciarán, a deadly windstorm that hit northern and central Europe in November 2023, claiming 16 lives and leaving more than a million homes without power in France.
    The researchers used four AI models and compared their results with traditional physics-based models. The AI models, developed by tech giants Google, Nvidia and Huawei, were able to predict the storm’s rapid intensification and track 48 hours in advance. To a large extent, the AI forecasts were ‘indistinguishable’ from those of conventional forecasting models, the researchers said. The AI models also accurately captured the large-scale atmospheric conditions that fuelled Ciarán’s explosive development, such as its position relative to the jet stream, a narrow corridor of strong high-level winds.
    The machine learning technology underestimated the storm’s damaging winds, however. All four AI systems underestimated Ciarán’s maximum wind speeds, which in reality gusted at speeds of up to 111 knots at Pointe du Raz, Brittany. The authors were able to show that this underestimation was linked to some of the features of the storm, including the temperature contrasts near its centre, that were not well predicted by the AI systems.
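    A flavour of this kind of verification in Python: the 111-knot observation at Pointe du Raz is taken from the article, while the model names and forecast gusts below are invented placeholders.

```python
# Minimal forecast-verification sketch: bias of each model's predicted
# maximum gust against the observed value. Forecast numbers are invented;
# only the 111 kt observation comes from the article.
observed_gust_kt = 111.0  # Pointe du Raz, Brittany

forecast_gust_kt = {      # hypothetical AI-model outputs
    "model_a": 92.0,
    "model_b": 88.0,
    "model_c": 95.0,
    "model_d": 90.0,
}

for model, gust in sorted(forecast_gust_kt.items()):
    bias = gust - observed_gust_kt
    print(f"{model}: predicted {gust:.0f} kt, bias {bias:+.0f} kt")
```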
    To better protect people from extreme weather like Storm Ciarán, the researchers say further investigation of the use of AI in weather prediction is urgently needed. Development of machine learning models could mean artificial intelligence is routinely used in weather prediction in the near future, saving forecasters time and money.

  • Magnetic with a pinch of hydrogen

    Magnetic two-dimensional materials consisting of one or a few atomic layers have only recently become known and promise interesting applications, for example in the electronics of the future. So far, however, it has not been possible to control the magnetic states of these materials well enough. A German-American research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and Dresden University of Technology (TUD) now presents an innovative idea in the journal Nano Letters that could overcome this shortcoming: letting the 2D layer react with hydrogen.
    2D materials are ultra-thin, in some cases consisting of a single atomic layer. Due to their special properties, this still young class of materials offers exciting prospects for spintronics and data storage. In 2017, experts discovered a new variant: 2D materials that are magnetic. However, it has so far been difficult to switch these systems back and forth between two magnetic states through targeted chemical influences, a prerequisite for the construction of new types of electronic components. To overcome this problem, a research team from the HZDR and TUD led by junior research group leader Rico Friedrich set its sights on a special group of 2D materials: layers obtained from crystals with relatively strong chemical bonds, so-called non-van der Waals 2D materials.
    Twenty years ago, Konstantin Novoselov and Andre Geim, who would later win the Nobel Prize in Physics, were the first to produce a 2D material in a targeted manner: using adhesive tape, they peeled thin layers off a graphite crystal, thereby isolating single-layer carbon, so-called graphene. The simple trick worked because the individual layers of graphite are only loosely bound chemically. Incidentally, this is exactly what makes it possible to draw lines on paper with a pencil.
    “Only in recent years has it been possible to detach individual layers from crystals using liquid-based processes, in which the layers are much more strongly bound than in graphite,” explains Rico Friedrich, head of the “DRESDEN-concept” junior research group AutoMaT. “The resulting 2D materials are much more chemically active than graphene, for example.” The reason: these layers have unsaturated chemical bonds on their surface and therefore a strong tendency to bind with other substances.
    Turn 35 into 4
    Friedrich and his team came up with the following idea: if the reactive surface of these 2D materials were made to react with hydrogen, it should be possible to specifically influence the magnetic properties of the thin layers. However, it was unclear which of the 2D systems were particularly suitable for this. To answer this question, the experts combed through their previously developed database of 35 novel 2D materials and carried out detailed and extensive calculations using density functional theory. The challenge was to ensure the stability of the hydrogen-passivated systems in energetic, dynamic and thermal terms and to determine the correct magnetic state, a task that could only be accomplished with the support of several high-performance computing centers.
    When the hard work was done, four promising 2D materials remained, which the group examined more closely. “In the end, we were able to identify three candidates that could be magnetically activated by hydrogen passivation,” reports Friedrich. A material called cadmium titanate (CdTiO3) proved to be particularly remarkable: it becomes ferromagnetic, i.e. a permanent magnet, under the influence of hydrogen. The three hydrogen-treated candidates should be easy to control magnetically and could therefore be suitable for new types of electronic components. As these layers are extremely thin, they could be easily integrated into flat device components, an important aspect for potential applications.
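    The screening pipeline amounts to successive stability filters followed by a magnetic-state check. A toy Python sketch of that logic, with invented entries in place of the study’s DFT results:

```python
# Toy screening sketch: keep hydrogen-passivated candidates that pass
# energetic, dynamic and thermal stability checks, then select the
# magnetic ones. All entries are invented placeholders.
candidates = [
    {"name": "material_1", "energetic": True, "dynamic": True,
     "thermal": True, "magnetic_state": "ferromagnetic"},
    {"name": "material_2", "energetic": True, "dynamic": False,
     "thermal": True, "magnetic_state": "nonmagnetic"},
    {"name": "material_3", "energetic": True, "dynamic": True,
     "thermal": True, "magnetic_state": "antiferromagnetic"},
]

stable = [c for c in candidates
          if c["energetic"] and c["dynamic"] and c["thermal"]]
magnetic = [c["name"] for c in stable
            if c["magnetic_state"] != "nonmagnetic"]
print(magnetic)  # candidates worth a closer look
```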
    Experiments are already underway
    “The next step is to confirm our theoretical findings experimentally,” says Rico Friedrich. “And several research teams are already trying to do this, for example at the University of Kassel and the Leibniz Institute for Solid State and Materials Research in Dresden.” Research on 2D materials is also continuing at HZDR and TUD: among other things, Friedrich and his team are working on new types of 2D materials that could be relevant for energy conversion and storage in the long term. One focus is on the possible splitting of water into oxygen and hydrogen. The green hydrogen obtained this way could then be used, for example, as an energy storage medium for times when too little solar and wind power is available.

  • Despite AI advancements, human oversight remains essential

    State-of-the-art artificial intelligence systems known as large language models (LLMs) are poor medical coders, according to researchers at the Icahn School of Medicine at Mount Sinai. Their study, published in the April 19 online issue of NEJM AI, emphasizes the necessity for refinement and validation of these technologies before considering clinical implementation.
    The study extracted a list of more than 27,000 unique diagnosis and procedure codes from 12 months of routine care in the Mount Sinai Health System, while excluding identifiable patient data. Using the description for each code, the researchers prompted models from OpenAI, Google, and Meta to output the most accurate medical codes. The generated codes were compared with the original codes and errors were analyzed for any patterns.
    The investigators reported that all of the studied large language models, including GPT-4, GPT-3.5, Gemini-pro, and Llama-2-70b, showed limited accuracy (below 50 percent) in reproducing the original medical codes, highlighting a significant gap in their usefulness for medical coding. GPT-4 demonstrated the best performance, with the highest exact match rates for ICD-9-CM (45.9 percent), ICD-10-CM (33.9 percent), and CPT codes (49.8 percent).
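    The headline exact-match metric is straightforward to reproduce in outline. A minimal Python sketch, with invented code pairs standing in for the study’s data:

```python
# Sketch of an exact-match evaluation: compare model-generated codes
# against the original billed codes. The pairs below are illustrative.
def exact_match_rate(pairs):
    """Fraction of (original, generated) code pairs that match exactly."""
    matches = sum(1 for original, generated in pairs if original == generated)
    return matches / len(pairs)

results = [
    ("600.10", "600.10"),  # exact match
    ("600.10", "600.1"),   # close, but counts as a miss
    ("995.29", "995.89"),  # semantically near, textually different
]
print(f"exact match rate: {exact_match_rate(results):.1%}")
```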
    GPT-4 also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning. For example, when given the ICD-9-CM description “nodular prostate without urinary obstruction,” GPT-4 generated a code for “nodular prostate,” showcasing its comparatively nuanced understanding of medical terminology. However, even considering these technically correct codes, an unacceptably large number of errors remained.
    The next best-performing model, GPT-3.5, showed the greatest tendency toward vagueness: it had the highest proportion of incorrectly generated codes that were accurate in meaning but more general than the precise original codes. For example, when provided with the ICD-9-CM description “unspecified adverse effect of anesthesia,” GPT-3.5 generated a code for “other specified adverse effects, not elsewhere classified.”
    “Our findings underscore the critical need for rigorous evaluation and refinement before deploying AI technologies in sensitive operational areas like medical coding,” says study corresponding author Ali Soroush, MD, MS, Assistant Professor of Data-Driven and Digital Medicine (D3M), and Medicine (Gastroenterology), at Icahn Mount Sinai. “While AI holds great potential, it must be approached with caution and ongoing development to ensure its reliability and efficacy in health care.”
    One potential application for these models in the health care industry, say the investigators, is automating the assignment of medical codes for reimbursement and research purposes based on clinical text.

    “Previous studies indicate that newer large language models struggle with numerical tasks. However, the extent of their accuracy in assigning medical codes from clinical text had not been thoroughly investigated across different models,” says co-senior author Eyal Klang, MD, Director of the D3M’s Generative AI Research Program. “Therefore, our aim was to assess whether these models could effectively perform the fundamental task of matching a medical code to its corresponding official text description.”
    The study authors proposed that integrating LLMs with expert knowledge could automate medical code extraction, potentially enhancing billing accuracy and reducing administrative costs in health care.
    “This study sheds light on the current capabilities and challenges of AI in health care, emphasizing the need for careful consideration and additional refinement prior to widespread adoption,” says co-senior author Girish Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of D3M.
    The researchers caution that the study’s artificial task may not fully represent real-world scenarios where LLM performance could be worse.
    Next, the research team plans to develop tailored LLM tools for accurate medical data extraction and billing code assignment, aiming to improve quality and efficiency in health care operations.
    The study is titled “Generative Large Language Models are Poor Medical Coders: A Benchmarking Analysis of Medical Code Querying.”
    The remaining authors on the paper, all with Icahn Mount Sinai except where indicated, are: Benjamin S. Glicksberg, PhD; Eyal Zimlichman, MD (Sheba Medical Center and Tel Aviv University, Israel); Yiftach Barash, (Tel Aviv University and Sheba Medical Center, Israel); Robert Freeman, RN, MSN, NE-BC; and Alexander W. Charney, MD, PhD.
    This research was supported by the AGA Research Foundation’s 2023 AGA-Amgen Fellowship-to-Faculty Transition Award AGA2023-32-06 and an NIH UL1TR004419 award.
    The researchers affirm that the study was conducted without the use of any Protected Health Information (“PHI”).

  • Compact quantum light processing

    An international collaboration of researchers, led by Philip Walther at the University of Vienna, has achieved a significant breakthrough in quantum technology with the successful demonstration of quantum interference among several single photons using a novel resource-efficient platform. The work, published in the journal Science Advances, represents a notable advancement in optical quantum computing and paves the way for more scalable quantum technologies.
    Interference among photons, a fundamental phenomenon in quantum optics, serves as a cornerstone of optical quantum computing. It involves harnessing the properties of light, such as its wave-particle duality, to induce interference patterns, enabling the encoding and processing of quantum information.
    In traditional multi-photon experiments, spatial encoding is commonly employed, wherein photons are manipulated in different spatial paths to induce interference. These experiments require intricate setups with numerous components, making them resource-intensive and challenging to scale. In contrast, the international team, comprising scientists from the University of Vienna, Politecnico di Milano, and Université libre de Bruxelles, opted for an approach based on temporal encoding, which manipulates photons in the time domain rather than in spatial paths. To realize this approach, they developed an innovative architecture at the Christian Doppler Laboratory at the University of Vienna utilizing an optical fiber loop. This design enables repeated use of the same optical components, facilitating efficient multi-photon interference with minimal physical resources.
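    For context on why multi-photon interference is computationally interesting: the transition amplitudes of photons through a linear-optical network are matrix permanents, which become classically hard to evaluate as photon numbers grow. A generic textbook sketch in Python, illustrating the mathematics rather than the Vienna loop architecture itself:

```python
# Two-photon interference at a 50/50 beamsplitter: the amplitude for one
# photon to exit each port is the permanent of the network's unitary, and
# it vanishes (the Hong-Ou-Mandel effect). Textbook physics, not the
# paper's specific setup.
import itertools
import numpy as np

def permanent(m: np.ndarray) -> complex:
    """Naive O(n!) matrix permanent, adequate for the few-photon regime."""
    n = m.shape[0]
    return sum(np.prod([m[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

beamsplitter = np.array([[1, 1j],
                         [1j, 1]]) / np.sqrt(2)

p_coincidence = abs(permanent(beamsplitter)) ** 2
print(f"coincidence probability: {p_coincidence:.2f}")  # 0.00
```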
    First author Lorenzo Carosini explains: “In our experiment, we observed quantum interference among up to eight photons, surpassing the scale of most existing experiments. Thanks to the versatility of our approach, the interference pattern can be reconfigured and the size of the experiment can be scaled without changing the optical setup.” The results demonstrate the significant resource efficiency of the implemented architecture compared to traditional spatial-encoding approaches, paving the way for more accessible and scalable quantum technologies.

  • Accelerating the discovery of new materials via the ion-exchange method

    Tohoku University researchers have unveiled a new means of predicting how to synthesize new materials via the ion-exchange method. Based on computer simulations, the method significantly reduces the time and energy required to search for new inorganic materials.
    Details of their research were published in the journal Chemistry of Materials on April 17, 2024.
    In the quest to form new materials that facilitate environmentally friendly and efficient energy technologies, scientists regularly rely on the high temperature reaction method to synthesize inorganic materials. When the raw substances are mixed and heated to very high temperatures, they are split into atoms and then reassemble into new substances. But this approach has some drawbacks. Only materials with the most energetically stable crystal structure can be formed, and it is not possible to synthesize materials that would decompose at high temperatures.
    In contrast, the ion-exchange method forms new materials at relatively low temperatures. Ions in an existing material are exchanged with ions of similar charge from another material, thereby forming new inorganic substances. The low synthesis temperature makes it possible to obtain compounds that are not accessible via the usual high temperature reaction method.
    Despite its potential, however, the lack of a systematic approach to predicting appropriate material combinations for ion exchange has hindered its widespread adoption, necessitating laborious trial-and-error experiments.
    “In our study, we predicted the feasibility of materials suited for ion exchange using computer simulations,” says Issei Suzuki, a senior assistant professor at Tohoku University’s Institute of Multidisciplinary Research for Advanced Materials, and co-author of the paper.
    The simulations involved investigating the potential for ion-exchange reactions between ternary wurtzite-type oxides and halides/nitrates. Specifically, Suzuki and his colleagues performed simulations on 42 combinations of β-MGaO2 precursors (where the monovalent cation M is Na, Li, Cu, or Ag) with halides and nitrates as ion sources.
    The simulation results were divided into three categories: “ion exchange occurs,” “no ion exchange occurs,” and “partial ion exchange occurs (a solid solution is formed).” To confirm these results, the researchers verified the simulations through actual experiments, finding agreement between simulation and experiment in all 42 combinations.
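    As a schematic of how such a three-way call might be keyed to a computed quantity, here is a toy Python sketch using an ion-exchange reaction energy; the thresholds and values are invented placeholders, not the study’s criteria.

```python
# Toy three-way classification keyed on a computed reaction energy.
# Thresholds and sample energies are invented, not the study's values.
def classify(reaction_energy_ev: float, tolerance_ev: float = 0.05) -> str:
    if reaction_energy_ev < -tolerance_ev:
        return "ion exchange occurs"
    if reaction_energy_ev > tolerance_ev:
        return "no ion exchange occurs"
    return "partial ion exchange occurs (solid solution is formed)"

for energy in (-0.30, 0.02, 0.45):
    print(f"dE = {energy:+.2f} eV -> {classify(energy)}")
```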
    Suzuki believes that their advancement will accelerate the development of new materials suitable for improved energy technologies. “Our findings have shown that it is possible to predict whether ion exchange is feasible and to design reactions in advance without experimental trial and error. In the future, we plan to use this method to search for materials with new and attractive properties that will tackle energy problems.”