More stories

  • Magnetic with a pinch of hydrogen

    Magnetic two-dimensional materials consisting of one or a few atomic layers have only recently become known and promise interesting applications, for example for the electronics of the future. So far, however, it has not been possible to control the magnetic states of these materials well enough. A German-American research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and Dresden University of Technology (TUD) is now presenting in the journal Nano Letters an innovative idea that could overcome this shortcoming — by allowing the 2D layer to react with hydrogen.
    2D materials are ultra-thin, in some cases consisting of a single atomic layer. Due to their special properties, this still young class of materials offers exciting prospects for spintronics and data storage. In 2017, experts discovered a new variant — 2D materials that are magnetic. However, it has so far been difficult to switch these systems back and forth between two magnetic states by targeted chemical means — a prerequisite for building new types of electronic components. To overcome this problem, a research team from the HZDR and TUD led by junior research group leader Rico Friedrich set their sights on a special group of 2D materials: so-called non-van der Waals 2D materials, layers obtained from crystals held together by relatively strong chemical bonds.
    Twenty years ago, Konstantin Novoselov and Andre Geim, who would later win the Nobel Prize in Physics, were able to produce a 2D material in a targeted manner for the first time: using adhesive tape, they peeled off a thin layer from a graphite crystal, thereby isolating single-layer carbon, so-called graphene. The simple trick worked because the individual layers of graphite are only loosely bound chemically. Incidentally, this is exactly what makes it possible to draw lines on paper with a pencil.
    “Only in recent years has it been possible to detach individual layers from crystals using liquid-based processes, in which the layers are much more strongly bound than in graphite,” explains Rico Friedrich, head of the “DRESDEN-concept” junior research group AutoMaT. “The resulting 2D materials are much more chemically active than graphene, for example.” The reason: these layers have unsaturated chemical bonds on their surface and therefore a strong tendency to bind with other substances.
    Turn 35 into 4
    Friedrich and his team came up with the following idea: if the reactive surface of these 2D materials were made to react with hydrogen, it should be possible to specifically influence the magnetic properties of the thin layers. However, it was unclear which of the 2D systems were particularly suitable for this. To answer this question, the experts combed through their previously developed database of 35 novel 2D materials and carried out detailed and extensive calculations using density functional theory. The challenge was to ensure the stability of the hydrogen-passivated systems in energetic, dynamic and thermal terms and to determine the correct magnetic state — a task that could only be accomplished with the support of several high-performance computing centers.
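    The screening can be pictured as a simple filter over precomputed results. The sketch below only illustrates that workflow and is not the study's code: the candidate names, thresholds and property values are hypothetical placeholders, and in practice each quantity would come from the density functional theory calculations described above.

```python
# Hypothetical sketch of a stability-and-magnetism screen for hydrogen-passivated
# 2D candidates; all names, numbers and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    hull_distance_eV: float    # energetic stability: distance to the convex hull (eV/atom)
    phonons_stable: bool       # dynamic stability: no imaginary phonon modes
    md_stable: bool            # thermal stability: survives molecular dynamics at 300 K
    magnetic_moment_uB: float  # magnetic moment per formula unit (Bohr magnetons)

candidates = [
    Candidate("CdTiO3-H", 0.02, True, True, 1.0),   # placeholder values
    Candidate("XY2-H",    0.35, True, False, 0.0),  # hypothetical entry that fails the screen
]

def passes_screen(c: Candidate) -> bool:
    return c.hull_distance_eV < 0.1 and c.phonons_stable and c.md_stable

magnetic_hits = [c.formula for c in candidates
                 if passes_screen(c) and c.magnetic_moment_uB > 0.0]
print(magnetic_hits)  # -> ['CdTiO3-H']
```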
    When the hard work was done, four promising 2D materials remained. The group took a closer look at these once again. “In the end, we were able to identify three candidates that could be magnetically activated by hydrogen passivation,” reports Friedrich. A material called cadmium titanate (CdTiO3) proved to be particularly remarkable — it becomes ferromagnetic, i.e. a permanent magnet, through the influence of hydrogen. The three candidates treated with hydrogen should be easy to control magnetically and could therefore be suitable for new types of electronic components. As these layers are extremely thin, they could be easily integrated into flat device components — an important aspect for potential applications.
    Experiments are already underway
    “The next step is to confirm our theoretical findings experimentally,” says Rico Friedrich. “And several research teams are already trying to do this, for example at the University of Kassel and the Leibniz Institute for Solid State and Materials Research in Dresden.” Research on 2D materials is also continuing at HZDR and TUD: among other things, Friedrich and his team are working on new types of 2D materials that could be relevant for energy conversion and storage in the long term. One focus is on the possible splitting of water into oxygen and hydrogen. The green hydrogen obtained this way could then be used, for example, as an energy storage medium for times when too little solar and wind power is available.

  • Despite AI advancements, human oversight remains essential

    State-of-the-art artificial intelligence systems known as large language models (LLMs) are poor medical coders, according to researchers at the Icahn School of Medicine at Mount Sinai. Their study, published in the April 19 online issue of NEJM AI, emphasizes the necessity for refinement and validation of these technologies before considering clinical implementation.
    The study extracted a list of more than 27,000 unique diagnosis and procedure codes from 12 months of routine care in the Mount Sinai Health System, while excluding identifiable patient data. Using the description for each code, the researchers prompted models from OpenAI, Google, and Meta to output the most accurate medical codes. The generated codes were compared with the original codes and errors were analyzed for any patterns.
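    In outline, the benchmark boils down to a loop that feeds each code's official description to a model and scores exact matches against the true code. The sketch below is a hedged reconstruction of that idea rather than the authors' code; query_model is a placeholder for whichever LLM client is used, and the two ICD-10-CM examples are illustrative, not study data.

```python
# Hedged sketch of an exact-match benchmark for medical-code generation; not the
# study's implementation. query_model is a stub to be replaced by a real LLM client.
def query_model(description: str) -> str:
    """Placeholder: return the model's predicted billing code for a description."""
    raise NotImplementedError("plug in an LLM client here")

def exact_match_rate(pairs) -> float:
    """pairs: iterable of (true_code, official_description)."""
    preds = [(true, query_model(desc).strip().upper()) for true, desc in pairs]
    return sum(pred == true for true, pred in preds) / len(preds)

# Illustrative usage (hypothetical examples, not data from the study):
# rate = exact_match_rate([
#     ("E11.9", "Type 2 diabetes mellitus without complications"),
#     ("I10",   "Essential (primary) hypertension"),
# ])
```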
    The investigators reported that all of the studied large language models, including GPT-4, GPT-3.5, Gemini-pro, and Llama-2-70b, showed limited accuracy (below 50 percent) in reproducing the original medical codes, highlighting a significant gap in their usefulness for medical coding. GPT-4 demonstrated the best performance, with the highest exact match rates for ICD-9-CM (45.9 percent), ICD-10-CM (33.9 percent), and CPT codes (49.8 percent).
    GPT-4 also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning. For example, when given the ICD-9-CM description “nodular prostate without urinary obstruction,” GPT-4 generated a code for “nodular prostate,” showcasing its comparatively nuanced understanding of medical terminology. However, even considering these technically correct codes, an unacceptably large number of errors remained.
    The next best-performing model, GPT-3.5, had the greatest tendency toward vagueness: it had the highest proportion of incorrectly generated codes that conveyed the correct meaning but were more general than the precise original codes. For example, when provided with the ICD-9-CM description “unspecified adverse effect of anesthesia,” GPT-3.5 generated a code for “other specified adverse effects, not elsewhere classified.”
    “Our findings underscore the critical need for rigorous evaluation and refinement before deploying AI technologies in sensitive operational areas like medical coding,” says study corresponding author Ali Soroush, MD, MS, Assistant Professor of Data-Driven and Digital Medicine (D3M), and Medicine (Gastroenterology), at Icahn Mount Sinai. “While AI holds great potential, it must be approached with caution and ongoing development to ensure its reliability and efficacy in health care.”
    One potential application for these models in the health care industry, say the investigators, is automating the assignment of medical codes for reimbursement and research purposes based on clinical text.

    “Previous studies indicate that newer large language models struggle with numerical tasks. However, the extent of their accuracy in assigning medical codes from clinical text had not been thoroughly investigated across different models,” says co-senior author Eyal Klang, MD, Director of the D3M’s Generative AI Research Program. “Therefore, our aim was to assess whether these models could effectively perform the fundamental task of matching a medical code to its corresponding official text description.”
    The study authors proposed that integrating LLMs with expert knowledge could automate medical code extraction, potentially enhancing billing accuracy and reducing administrative costs in health care.
    “This study sheds light on the current capabilities and challenges of AI in health care, emphasizing the need for careful consideration and additional refinement prior to widespread adoption,” says co-senior author Girish Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of D3M.
    The researchers caution that the study’s artificial task may not fully represent real-world scenarios where LLM performance could be worse.
    Next, the research team plans to develop tailored LLM tools for accurate medical data extraction and billing code assignment, aiming to improve quality and efficiency in health care operations.
    The study is titled “Generative Large Language Models are Poor Medical Coders: A Benchmarking Analysis of Medical Code Querying.”
    The remaining authors on the paper, all with Icahn Mount Sinai except where indicated, are: Benjamin S. Glicksberg, PhD; Eyal Zimlichman, MD (Sheba Medical Center and Tel Aviv University, Israel); Yiftach Barash (Tel Aviv University and Sheba Medical Center, Israel); Robert Freeman, RN, MSN, NE-BC; and Alexander W. Charney, MD, PhD.
    This research was supported by the AGA Research Foundation’s 2023 AGA-Amgen Fellowship-to-Faculty Transition Award AGA2023-32-06 and an NIH UL1TR004419 award.
    The researchers affirm that the study was conducted without the use of any Protected Health Information (“PHI”).

  • Compact quantum light processing

    An international collaboration of researchers, led by Philip Walther at the University of Vienna, has achieved a significant breakthrough in quantum technology with the successful demonstration of quantum interference among several single photons using a novel resource-efficient platform. The work, published in the journal Science Advances, represents a notable advancement in optical quantum computing that paves the way for more scalable quantum technologies.
    Interference among photons, a fundamental phenomenon in quantum optics, serves as a cornerstone of optical quantum computing. It involves harnessing the properties of light, such as its wave-particle duality, to induce interference patterns, enabling the encoding and processing of quantum information.
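    For intuition, the textbook example of such interference is two indistinguishable photons meeting at a balanced beam splitter: they bunch into the same output and never leave separately. The minimal sketch below (an illustration, not the Vienna team's setup or code) reproduces this with the standard matrix-permanent rule for photon statistics in linear optics.

```python
# Minimal sketch: two-photon interference at a 50:50 beam splitter, computed with
# the permanent rule P = |Perm(U_sub)|^2 / (prod(inputs!) * prod(outputs!)).
import math
import numpy as np
from itertools import permutations

def permanent(M):
    """Brute-force permanent of a small square matrix."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def output_probability(U, inputs, outputs):
    """Probability of measuring photon counts `outputs` given input counts `inputs`."""
    rows = [m for m, k in enumerate(outputs) for _ in range(k)]
    cols = [m for m, k in enumerate(inputs) for _ in range(k)]
    sub = U[np.ix_(rows, cols)]
    norm = np.prod([math.factorial(k) for k in list(inputs) + list(outputs)])
    return abs(permanent(sub)) ** 2 / norm

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50:50 beam splitter on two modes
for out in [(2, 0), (1, 1), (0, 2)]:
    print(out, round(output_probability(U, (1, 1), out), 3))
# -> (2, 0) 0.5, (1, 1) 0.0, (0, 2) 0.5: the two photons always exit together
```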
    In traditional multi-photon experiments, spatial encoding is commonly employed, wherein photons are manipulated in different spatial paths to induce interference. These experiments require intricate setups with numerous components, making them resource-intensive and challenging to scale. In contrast, the international team, comprising scientists from the University of Vienna, Politecnico di Milano, and Université libre de Bruxelles, opted for an approach based on temporal encoding. This technique manipulates photons in the time domain rather than in their spatial paths. To realize this approach, they developed an innovative architecture at the Christian Doppler Laboratory at the University of Vienna, utilizing an optical fiber loop. This design enables repeated use of the same optical components, facilitating efficient multi-photon interference with minimal physical resources.
    First author Lorenzo Carosini explains: “In our experiment, we observed quantum interference among up to eight photons, surpassing the scale of most existing experiments. Thanks to the versatility of our approach, the interference pattern can be reconfigured and the size of the experiment can be scaled without changing the optical setup.” The results demonstrate the significant resource efficiency of the implemented architecture compared to traditional spatial-encoding approaches, paving the way for more accessible and scalable quantum technologies.

  • Accelerating the discovery of new materials via the ion-exchange method

    Tohoku University researchers have unveiled a new means of predicting how to synthesize new materials via the ion-exchange method. Based on computer simulations, the method significantly reduces the time and energy required to search for new inorganic materials.
    Details of their research were published in the journal Chemistry of Materials on April 17, 2024.
    In the quest to form new materials that facilitate environmentally friendly and efficient energy technologies, scientists regularly rely on the high temperature reaction method to synthesize inorganic materials. When the raw substances are mixed and heated to very high temperatures, they are split into atoms and then reassemble into new substances. But this approach has some drawbacks. Only materials with the most energetically stable crystal structure can be formed, and it is not possible to synthesize materials that would decompose at high temperatures.
    In contrast, the ion-exchange method forms new materials at relatively low temperatures. Ions from existing materials are exchanged with ions of similar charge from other materials, thereby forming new inorganic substances. The low synthesis temperature makes it possible to obtain compounds that would not be available by the usual high temperature reaction method.
    Despite its potential, however, the lack of a systematic approach to predicting appropriate material combinations for ion exchange has hindered its widespread adoption, necessitating laborious trial-and-error experiments.
    “In our study, we predicted the feasibility of materials suited for ion exchange using computer simulations,” says Issei Suzuki, a senior assistant professor at Tohoku University’s Institute of Multidisciplinary Research for Advanced Materials, and co-author of the paper.
    The simulations involved investigating the potential for ion exchange reactions between ternary wurtzite-type oxides and halides/nitrates. Specifically, Suzuki and his colleagues performed simulations on 42 combinations of β-MᴵGaO2 (Mᴵ = Na, Li, Cu, Ag) precursors and halides and nitrates as ion sources.
    The simulation results were divided into three categories: “ion exchange occurs,” “no ion exchange occurs,” and “partial ion exchange occurs (a solid solution is formed).” The researchers then verified the predictions through actual experiments, finding agreement between simulation and experiment for all 42 combinations.
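    Conceptually, the prediction step amounts to enumerating precursor/ion-source pairs and binning each one by a computed reaction energy. The sketch below illustrates only that bookkeeping; it is not the authors' workflow, and the decision rule, threshold and placeholder energies are assumptions made for illustration (real values would come from the simulations described above).

```python
# Illustrative classification of ion-exchange feasibility; the energies and the
# threshold are placeholders, not values from the study.
precursors = ["NaGaO2", "LiGaO2", "CuGaO2", "AgGaO2"]   # the beta-MGaO2 family
ion_sources = ["LiCl", "NaCl", "AgNO3", "CuCl"]          # a few halides/nitrates

def classify(reaction_energy_eV: float) -> str:
    """Toy decision rule based on the sign and size of the computed reaction energy."""
    if reaction_energy_eV < -0.05:
        return "ion exchange occurs"
    if reaction_energy_eV > 0.05:
        return "no ion exchange occurs"
    return "partial ion exchange occurs (solid solution is formed)"

# In practice each energy would come from a simulation; a dummy value is used here.
for precursor in precursors:
    for source in ion_sources:
        energy = 0.0  # placeholder reaction energy in eV
        print(f"{precursor} + {source}: {classify(energy)}")
```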
    Suzuki believes that their advancement will accelerate the development of new materials suitable for improved energy technologies. “Our findings have shown that it is possible to predict whether ion exchange is feasible and to design reactions in advance without experimental trial and error. In the future, we plan to use this method to search for materials with new and attractive properties that will tackle energy problems.”

  • Octopus inspires new suction mechanism for robots

    A new robotic suction cup that can grasp rough, curved and heavy stone has been developed by scientists at the University of Bristol.
    The team, based at Bristol Robotics Laboratory, studied the structures of octopus biological suckers, which have superb adaptive suction abilities enabling them to anchor to rock.
    In their findings, published in the journal PNAS today, the researchers show how they were able to create a multi-layer soft structure and an artificial fluidic system to mimic the musculature and mucus structures of biological suckers.
    Suction is a highly evolved biological adhesion strategy for soft-body organisms to achieve strong grasping on various objects. Biological suckers can adaptively attach to dry complex surfaces such as rocks and shells, which are extremely challenging for current artificial suction cups. Although the adaptive suction of biological suckers is believed to be the result of their soft body’s mechanical deformation, some studies imply that in-sucker mucus secretion may be another critical factor in helping attach to complex surfaces, thanks to its high viscosity.
    Lead author Tianqi Yue explained: “The most important development is that we successfully demonstrated the effectiveness of combining mechanical conformation (the use of soft materials to conform to surface shape) with a liquid seal (the spread of water onto the contacting surface) for improving suction adaptability on complex surfaces. This may also be the secret behind biological organisms’ ability to achieve adaptive suction.”
    Their multi-scale suction mechanism is an organic combination of mechanical conformation and a regulated water seal. Multi-layer soft materials first generate a rough mechanical conformation to the substrate, reducing leaking apertures to just micrometres. The remaining micron-sized apertures are then sealed by regulated water secretion from an artificial fluidic system based on the physical model, so that the suction cup achieves long-lasting suction on diverse surfaces with minimal overflow.
    Tianqi added: “We believe the presented multi-scale adaptive suction mechanism is a powerful new adaptive suction strategy which may be instrumental in the development of versatile soft adhesion.

    “Current industrial solutions use always-on air pumps to actively generate the suction; however, these are noisy and waste energy.
    “It is well known that many natural organisms with suckers, including octopuses, some fishes such as suckerfish and remoras, leeches, gastropods and echinoderms, can maintain their superb adaptive suction on complex surfaces with no need for a pump, by exploiting their soft body structures.”
    The findings have great potential for industrial applications, such as providing a next-generation robotic gripper for grasping a variety of irregular objects.
    The team now plan to build a more intelligent suction cup by embedding sensors into it to regulate its behaviour.

  • Teaching a computer to type like a human

    An entirely new predictive typing model can simulate different kinds of users, helping figure out ways to optimize how we use our phones. Developed by researchers at Aalto University, the model captures differences between typing with one hand or two, and between younger and older users.
    ‘Typing on a phone requires manual dexterity and visual perception: we press buttons, proofread text, and correct mistakes. We also use our working memory. Automatic text correction functions can help some people, while for others they can make typing harder,’ says Professor Antti Oulasvirta of Aalto University.
    The researchers created a machine-learning model that uses its virtual ‘eyes and fingers’ and working memory to type out a sentence, just like humans do. That means it also makes similar mistakes and has to correct them.
    ‘We created a simulated user with a human-like visual and motor system. Then we trained it millions of times in a keyboard simulator. Eventually, it learned typing skills that can also be used to type in various situations outside the simulator,’ explains Oulasvirta.
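    The following toy sketch is meant only to make the idea of a simulated typist concrete; it is not Aalto's model, and the slip and proofreading probabilities are invented parameters. A simple agent with noisy "fingers" types a target sentence, occasionally notices a mistake, and backspaces to correct it.

```python
# Toy simulated typist (illustrative only): noisy keypresses plus probabilistic
# proofreading and correction. Parameters are invented, not taken from the Aalto model.
import random

def simulate_typist(target: str, p_slip=0.05, p_proofread=0.7, seed=0):
    rng = random.Random(seed)
    typed, keystrokes, i = [], 0, 0
    while i < len(target):
        intended = target[i]
        # motor noise: occasionally hit the wrong key (here, a random character)
        actual = intended if rng.random() > p_slip else rng.choice("abcdefghijklmnopqrstuvwxyz ")
        typed.append(actual)
        keystrokes += 1
        # visual proofreading: notice and fix the error with some probability
        if actual != intended and rng.random() < p_proofread:
            typed.pop()
            keystrokes += 1            # the backspace also costs a keystroke
            continue                   # retry the same character
        i += 1
    return "".join(typed), keystrokes

text, presses = simulate_typist("hello world")
print(text, presses)   # typed output (possibly with leftover typos) and keystroke count
```

    Varying the two probabilities changes how error-prone and how attentive the simulated user is, which loosely corresponds to how such a model can represent different user groups.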
    The predictive typing model was developed in collaboration with Google. New designs for phone keyboards are normally tested with real users, which is costly and time-consuming. The project’s goal is to complement those tests so keyboards can be evaluated and optimized more quickly and easily.
    For Oulasvirta, this is part of a larger effort to improve user interfaces overall and understand how humans behave in task-oriented situations. He leads a research group at Aalto that uses computational models of human behaviour to probe these questions.
    ‘We can train computer models so that we don’t need observation of lots of people to make predictions. User interfaces are everywhere today — fundamentally, this work aims to create a more functional society and smoother everyday life,’ he says.
    The researchers will present their findings at the CHI Conference in May.

  • When thoughts flow in one direction

    Contrary to previous assumptions, nerve cells in the human neocortex are wired differently than in mice. Those are the findings of a new study conducted by Charité — Universitätsmedizin Berlin and published in the journal Science. The study found that human neurons communicate in one direction, while in mice, signals tend to flow in loops. This increases the efficiency and capacity of the human brain to process information. These discoveries could further the development of artificial neural networks.
    The neocortex, a critical structure for human intelligence, is less than five millimeters thick. There, in the outermost layer of the brain, 20 billion neurons process countless sensory perceptions, plan actions, and form the basis of our consciousness. How do these neurons process all this complex information? That largely depends on how they are “wired” to each other.
    More complex neocortex — different information processing
    “Our previous understanding of neural architecture in the neocortex is based primarily on findings from animal models such as mice,” explains Prof. Jörg Geiger, Director of the Institute for Neurophysiology at Charité. “In those models, the neighboring neurons frequently communicate with each other as if they are in dialogue. One neuron signals another, and then that one sends a signal back. That means the information often flows in recurrent loops.”
    The human neocortex is much thicker and more complex than that of a mouse. Nonetheless, researchers had previously assumed — in part due to lack of data — that it follows the same basic principles of connectivity. A team of Charité researchers led by Geiger has now used exceptionally rare tissue samples and state-of-the-art technology to demonstrate that this is not the case.
    A clever method of listening in on neuronal communication
    For the study, the researchers examined brain tissue from 23 people who had undergone neurosurgery at Charité to treat drug-resistant epilepsy. During surgery, it was medically necessary to remove brain tissue in order to gain access to the diseased structures beneath it. The patients had consented to the use of this access tissue for research purposes.

    To be able to observe the flows of signals between neighboring neurons in the outermost layer of the human neocortex, the team developed an improved version of what is known as the “multipatch” technique. This allowed the researchers to listen in on the communications taking place between as many as ten neurons at once (for details, see “About the method”). As a result, they were able to take the necessary number of measurements to map the network in the short time before the cells ceased their activity outside the body. In all, they analyzed the communication channels among nearly 1,170 neurons with about 7,200 possible connections.
    Feed-forward instead of in cycles
    They found that only a small fraction of the neurons engaged in reciprocal dialogue with each other. “In humans, the information tends to flow in one direction instead. It seldom returns to the starting point either directly or via cycles,” explains Dr. Yangfan Peng, first author of the publication. He worked on the study at the Institute for Neurophysiology and is now based at the Department of Neurology and the Neuroscience Research Center at Charité. The team used a computer simulation that they devised according to the same principles underlying the human network architecture to demonstrate that this forward-directed signal flow has benefits in terms of processing data.
    The researchers gave the artificial neural network a typical machine learning task: recognizing the correct numbers from audio recordings of spoken digits. The network model that mimicked the human structures achieved more correct responses to this speech recognition task than the one modeled on mice. It was also more efficient, with the same performance requiring the equivalent of 380 neurons in the mouse model, but only 150 in the human one.
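    As a rough illustration of the wiring difference described above (not the study's analysis or its network model), the sketch below builds a random recurrent wiring and a strictly feed-forward wiring for a toy population of neurons and compares the fraction of reciprocally connected pairs; the population size and connection probability are arbitrary assumptions.

```python
# Toy comparison of recurrent vs feed-forward wiring (illustrative only).
import numpy as np

def reciprocal_fraction(adj: np.ndarray) -> float:
    """Among connected neuron pairs, the fraction connected in both directions."""
    connected = np.triu(adj | adj.T, k=1)    # pair linked in at least one direction
    reciprocal = np.triu(adj & adj.T, k=1)   # pair linked in both directions
    return reciprocal.sum() / connected.sum()

rng = np.random.default_rng(0)
n_neurons, p_connect = 150, 0.05
recurrent = rng.random((n_neurons, n_neurons)) < p_connect      # direction-agnostic wiring
np.fill_diagonal(recurrent, False)
feed_forward = np.triu(rng.random((n_neurons, n_neurons)) < p_connect, k=1)  # "forward" links only

print("recurrent wiring:   ", round(reciprocal_fraction(recurrent), 3))
print("feed-forward wiring:", round(reciprocal_fraction(feed_forward), 3))   # 0 by construction
```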
    An economic role model for AI?
    “The directed network architecture we see in humans is more powerful and conserves resources because more independent neurons can handle different tasks simultaneously,” Peng explains. “This means that the local network can store more information. It isn’t clear yet whether our findings within the outermost layer of the temporal cortex extend to other cortical regions, or how well they might explain the unique cognitive abilities of humans.”
    In the past, AI developers have looked to biological models for inspiration in designing artificial neural networks, but have also optimized their algorithms independently of the biological models. “Many artificial neural networks already use some form of this forward-directed connectivity because it delivers better results for some tasks,” Geiger says. “It’s fascinating to see that the human brain also shows similar network principles. These insights into cost-efficient information processing in the human neocortex could provide further inspiration for refining AI networks.”

  • Skyrmions move at record speeds: A step towards the computing of the future

    An international research team led by scientists from the CNRS [1] has discovered that the magnetic nanobubbles [2] known as skyrmions can be moved by electrical currents, attaining record speeds of up to 900 m/s.
    Anticipated as future bits in computer memory, these nanobubbles offer enhanced avenues for information processing in electronic devices. Their tiny size [3] provides great computing and information storage capacity, as well as low energy consumption.
    Until now, these nanobubbles moved no faster than 100 m/s, which is too slow for computing applications. However, thanks to the use of an antiferromagnetic material [4] as the medium, the scientists succeeded in making the skyrmions move 10 times faster than previously observed.
    These results, which were published in Science on 19 March, offer new prospects for developing higher-performance and less energy-intensive computing devices.
    This study is part of the SPIN national research programme [5] launched on 29 January, which supports innovative research in spintronics, with a view to helping develop a more agile and enduring digital world.
    Notes:
    1 — The French laboratories involved are SPINTEC (CEA/CNRS/Université Grenoble Alpes), the Institut Néel (CNRS), and the Charles Coulomb Laboratory (CNRS/Université de Montpellier).

    2 — A skyrmion consists of elementary nanomagnets (“spins”) that wind to form a highly stable spiral structure, like a tight knot.
    3 — The size of a skyrmion can reach a few nanometres, which is to say approximately a dozen atoms.
    4 — Antiferromagnetic stacks consist of two nano-sized ferromagnetic layers (such as cobalt) separated by a thin non-magnetic layer, with opposite magnetisation.
    5 — The SPIN priority research programme and equipment (PEPR) is an exploratory programme in connection with the France 2030 investment plan.