More stories

  • New weapon against dementia

    Nearly 100,000 Danes over the age of 65 and more than 55 million people around the world live with dementia-related disorders such as Alzheimer’s and Parkinson’s. These diseases arise when some of the smallest building blocks in the body clump together and destroy vital functions. Why this occurs and how to treat it remains a scientific mystery. Until now, studying the phenomenon has been very challenging, limited by the absence of the right tools.
    Now, researchers from the Hatzakis lab at the University of Copenhagen’s Department of Chemistry have invented a machine learning algorithm that can track clumping under the microscope in real-time. The algorithm can automatically map and track the important characteristics of the clumped-up building blocks that cause Alzheimer’s and other neurodegenerative disorders. Until now, doing so has been impossible.
    “In just minutes, our algorithm solves a challenge that would take researchers several weeks. Making it easier to study microscopic images of clumping proteins will hopefully add to our knowledge and, in the long term, lead to new therapies for neurodegenerative brain disorders,” says Jacob Kæstel-Hansen, PhD, from the Department of Chemistry, who led the research team behind the algorithm alongside Nikos Hatzakis.
    Microscopic proteins detected in no time
    The coming together and exchange of compounds and signals among proteins and other molecules occurs billions of times within our cells, in natural processes that allow our bodies to function. But when errors occur, proteins can clump together in ways that interfere with their ability to work as intended. Among other things, this can lead to neurodegenerative disorders in the brain and cancer.
    The researchers’ machine learning algorithm can spot protein clumps down to a billionth of a meter in microscopy images. At the same time, the algorithm can count and then group clumps according to their shapes and sizes, all while tracking their development over time. The appearance of clumps can have a major impact on their function and how they behave in the body, for better or worse.
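    As a rough illustration of the kind of image analysis involved (a minimal sketch, not the Hatzakis lab’s published tool), the snippet below segments bright aggregates in a single microscopy frame and computes simple shape descriptors with scikit-image; the actual algorithm additionally tracks each aggregate across frames over time.
    ```python
    # Minimal sketch (not the published tool): detect bright protein aggregates in one
    # fluorescence-microscopy frame and describe their shapes.
    import numpy as np
    from skimage import filters, measure, morphology

    def describe_aggregates(frame, min_area_px=5):
        """Segment bright spots and return per-aggregate shape descriptors."""
        threshold = filters.threshold_otsu(frame)            # global intensity threshold
        mask = morphology.remove_small_objects(frame > threshold, min_area_px)
        labels = measure.label(mask)                          # connected components = aggregates
        return [
            {
                "area_px": p.area,
                "eccentricity": p.eccentricity,               # ~0 roundish, ~1 filament-like
                "major_axis_px": p.major_axis_length,
                "mean_intensity": p.mean_intensity,
            }
            for p in measure.regionprops(labels, intensity_image=frame)
        ]

    # Example: a synthetic frame with two bright "clumps" on a noisy background.
    rng = np.random.default_rng(0)
    frame = rng.normal(100, 5, (256, 256))
    frame[50:60, 50:60] += 200        # roundish aggregate
    frame[150:153, 100:140] += 200    # filament-like aggregate
    for aggregate in describe_aggregates(frame):
        print(aggregate)
    ```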
    “When studying clumps through a microscope, one quickly sees, for example, that some are rounder, while others have filamentous structures. And, their exact shape can vary depending on the disorder they trigger. But to sit and count them manually many thousands of times takes a very long time, which could be better spent on other things,” says Steen Bender from the Department of Chemistry, the article’s first author.

    In the future, the algorithm will make it much easier to learn more about why clumps form so that we can develop new drugs and therapies to combat these disorders.
    “The fundamental understanding of these clumps depends on us being able to see, track and quantify them, and describe what they look like over time. No other methods can currently do so automatically and as effectively,” he says.
    Tools are freely available to everyone
    The Department of Chemistry researchers are now in full swing, using the tool to conduct experiments with insulin molecules. As insulin molecules clump, their ability to regulate our blood sugar weakens.
    “We see this undesirable clumping in insulin molecules as well. Our new tool can let us see how these clumps are affected by whatever compounds we add. In this way, the model can help us work towards understanding how to potentially stop or transform them into less dangerous or more stable clumps,” explains Jacob Kæstel-Hansen.
    Thus, the researchers see great potential in being able to use the tool to develop new drugs once the microscopic building blocks have been clearly identified. The researchers hope that their work will kickstart the gathering of more comprehensive knowledge about the shapes and functions of proteins and molecules.
    “As other researchers around the world begin to deploy the tool, it will help create a large library of molecule and protein structures related to various disorders and biology in general. This will allow us to better understand diseases and try to stop them,” concludes Nikos Hatzakis from the Department of Chemistry.
    The algorithm is freely available on the internet as open source and can be used by scientific researchers and anyone else working to understand the clumping of proteins and other molecules.
    The research was conducted by: Steen W.B. Bender, Marcus W. Dreisler, Min Zhang, Jacob Kæstel-Hansen and Nikos S. Hatzakis from the Department of Chemistry with support from the Novo Nordisk Foundation Center for Optimised Oligo Escape and Control of Disease. More

  • Flexible film senses nearby movements — featured in blink-tracking glasses

    I’m not touching you! When another person’s finger hovers over your skin, you may get the sense that they’re touching you, feeling not necessarily contact, but their proximity. Similarly, researchers reporting in ACS Applied Materials & Interfaces have designed a soft, flexible film that senses the presence of nearby objects without physically touching them. The study features the new sensor technology to detect eyelash proximity in blink-tracking glasses.
    Noncontact sensors can identify or measure an object without directly touching it. Examples of these devices include infrared thermometers and vehicle proximity notification systems. One type of noncontact sensor relies on static electricity to detect closeness and small motions, and has the potential to enhance smart devices, such as allowing phone screens to recognize more finger gestures. So far, however, they’ve been limited in what types of objects get detected, how long they stay charged and how hard they are to fabricate. So, Xunlin Qiu, Yiming Wang, Fuzhen Xuan and coworkers wanted to create a flexible static electricity-based sensor that overcame these problems.
    The researchers began by fabricating a simple three-part system: fluorinated ethylene propylene (FEP) for the top sensing layer, with an electrically conductive film and flexible plastic base for the middle and bottom layers, respectively. FEP is an electret, a type of material that’s electrically charged and produces an external electrostatic field, similar to the way a magnet produces a magnetic field. Then they electrically charged the FEP-based sensor, making it ready for use.
    As objects approached the FEP surface, their inherent static charge caused an electrical current to flow in the sensor, thereby “feeling” the object without physical contact. The resulting clear and flexible sensor detected objects — made of glass, rubber, aluminum and paper — that were nearly touching it but not quite, from 2 to 20 millimeters (less than an inch) away. The sensor held its charge for over 3,000 different approach-withdraw cycles over almost two hours.
    In a demonstration of the new sensing film, the researchers attached it to the inner side of an eyeglass lens. When worn by a person, the glasses detected the approach of the wearer’s eyelashes and identified when the wearer blinked Morse code for “E C U S T,” the abbreviation for the researchers’ institution. In the future, the researchers say, their noncontact sensors could help people who are unable to speak or use sign language to communicate, or even detect drowsiness while driving.
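    To make the blink-to-text step concrete, the small decoder below is an illustrative assumption about how such a signal could be read out (it is not code from the study): short blinks become dots, long blinks become dashes, and each group of blinks is looked up as a Morse letter.
    ```python
    # Illustrative sketch only: turn detected blink durations into Morse-code letters.
    MORSE = {".": "E", "-.-.": "C", "..-": "U", "...": "S", "-": "T"}  # just the letters in "ECUST"

    def decode_blinks(blink_durations_s, letter_ends, dash_threshold_s=0.4):
        """letter_ends lists the index just after the last blink of each letter."""
        symbols = ["-" if d >= dash_threshold_s else "." for d in blink_durations_s]
        letters, start = [], 0
        for end in letter_ends:
            letters.append(MORSE.get("".join(symbols[start:end]), "?"))
            start = end
        return "".join(letters)

    # "E C U S T": E = ".", C = "-.-.", U = "..-", S = "...", T = "-"
    durations = [0.1,  0.6, 0.1, 0.6, 0.1,  0.1, 0.1, 0.6,  0.1, 0.1, 0.1,  0.6]
    print(decode_blinks(durations, letter_ends=[1, 5, 8, 11, 12]))  # prints ECUST
    ```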
    The authors acknowledge funding from Natural Science Foundation of China Grants, Shanghai Pilot Program for Basic Research, the “Chenguang Program” supported by the Shanghai Education Development Foundation and Shanghai Municipal Education Commission, National Key Research and Development Program of China, Natural Science Foundation of Shanghai, and the Open Project of State Key Laboratory of Chemical Engineering. More

  • New crystal production method could enhance quantum computers and electronics

    In a study published in Nature Materials, scientists from the University of California, Irvine describe a new method to make very thin crystals of the element bismuth — a process that may help make cheap, flexible electronics an everyday reality.
    “Bismuth has fascinated scientists for over a hundred years due to its low melting point and unique electronic properties,” said Javier Sanchez-Yamagishi, assistant professor of physics & astronomy at UC Irvine and a co-author of the study. “We developed a new method to make very thin crystals of materials such as bismuth, and in the process reveal hidden electronic behaviors of the metal’s surfaces.”
    The bismuth sheets the team made are only a few nanometers thick. Sanchez-Yamagishi explained how theorists have predicted that bismuth contains special electronic states allowing it to become magnetic when electricity flows through it — something essential for quantum electronic devices based on the magnetic spin of electrons.
    One of the hidden behaviors observed by the team is so-called quantum oscillations originating from the surfaces of the crystals. “Quantum oscillations arise from the motion of an electron in a magnetic field,” said Laisi Chen, a Ph.D. candidate in physics & astronomy at UC Irvine and one of the lead authors of the paper. “If the electron can complete a full orbit around a magnetic field, it can exhibit effects that are important for the performance of electronics. Quantum oscillations were first discovered in bismuth in the 1930s, but have never been seen in nanometer-thin bismuth crystals.”
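    As general background (a standard textbook relation rather than a result of this paper), the frequency of such quantum oscillations measures the geometry of the Fermi surface: the signal is periodic in the inverse magnetic field, with a frequency set by the extremal cross-sectional area of the Fermi surface perpendicular to the field.
    ```latex
    % Onsager relation (textbook background, not taken from the UC Irvine study):
    % A_F is the extremal Fermi-surface cross-section perpendicular to the field B.
    F = \frac{\hbar}{2\pi e}\, A_F ,
    \qquad
    \Delta\!\left(\frac{1}{B}\right) = \frac{1}{F}
    ```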
    Amy Wu, a Ph.D. candidate in physics in Sanchez-Yamagishi’s lab, likened the team’s new method to a tortilla press. To make the ultra-thin sheets of bismuth, Wu explained, they had to squish bismuth between two hot plates. To make the sheets as flat as they are, they had to use molding plates that are perfectly smooth at the atomic level, meaning there are no microscopic divots or other imperfections on the surface. “We then made a kind of quesadilla or panini where the bismuth is the cheesy filling and the tortillas are the atomically flat surfaces,” said Wu.
    “There was this nervous moment where we had spent over a year making these beautiful thin crystals, but we had no idea whether their electrical properties would be something extraordinary,” said Sanchez-Yamagishi. “But when we cooled down the device in our lab, we were amazed to observe quantum oscillations, which have not been previously seen in thin bismuth films.”
    “Compression is a very common manufacturing technique used for making common household materials such as aluminum foil, but is not commonly used for making electronic materials like those in your computers,” Sanchez-Yamagishi added. “We believe our method will generalize to other materials, such as tin, selenium, tellurium and related alloys with low melting points, and it could be interesting to explore for future flexible electronic circuits.”
    Next, the team wants to explore other ways in which compression and injection molding methods can be used to make the next computer chips for phones or tablets.
    “Our new team members bring exciting ideas to this project, and we’re working on new techniques to gain further control over the shape and thickness of the grown bismuth crystals,” said Chen. “This will simplify how we fabricate devices, and take it one step closer to mass production.”
    The research team included collaborators from UC Irvine, Los Alamos National Laboratory and the National Institute for Materials Science in Japan. The research was primarily funded by the Air Force Office of Scientific Research, with partial support coming from the UC Irvine Center for Complex and Active Materials Seed Program, a Materials Research Science and Engineering Center under the National Science Foundation. More

  • Batteries: Modeling tomorrow’s materials today

    Research into new battery materials is aimed at optimizing their performance and lifetime and at reducing costs. Work is also underway to reduce the consumption of rare elements, such as lithium and cobalt, as well as toxic constituents. Sodium-ion batteries are considered very promising in this respect. They are based on principles similar to those of lithium-ion batteries, but can be produced from raw materials that are widely accessible in Europe. And they are suitable for both stationary and mobile applications. “Layered oxides, such as sodium-nickel-manganese oxides, are highly promising cathode materials,” says Dr. Simon Daubner, Group Leader at the Institute for Applied Materials — Microstructure Modelling and Simulation (IAM-MMS) of KIT and corresponding author of the study. Within the POLiS (Post Lithium Storage) Cluster of Excellence, he investigates sodium-ion technology.
    Fast Charging Creates Mechanical Stress
    However, cathode materials of this type have a problem. Sodium-nickel-manganese oxides change their crystal structure depending on how much sodium is stored. If the material is charged slowly, everything proceeds in a well-ordered way. “Sodium leaves the material layer by layer, just like cars leaving a car park storey by storey,” Daubner explains. “But when charging is quick, sodium is extracted from all sides.” This results in mechanical stress that may damage the material permanently.
    Researchers from the Institute of Nanotechnology (INT) and IAM-MMS of KIT, together with scientists from Ulm University and the Center for Solar Energy and Hydrogen Research Baden-Württemberg (ZSW), recently carried out simulations to clarify the situation. They report in npj Computational Materials, a journal of the Nature portfolio.
    Experiments Confirm Simulation Results
    “Computer models can describe various length scales, from the arrangement of atoms in electrode materials to their microstructure to the cell as the functional unit of any battery,” Daubner says. To study the layered oxide NaxNi1/3Mn2/3O2, microstructure models were combined with slow charge and discharge experiments. The material was found to exhibit several degradation mechanisms causing a loss of capacity. For this reason, it is not yet suited for commercial applications. A change in the crystal structure results in an elastic deformation. The crystal shrinks, which may cause cracking and capacity reduction. INT and IAM-MMS simulations show that this mechanical influence decisively determines the time needed for charging the material. Experimental studies at ZSW confirm these results.
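    The sketch below is a deliberately simplified single-particle picture, an illustrative assumption rather than the KIT/Ulm/ZSW model (which also couples phase changes and elastic stress): Fickian sodium diffusion across a plate-like cathode particle while a constant current extracts sodium at the surface. It shows why faster extraction steepens the gradient between surface and interior, the starting point for the mechanical stress described above.
    ```python
    # Minimal single-particle sketch (illustrative parameters, not the study's model):
    # explicit finite-difference Fickian diffusion with constant extraction at the surface.
    import numpy as np

    def charge_particle(c0=1.0, D=1e-14, L=1e-6, j_extract=8e-10, t_end=600.0, nx=50):
        """c0: initial sodium content (normalized); D: diffusivity [m^2/s]; L: half-thickness [m];
        j_extract: surface flux [normalized concentration * m/s]; returns the final profile c(x)."""
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / D                        # below the explicit-scheme stability limit
        c = np.full(nx, c0)
        for _ in range(int(t_end / dt)):
            interior = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
            c[1:-1] += dt * D * interior            # Fick's second law in the bulk
            c[0] = c[1]                             # symmetry plane: zero flux
            c[-1] = c[-2] - j_extract * dx / D      # constant extraction flux at the surface
            c = np.clip(c, 0.0, c0)
        return c

    profile = charge_particle()
    print("sodium fraction at surface vs. center:", profile[-1], profile[0])
    ```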
    The findings of the study can be partly transferred to other layered oxides. “Now, we understand basic processes and can work on the development of battery materials that are long-lasting and can be charged as quickly as possible,” Daubner summarizes. This could lead to the widespread use of sodium-ion batteries in five to ten years’ time. More

  • 3D printing robot creates extreme shock-absorbing shape, with help of AI

    Inside a lab in Boston University’s College of Engineering, a robot arm drops small, plastic objects into a box placed perfectly on the floor to catch them as they fall. One by one, these tiny structures — feather-light, cylindrical pieces, no bigger than an inch tall — fill the box. Some are red, others blue, purple, green, or black.
    Each object is the result of an experiment in robot autonomy. On its own, learning as it goes, the robot is searching for, and trying to make, an object with the most efficient energy-absorbing shape to ever exist.
    To do this, the robot creates a small plastic structure with a 3D printer, records its shape and size, moves it to a flat metal surface — and then crushes it with a pressure equivalent to an adult Arabian horse standing on a quarter. The robot then measures how much energy the structure absorbed, how its shape changed after being squashed, and records every detail in a vast database. Then, it drops the crushed object into the box and wipes the metal plate clean, ready to print and test the next piece. It will be ever-so-slightly different from its predecessor, its design and dimensions tweaked by the robot’s computer algorithm based on all past experiments — the basis of what’s called Bayesian optimization. Experiment after experiment, the 3D structures get better at absorbing the impact of getting crushed.
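    The loop below is a schematic of Bayesian optimization in general, with a toy scoring function standing in for the print-and-crush experiment (the design parameters and numbers are made up, and this is not MAMA BEAR's actual code): a Gaussian-process surrogate is fitted to all past results, and the next design is the one with the highest expected improvement.
    ```python
    # Schematic Bayesian-optimization loop with a stand-in "experiment".
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def run_experiment(x):
        """Placeholder for print-and-crush: returns a made-up efficiency-like score."""
        return float(-np.sum((x - 0.6) ** 2) + 0.05 * np.random.randn())

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(5, 3))                 # five random initial designs, three geometry parameters
    y = np.array([run_experiment(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):                                # twenty autonomous iterations
        gp.fit(X, y)
        candidates = rng.uniform(0, 1, size=(2000, 3))
        mu, sigma = gp.predict(candidates, return_std=True)
        z = (mu - y.max()) / np.maximum(sigma, 1e-9)
        ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        x_next = candidates[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, run_experiment(x_next))

    print("best score found:", y.max())
    ```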
    These experiments are possible because of the work of Keith Brown, an ENG associate professor of mechanical engineering, and his team in the KABlab. The robot, named MAMA BEAR — short for its lengthy full title, Mechanics of Additively Manufactured Architectures Bayesian Experimental Autonomous Researcher — has evolved since it was first conceptualized by Brown and his lab in 2018. By 2021, the lab had set the machine on its quest to make a shape that absorbs the most energy, a property known as its mechanical energy absorption efficiency. This current iteration has run continuously for over three years, filling dozens of boxes with more than 25,000 3D-printed structures.
    Why so many shapes? There are countless uses for something that can efficiently absorb energy — say, cushioning for delicate electronics being shipped across the world or for knee pads and wrist guards for athletes. “You could draw from this library of data to make better bumpers in a car, or packaging equipment, for example,” Brown says.
    To work ideally, the structures have to strike the perfect balance: they can’t be so strong that they cause damage to whatever they’re supposed to protect, but should be strong enough to absorb impact. Before MAMA BEAR, the best structure anyone ever observed was about 71 percent efficient at absorbing energy, says Brown. But on a chilly January afternoon in 2023, Brown’s lab watched their robot hit 75 percent efficiency, breaking the known record. The results have just been published in Nature Communications.
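    For context, one common way to define such an efficiency (stated here as general background rather than quoted from the paper) compares the energy a structure absorbs while being crushed with what an ideal absorber would take up if it held the peak force over the whole stroke:
    ```latex
    % A common energy-absorption efficiency metric (general background, not necessarily the paper's exact definition):
    % F(x) is the measured crushing force and d_max the final displacement.
    \eta = \frac{\int_0^{d_{\max}} F(x)\,\mathrm{d}x}{F_{\max}\, d_{\max}}
    ```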
    “When we started out, we didn’t know if there was going to be this record-breaking shape,” says Kelsey Snapp (ENG’25), a PhD student in Brown’s lab who oversees MAMA BEAR. “Slowly but surely we kept inching up, and broke through.”
    The record-breaking structure looks like nothing the researchers would have expected: it has four points, shaped like thin flower petals, and is taller and narrower than the early designs.

    “We’re excited that there’s so much mechanical data here, that we’re using this to learn lessons about design more generally,” Brown says.
    Their extensive data is already getting its first real-life application, helping to inform the design of new helmet padding for US Army soldiers. Brown, Snapp, and project collaborator Emily Whiting, a BU College of Arts & Sciences associate professor of computer science, worked with the US Army and went through field testing to ensure helmets using their patent-pending padding are comfortable and provide sufficient protection from impact. The 3D structure used for the padding is different from the record-breaking piece — with a softer center and shorter stature to help with comfort.
    MAMA BEAR is not Brown’s only autonomous research robot. His lab has other “BEAR” robots performing different tasks — like the nano BEAR, which studies the way materials behave at the molecular scale using a technology called atomic force microscopy. Brown has also been working with Jörg Werner, an ENG assistant professor of mechanical engineering, to develop another system, the PANDA BEAR — short for Polymer Analysis and Discovery Array — which tests thousands of thin polymer materials to find one that works best in a battery.
    “They’re all robots that do research,” Brown says. “The philosophy is that they’re using machine learning together with automation to help us do research much faster.”
    “Not just faster,” adds Snapp. “You can do things you couldn’t normally do. We can reach a structure or goal that we wouldn’t have been able to achieve otherwise, because it would have been too expensive and time-consuming.” He has worked closely with MAMA BEAR since the experiments began in 2021, and gave the robot its ability to see — known as machine vision — and clean its own test plate.
    The KABlab is hoping to further demonstrate the importance of autonomous research. Brown wants to keep collaborating with scientists in various fields who need to test incredibly large numbers of structures and solutions. Even though they already broke a record, “we have no ability to know if we’ve reached the maximum efficiency,” Brown says, meaning they could possibly break it again. So, MAMA BEAR will keep on running, pushing boundaries further, while Brown and his team see what other applications the database can be useful for. They’re also exploring how the more than 25,000 crushed pieces can be unwound and reloaded into the 3D printers so the material can be recycled for more experiments.
    “We’re going to keep studying this system, because mechanical efficiency, like so many other material properties, is only accurately measured by experiment,” Brown says, “and using self-driving labs helps us pick the best experiments and perform them as fast as possible.” More

  • Improving statistical methods to protect wildlife populations

    In human populations, it is relatively easy to calculate demographic trends and make projections of the future when data are available on the basic processes that add individuals to a population, such as births and immigration, and on those that subtract them, such as deaths and emigration. In the wild, on the other hand, understanding the processes that determine wildlife demographic patterns is a highly complex challenge for the scientific community. Although a wide range of methods are now available to estimate births and deaths in wildlife, quantifying emigration and immigration has historically been difficult or impossible in many populations of interest, particularly in the case of threatened species.
    A paper published in the journal Biological Conservation warns that missing data on emigration and immigration movements in wildlife can lead to significant biases in species’ demographic projections. As a result, projections about the short-, medium- and long-term future of study populations may be inadequate. This puts their survival at risk due to the implementation of erroneous or ineffective conservation strategies. The authors of the new study are Joan Real, Jaume A. Badia-Boher and Antonio Hernández-Matías, from the Conservation Biology team of the Faculty of Biology of the University of Barcelona and the Institute for Research on Biodiversity (IRBio).
    More reliable population predictions
    This new study on population biology is based on data collected from 2008 to 2020 on the population of Bonelli’s eagle (Aquila fasciata), a threatened species found in Catalonia in the coastal and pre-coastal mountain ranges, from the Empordà to Terres de L’Ebre. In the study, the team emphasises how more precise population viability analysis (PVA) methodology can improve the management and conservation of long-lived species in the natural environment.
    “Population viability analyses are a set of methods that allow us to project the demography of a species into the future, mainly to quantify the probability of extinction of a given species or population of interest,” says Joan Real, professor at the Department of Evolutionary Biology, Ecology and Environmental Sciences and head of the Conservation Biology team.
    “To date,” he continues, “these projections have mostly been carried out with data on births and deaths only, while migration processes were ignored because of the difficulty of obtaining such data. In other words, we have been trying to make demographic projections without considering two key demographic processes.”
    Threats affecting more and more species
    In the study of wildlife, population models that do not incorporate immigration or emigration “have a considerable probability of leading to biased projections of future population trends. However, explicitly considering migratory processes allows us to consider all the key demographic processes that determine the future trend of a population,” says expert Jaume A. Badia-Boher, first author of the study. “This allows us to be much more precise when making demographic predictions, and therefore also when planning future conservation strategies,” he adds.

    The development of new and more sophisticated statistical methods over the last decade has made it possible to estimate emigration and immigration in a much more accessible way than before. Including these processes in population viability analyses is therefore relatively straightforward, the paper details.
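    A schematic projection step makes the point; it is an illustrative simplification, not the study's actual model. With adult survival s, per-capita recruitment f, per-capita emigration rate e and I_t immigrants arriving in year t, the projected population is
    ```latex
    % Schematic one-year projection including migration (illustrative only)
    N_{t+1} = N_t \,(s + f - e) + I_t
    ```
    Dropping e and I_t, as many traditional viability analyses have implicitly done, biases the projection whenever net migration is not zero.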
    “This new perspective may imply a relevant advance in the reliability of population viability analyses, which will allow us to estimate the future trend of populations more accurately and propose conservation actions more efficiently,” notes Professor Antonio Hernández-Matías. “This is of great importance given that in the current context of global change the extinction rates of species are increasing, and more and more species require urgent and effective conservation actions to reverse their decline,” the expert says.
    Applying methodological developments to conserve biodiversity
    Introducing changes in the structure and modelling of population viability analyses can lead to multiple benefits in many areas of biodiversity research and conservation. “Methodological advances are effective when they are applied. For this reason, the application of the new methodology in populations and species of conservation interest should be promoted. It is a priority to make these methodologies known to the scientific community, managers and administration, in order to prioritise conservation actions with the best available methods,” say the authors.
    “In the future, new methodologies must continue to be developed, as has been done in this study, as they are key to understanding how wild populations function, what measures need to be implemented to conserve them, and how to make these measures as efficient as possible. In the case of endangered species such as the Bonelli’s eagle, knowing the rates of emigration and immigration is key to understanding the state of self-sustainability of a population, and thus implementing efficient conservation measures,” concludes the team. More

  • How AI helps programming a quantum computer

    Researchers from the University of Innsbruck have unveiled a novel method to prepare quantum operations on a given quantum computer, using a machine learning generative model to find the appropriate sequence of quantum gates to execute a quantum operation. The study, recently published in Nature Machine Intelligence, marks a significant step forward in unleashing the full potential of quantum computing.
    Generative models like diffusion models are one of the most important recent developments in machine learning (ML), with models such as Stable Diffusion and DALL·E revolutionizing the field of image generation. These models are able to produce high-quality images based on a text description. “Our new model for programming quantum computers does the same but, instead of generating images, it generates quantum circuits based on the text description of the quantum operation to be performed,” explains Gorka Muñoz-Gil from the Department of Theoretical Physics of the University of Innsbruck, Austria.
    To prepare a certain quantum state or execute an algorithm on a quantum computer, one needs to find the appropriate sequence of quantum gates to perform such operations. While this is rather easy in classical computing, it is a great challenge in quantum computing, due to the particularities of the quantum world. Recently, many scientists have proposed methods to build quantum circuits, with many relying on machine learning methods. However, training these ML models is often very hard due to the necessity of simulating quantum circuits as the machine learns.
    Diffusion models avoid such problems due to the way they are trained. “This provides a tremendous advantage,” explains Gorka Muñoz-Gil, who developed the novel method together with Hans J. Briegel and Florian Fürrutter. “Moreover, we show that denoising diffusion models are accurate in their generation and also very flexible, allowing us to generate circuits with different numbers of qubits, as well as types and numbers of quantum gates.” The models can also be tailored to prepare circuits that take into account the connectivity of the quantum hardware, i.e. how qubits are connected in the quantum computer. “As producing new circuits is very cheap once the model is trained, one can use it to discover new insights about quantum operations of interest,” says Gorka Muñoz-Gil, naming another potential benefit of the new method.
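    To make the hardware-connectivity constraint concrete, the sketch below uses Qiskit's conventional compiler, the standard route that the Innsbruck generative model complements (this is not the authors' code): the same three-qubit operation is remapped onto a device in which only neighbouring qubits can interact directly.
    ```python
    # Illustration of a hardware-connectivity constraint with Qiskit's standard compiler.
    from qiskit import QuantumCircuit, transpile
    from qiskit.transpiler import CouplingMap

    # Target operation: prepare a three-qubit GHZ state.
    qc = QuantumCircuit(3)
    qc.h(0)
    qc.cx(0, 1)
    qc.cx(0, 2)

    # Linear device: qubit 0 couples only to 1, and 1 only to 2 (no direct 0-2 gate).
    coupling = CouplingMap([[0, 1], [1, 2]])
    mapped = transpile(qc, coupling_map=coupling, basis_gates=["cx", "rz", "sx", "x"], optimization_level=1)
    print(mapped)
    ```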
    The method developed at the University of Innsbruck produces quantum circuits based on user specifications and tailored to the features of the quantum hardware the circuit will be run on. The work was financially supported by the Austrian Science Fund FWF and the European Union, among others. More

  • AI can help improve ER admission decisions

    Generative artificial intelligence (AI), such as GPT-4, can help predict whether an emergency room patient needs to be admitted to the hospital even with only minimal training on a limited number of records, according to investigators at the Icahn School of Medicine at Mount Sinai. Details of the research were published in the May 21 online issue of the Journal of the American Medical Informatics Association.
    In the retrospective study, the researchers analyzed records from seven Mount Sinai Health System hospitals, using both structured data, such as vital signs, and unstructured data, such as nurse triage notes, from more than 864,000 emergency room visits while excluding identifiable patient data. Of these visits, 159,857 (18.5 percent) led to the patient being admitted to the hospital.
    The researchers compared GPT-4 against traditional machine-learning models, such as Bio-Clinical-BERT for text and XGBoost for structured data, in various scenarios, assessing its ability to predict hospital admissions both on its own and in combination with the traditional methods.
    “We were motivated by the need to test whether generative AI, specifically large language models (LLMs) like GPT-4, could improve our ability to predict admissions in high-volume settings such as the Emergency Department,” says co-senior author Eyal Klang, MD, Director of the Generative AI Research Program in the Division of Data-Driven and Digital Medicine (D3M) at Icahn Mount Sinai. “Our goal is to enhance clinical decision-making through this technology. We were surprised by how well GPT-4 adapted to the ER setting and provided reasoning for its decisions. This capability of explaining its rationale sets it apart from traditional models and opens up new avenues for AI in medical decision-making.”
    While traditional machine-learning models use millions of records for training, LLMs can effectively learn from just a few examples. Moreover, according to the researchers, LLMs can incorporate traditional machine-learning predictions, improving performance.
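    A minimal sketch of what such few-shot prompting could look like is shown below; the prompt wording, example cases, model choice and use of the OpenAI Python client are illustrative assumptions, not details taken from the study. Note how a traditional model's output can simply be folded into the prompt.
    ```python
    # Hypothetical few-shot admission-prediction prompt (requires OPENAI_API_KEY to be set).
    from openai import OpenAI

    client = OpenAI()

    FEW_SHOT = """You predict whether an ER patient will be admitted. Answer ADMIT or DISCHARGE.

    Example 1
    Triage note: 78-year-old, shortness of breath, SpO2 88% on room air, HR 112.
    Structured-model (XGBoost) admission probability: 0.81
    Answer: ADMIT

    Example 2
    Triage note: 24-year-old, ankle sprain after a fall, vitals normal.
    Structured-model (XGBoost) admission probability: 0.04
    Answer: DISCHARGE
    """

    def predict_admission(triage_note, xgb_probability):
        prompt = (FEW_SHOT
                  + f"\nNew case\nTriage note: {triage_note}\n"
                  + f"Structured-model (XGBoost) admission probability: {xgb_probability:.2f}\nAnswer:")
        response = client.chat.completions.create(
            model="gpt-4",                          # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            max_tokens=3,
            temperature=0,
        )
        return response.choices[0].message.content.strip()

    print(predict_admission("56-year-old, chest pain radiating to the left arm, diaphoretic.", 0.62))
    ```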
    “Our research suggests that AI could soon support doctors in emergency rooms by making quick, informed decisions about patient admissions. This work opens the door for further innovation in health care AI, encouraging the development of models that can reason and learn from limited data, like human experts do,” says co-senior author Girish N. Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of D3M. “However, while the results are encouraging, the technology is still in a supportive role, enhancing the decision-making process by providing additional insights, not taking over the human component of health care, which remains critical.”
    The research team is investigating how to apply large language models to health care systems, with the goal of harmoniously integrating them with traditional machine-learning methods to address complex challenges and decision-making in real-time clinical settings.

    “Our study informs how LLMs can be integrated into health care operations. The ability to rapidly train LLMs highlights their potential to provide valuable insights even in complex environments like health care,” says Brendan Carr, MD, MA, MS, a study co-author and emergency room physician who is Chief Executive Officer of Mount Sinai Health System. “Our study sets the stage for further research on AI integration in health care across the many domains of diagnostic, treatment, operational, and administrative tasks that require continuous optimization.”
    The paper is titled “Evaluating the accuracy of a state-of-the-art large language model for prediction of admissions from the emergency room.”
    The remaining authors of the paper, all with Icahn Mount Sinai, are Benjamin S. Glicksberg, PhD; Dhaval Patel, BS; Ashwin Sawant, MD; Akhil Vaid, MD; Ganesh Raut, BS; Alexander W. Charney, MD, PhD; Donald Apakama, MD; and Robert Freeman, RN.
    The work was supported by the National Heart Lung and Blood Institute NIH grant 5R01HL141841-05. More