More stories

  • 3D printing robot creates extreme shock-absorbing shape, with help of AI

    Inside a lab in Boston University’s College of Engineering, a robot arm drops small, plastic objects into a box placed perfectly on the floor to catch them as they fall. One by one, these tiny structures — feather-light, cylindrical pieces, no bigger than an inch tall — fill the box. Some are red, others blue, purple, green, or black.
    Each object is the result of an experiment in robot autonomy. On its own, learning as it goes, the robot is searching for, and trying to make, an object with the most efficient energy-absorbing shape to ever exist.
    To do this, the robot creates a small plastic structure with a 3D printer, records its shape and size, moves it to a flat metal surface — and then crushes it with a pressure equivalent to an adult Arabian horse standing on a quarter. The robot then measures how much energy the structure absorbed, how its shape changed after being squashed, and records every detail in a vast database. Then, it drops the crushed object into the box and wipes the metal plate clean, ready to print and test the next piece. It will be ever-so-slightly different from its predecessor, its design and dimensions tweaked by the robot’s computer algorithm based on all past experiments — the basis of what’s called Bayesian optimization. Experiment after experiment, the 3D structures get better at absorbing the impact of getting crushed.
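    The loop just described (print a design, crush it, record the result, let the algorithm propose the next design) is the basic pattern of Bayesian optimization. The sketch below is a toy illustration only, not the KABlab's code: it uses a Gaussian-process surrogate model and an expected-improvement rule over two hypothetical design parameters, with a made-up `crush_test` function standing in for the physical print-and-crush cycle.

    ```python
    # Toy Bayesian-optimization loop in the spirit of MAMA BEAR's workflow.
    # The design space, the simulated "crush test", and the parameter names are
    # hypothetical stand-ins for the robot's real print-and-test cycle.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)

    def crush_test(design):
        """Pretend lab: returns a simulated energy-absorption efficiency (higher is better)."""
        thickness, height = design
        return float(np.exp(-((thickness - 0.6) ** 2 + (height - 1.4) ** 2))
                     + 0.02 * rng.standard_normal())

    # Start with a few random designs (thickness, height), then iterate.
    low, high = [0.1, 0.5], [1.0, 2.5]
    X = rng.uniform(low, high, size=(5, 2))
    y = np.array([crush_test(x) for x in X])

    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(25):                      # each pass = one printed-and-crushed part
        gp.fit(X, y)
        candidates = rng.uniform(low, high, size=(2000, 2))
        mu, sigma = gp.predict(candidates, return_std=True)
        best = y.max()
        # Expected improvement: favor designs that look promising or unexplored.
        z = (mu - best) / np.maximum(sigma, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        next_design = candidates[np.argmax(ei)]
        X = np.vstack([X, next_design])
        y = np.append(y, crush_test(next_design))

    print("best efficiency found:", round(y.max(), 3), "at design", X[np.argmax(y)])
    ```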
    These experiments are possible because of the work of Keith Brown, an ENG associate professor of mechanical engineering, and his team in the KABlab. The robot, named MAMA BEAR — short for its lengthy full title, Mechanics of Additively Manufactured Architectures Bayesian Experimental Autonomous Researcher — has evolved since it was first conceptualized by Brown and his lab in 2018. By 2021, the lab had set the machine on its quest to make a shape that absorbs the most energy, a property known as its mechanical energy absorption efficiency. This current iteration has run continuously for over three years, filling dozens of boxes with more than 25,000 3D-printed structures.
    Why so many shapes? There are countless uses for something that can efficiently absorb energy — say, cushioning for delicate electronics being shipped across the world or for knee pads and wrist guards for athletes. “You could draw from this library of data to make better bumpers in a car, or packaging equipment, for example,” Brown says.
    To work ideally, the structures have to strike the perfect balance: they can’t be so strong that they cause damage to whatever they’re supposed to protect, but should be strong enough to absorb impact. Before MAMA BEAR, the best structure anyone ever observed was about 71 percent efficient at absorbing energy, says Brown. But on a chilly January afternoon in 2023, Brown’s lab watched their robot hit 75 percent efficiency, breaking the known record. The results have just been published in Nature Communications.
    “When we started out, we didn’t know if there was going to be this record-breaking shape,” says Kelsey Snapp (ENG’25), a PhD student in Brown’s lab who oversees MAMA BEAR. “Slowly but surely we kept inching up, and broke through.”
    The record-breaking structure looks like nothing the researchers would have expected: it has four points, shaped like thin flower petals, and is taller and narrower than the early designs.

    “We’re excited that there’s so much mechanical data here, that we’re using this to learn lessons about design more generally,” Brown says.
    Their extensive data is already getting its first real-life application, helping to inform the design of new helmet padding for US Army soldiers. Brown, Snapp, and project collaborator Emily Whiting, a BU College of Arts & Sciences associate professor of computer science, worked with the US Army and went through field testing to ensure helmets using their patent-pending padding are comfortable and provide sufficient protection from impact. The 3D structure used for the padding is different from the record-breaking piece — with a softer center and shorter stature to help with comfort.
    MAMA BEAR is not Brown’s only autonomous research robot. His lab has other “BEAR” robots performing different tasks — like the nano BEAR, which studies the way materials behave at the molecular scale using a technology called atomic force microscopy. Brown has also been working with Jörg Werner, an ENG assistant professor of mechanical engineering, to develop another system, known as the PANDA BEAR — short for Polymer Analysis and Discovery Array — to test thousands of thin polymer materials to find one that works best in a battery.
    “They’re all robots that do research,” Brown says. “The philosophy is that they’re using machine learning together with automation to help us do research much faster.”
    “Not just faster,” adds Snapp. “You can do things you couldn’t normally do. We can reach a structure or goal that we wouldn’t have been able to achieve otherwise, because it would have been too expensive and time-consuming.” He has worked closely with MAMA BEAR since the experiments began in 2021, and gave the robot its ability to see — known as machine vision — and clean its own test plate.
    The KABlab is hoping to further demonstrate the importance of autonomous research. Brown wants to keep collaborating with scientists in various fields who need to test incredibly large numbers of structures and solutions. Even though they already broke a record, “we have no ability to know if we’ve reached the maximum efficiency,” Brown says, meaning they could possibly break it again. So, MAMA BEAR will keep on running, pushing boundaries further, while Brown and his team see what other applications the database can be useful for. They’re also exploring how the more than 25,000 crushed pieces can be unwound and reloaded into the 3D printers so the material can be recycled for more experiments.
    “We’re going to keep studying this system, because mechanical efficiency, like so many other material properties, is only accurately measured by experiment,” Brown says, “and using self-driving labs helps us pick the best experiments and perform them as fast as possible.”

  • Improving statistical methods to protect wildlife populations

    In human populations, it is relatively easy to calculate demographic trends and make projections of the future when data are available on the basic processes that add individuals (births and immigration) and those that subtract them (deaths and emigration). In the wild, on the other hand, understanding the processes that determine wildlife demographic patterns is a highly complex challenge for the scientific community. Although a wide range of methods are now available to estimate births and deaths in wildlife, quantifying emigration and immigration has historically been difficult or impossible in many populations of interest, particularly in the case of threatened species.
    A paper published in the journal Biological Conservation warns that missing data on emigration and immigration movements in wildlife can lead to significant biases in species’ demographic projections. As a result, projections about the short-, medium- and long-term future of study populations may be inadequate. This puts their survival at risk due to the implementation of erroneous or ineffective conservation strategies. The authors of the new study are Joan Real, Jaume A. Badia-Boher and Antonio Hernández-Matías, from the Conservation Biology team of the Faculty of Biology of the University of Barcelona and the Institute for Research on Biodiversity (IRBio).
    More reliable population predictions
    This new study on population biology is based on data collected from 2008 to 2020 on the population of Bonelli’s eagle (Aquila fasciata), a threatened species found in Catalonia in the coastal and pre-coastal mountain ranges, from the Empordà to Terres de L’Ebre. In the study, the team emphasises how improving the precision of the population viability analysis (PVA) methodology can strengthen the management and conservation of long-lived species in the natural environment.
    “Population viability analyses are a set of methods that allow us to project the demography of a species into the future, mainly to quantify the probability of extinction of a given species or population of interest,” says Joan Real, professor at the Department of Evolutionary Biology, Ecology and Environmental Sciences and head of the Conservation Biology team.
    “To date,” he continues, “these projections have mostly been carried out with data on births and deaths alone, and migration processes were ignored because of the difficulty of obtaining these data. In other words, we are trying to make demographic projections without considering two key demographic processes.”
    Threats affecting more and more species
    In the study of wildlife, population models that do not incorporate immigration or emigration “have a considerable probability of leading to biased projections of future population trends. However, explicitly considering migratory processes allows us to consider all the key demographic processes that determine the future trend of a population,” says expert Jaume A. Badia-Boher, first author of the study. “This allows us to be much more precise when making demographic predictions, and therefore also when planning future conservation strategies,” he adds.

    The development of new and more sophisticated statistical methods over the last decade has made it possible to estimate emigration and immigration in a much more accessible way than before. Including these processes in population viability analyses is therefore relatively straightforward, the paper details.
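    As a minimal, hypothetical sketch of why the missing terms matter, the projection below uses the simple balance the authors describe: births and immigration add individuals, deaths and emigration remove them. All rates are invented for illustration and have no relation to the Bonelli’s eagle data.

    ```python
    # Toy deterministic projection: N[t+1] = N[t] * (1 + birth_rate - death_rate)
    #                                        + immigrants - emigrants
    # Real population viability analyses are stochastic and far more detailed
    # (age classes, environmental variation, density dependence, etc.).
    def project(n0, years, birth_rate, death_rate, immigrants=0.0, emigrants=0.0):
        trajectory = [float(n0)]
        for _ in range(years):
            n = trajectory[-1] * (1 + birth_rate - death_rate) + immigrants - emigrants
            trajectory.append(max(n, 0.0))
        return trajectory

    # Same vital rates, with and without the migration terms (all values invented).
    without_migration = project(100, 20, birth_rate=0.30, death_rate=0.34)
    with_migration    = project(100, 20, birth_rate=0.30, death_rate=0.34,
                                immigrants=6, emigrants=2)

    print(round(without_migration[-1]), round(with_migration[-1]))
    # Ignoring migration, the population appears to decline steadily; with a small
    # net inflow of immigrants, the same vital rates yield a stable population.
    ```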
    “This new perspective may imply a relevant advance in the reliability of population viability analyses, which will allow us to estimate the future trend of populations more accurately and propose conservation actions more efficiently,” notes Professor Antonio Hernández-Matías. “This is of great importance given that in the current context of global change the extinction rates of species are increasing, and more and more species require urgent and effective conservation actions to reverse their decline,” the expert says.
    Applying methodological developments to conserve biodiversity
    Introducing changes in the structure and modelling of population viability analyses can lead to multiple benefits in many areas of biodiversity research and conservation. “Methodological advances are effective when they are applied. For this reason, the application of the new methodology in populations and species of conservation interest should be promoted. It is a priority to make these methodologies known to the scientific community, managers and administration, in order to prioritise conservation actions with the best available methods,” say the authors.
    “In the future, new methodologies must continue to be developed, as has been done in this study, as they are key to understanding how wild populations function, what measures need to be implemented to conserve them, and how to make these measures as efficient as possible. In the case of endangered species such as the Bonelli’s eagle, knowing the rates of emigration and immigration is key to understanding the state of self-sustainability of a population, and thus implementing efficient conservation measures,” concludes the team.

  • How AI helps program a quantum computer

    Researchers from the University of Innsbruck have unveiled a novel method to prepare quantum operations on a given quantum computer, using a machine learning generative model to find the appropriate sequence of quantum gates to execute a quantum operation. The study, recently published in Nature Machine Intelligence, marks a significant step toward unleashing the full potential of quantum computing.
    Generative models like diffusion models are among the most important recent developments in machine learning (ML), with models such as Stable Diffusion and DALL-E revolutionizing the field of image generation. These models are able to produce high-quality images based on a text description. “Our new model for programming quantum computers does the same but, instead of generating images, it generates quantum circuits based on the text description of the quantum operation to be performed,” explains Gorka Muñoz-Gil from the Department of Theoretical Physics of the University of Innsbruck, Austria.
    To prepare a certain quantum state or execute an algorithm on a quantum computer, one needs to find the appropriate sequence of quantum gates to perform such operations. While this is rather easy in classical computing, it is a great challenge in quantum computing, due to the particularities of the quantum world. Recently, many scientists have proposed methods to build quantum circuits, many of them relying on machine learning. However, training these ML models is often very hard because quantum circuits must be simulated as the machine learns.
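    To make the task concrete, here is a textbook example rather than the Innsbruck method itself: preparing even a simple entangled two-qubit state requires a specific gate sequence, in this case a Hadamard followed by a CNOT, written out below with plain NumPy matrices.

    ```python
    # Preparing the Bell state (|00> + |11>)/sqrt(2) from |00> with two gates:
    # a Hadamard on qubit 0 followed by a CNOT controlled on qubit 0.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I2 = np.eye(2)                                 # identity on one qubit
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
    state = np.kron(H, I2) @ state                 # apply H to the first qubit
    state = CNOT @ state                           # entangle the two qubits

    print(np.round(state.real, 3))                 # [0.707 0. 0. 0.707]
    # The generative model's job is to output such gate sequences automatically,
    # from a text description of the target operation, for much larger circuits.
    ```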
    Diffusion models avoid such problems because of the way they are trained. “This provides a tremendous advantage,” explains Gorka Muñoz-Gil, who developed the novel method together with Hans J. Briegel and Florian Fürrutter. “Moreover, we show that denoising diffusion models are accurate in their generation and also very flexible, allowing us to generate circuits with different numbers of qubits, as well as different types and numbers of quantum gates.” The models can also be tailored to prepare circuits that take into account the connectivity of the quantum hardware, i.e., how qubits are connected in the quantum computer. “As producing new circuits is very cheap once the model is trained, one can use it to discover new insights about quantum operations of interest,” says Muñoz-Gil, pointing to another potential of the new method.
    The method developed at the University of Innsbruck produces quantum circuits based on user specifications and tailored to the features of the quantum hardware the circuit will be run on. The work was financially supported by the Austrian Science Fund FWF and the European Union, among others.

  • AI can help improve ER admission decisions

    Generative artificial intelligence (AI), such as GPT-4, can help predict whether an emergency room patient needs to be admitted to the hospital even with only minimal training on a limited number of records, according to investigators at the Icahn School of Medicine at Mount Sinai. Details of the research were published in the May 21 online issue of the Journal of the American Medical Informatics Association.
    In the retrospective study, the researchers analyzed records from seven Mount Sinai Health System hospitals, using both structured data, such as vital signs, and unstructured data, such as nurse triage notes, from more than 864,000 emergency room visits while excluding identifiable patient data. Of these visits, 159,857 (18.5 percent) led to the patient being admitted to the hospital.
    The researchers compared GPT-4 against traditional machine-learning models such as Bio-Clinical-BERT for text and XGBoost for structured data in various scenarios, assessing its ability to predict hospital admissions both on its own and in combination with the traditional methods.
    “We were motivated by the need to test whether generative AI, specifically large language models (LLMs) like GPT-4, could improve our ability to predict admissions in high-volume settings such as the Emergency Department,” says co-senior author Eyal Klang, MD, Director of the Generative AI Research Program in the Division of Data-Driven and Digital Medicine (D3M) at Icahn Mount Sinai. “Our goal is to enhance clinical decision-making through this technology. We were surprised by how well GPT-4 adapted to the ER setting and provided reasoning for its decisions. This capability of explaining its rationale sets it apart from traditional models and opens up new avenues for AI in medical decision-making.”
    While traditional machine-learning models use millions of records for training, LLMs can effectively learn from just a few examples. Moreover, according to the researchers, LLMs can incorporate traditional machine-learning predictions, improving performance.
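    As an illustration of what few-shot prompting for this task could look like (the study's actual prompts and data handling are not detailed here, and the example cases, field formats and `call_llm` helper below are hypothetical):

    ```python
    # Hypothetical few-shot prompt for admission prediction from triage data.
    # The example cases are invented, and call_llm() is a placeholder for
    # whatever chat-completion API a given deployment uses.
    EXAMPLES = [
        ("72F, BP 85/50, HR 118, triage note: short of breath, history of CHF", "ADMIT"),
        ("24M, BP 122/78, HR 80, triage note: ankle sprain after basketball", "DISCHARGE"),
        ("58M, BP 150/95, HR 102, triage note: chest pain radiating to left arm", "ADMIT"),
    ]

    def build_prompt(new_case: str) -> str:
        lines = [
            "You are assisting with emergency department triage.",
            "Given vitals and the nurse triage note, answer ADMIT or DISCHARGE.",
            "",
        ]
        for case, label in EXAMPLES:            # the "few shots"
            lines.append(f"Case: {case}\nAnswer: {label}\n")
        lines.append(f"Case: {new_case}\nAnswer:")
        return "\n".join(lines)

    def predict_admission(new_case: str, call_llm) -> str:
        """call_llm is any function mapping a prompt string to the model's reply."""
        reply = call_llm(build_prompt(new_case))
        return "ADMIT" if "ADMIT" in reply.upper() else "DISCHARGE"
    ```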
    “Our research suggests that AI could soon support doctors in emergency rooms by making quick, informed decisions about patient admissions. This work opens the door for further innovation in health care AI, encouraging the development of models that can reason and learn from limited data, like human experts do,” says co-senior author Girish N. Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of D3M. “However, while the results are encouraging, the technology is still in a supportive role, enhancing the decision-making process by providing additional insights, not taking over the human component of health care, which remains critical.”
    The research team is investigating how to apply large language models to health care systems, with the goal of harmoniously integrating them with traditional machine-learning methods to address complex challenges and decision-making in real-time clinical settings.

    “Our study informs how LLMs can be integrated into health care operations. The ability to rapidly train LLMs highlights their potential to provide valuable insights even in complex environments like health care,” says Brendan Carr, MD, MA, MS, a study co-author and emergency room physician who is Chief Executive Officer of Mount Sinai Health System. “Our study sets the stage for further research on AI integration in health care across the many domains of diagnostic, treatment, operational, and administrative tasks that require continuous optimization.”
    The paper is titled “Evaluating the accuracy of a state-of-the-art large language model for prediction of admissions from the emergency room.”
    The remaining authors of the paper, all with Icahn Mount Sinai, are Benjamin S. Glicksberg, PhD; Dhaval Patel, BS; Ashwin Sawant, MD; Akhil Vaid, MD; Ganesh Raut, BS; Alexander W. Charney, MD, PhD; Donald Apakama, MD; and Robert Freeman, RN.
    The work was supported by the National Heart Lung and Blood Institute NIH grant 5R01HL141841-05.

  • Age, race impact AI performance on digital mammograms

    In a study of nearly 5,000 screening mammograms interpreted by an FDA-approved AI algorithm, patient characteristics such as race and age influenced false positive results. The study’s results were published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    “AI has become a resource for radiologists to improve their efficiency and accuracy in reading screening mammograms while mitigating reader burnout,” said Derek L. Nguyen, M.D., assistant professor at Duke University in Durham, North Carolina. “However, the impact of patient characteristics on AI performance has not been well studied.”
    Dr. Nguyen said while preliminary data suggests that AI algorithms applied to screening mammography exams may improve radiologists’ diagnostic performance for breast cancer detection and reduce interpretation time, there are some aspects of AI to be aware of.
    “There are few demographically diverse databases for AI algorithm training, and the FDA does not require diverse datasets for validation,” he said. “Because of the differences among patient populations, it’s important to investigate whether AI software can accommodate and perform at the same level for different patient ages, races and ethnicities.”
    In the retrospective study, researchers identified patients with negative (no evidence of cancer) digital breast tomosynthesis screening examinations performed at Duke University Medical Center between 2016 and 2019. All patients were followed for a two-year period after the screening mammograms, and no patients were diagnosed with a breast malignancy.
    The researchers randomly selected a subset of this group consisting of 4,855 patients (median age 54 years) broadly distributed across four ethnic/racial groups. The subset included 1,316 (27%) white, 1,261 (26%) Black, 1,351 (28%) Asian, and 927 (19%) Hispanic patients.
    A commercially available AI algorithm interpreted each exam in the subset of mammograms, generating both a case score (or certainty of malignancy) and a risk score (or one-year subsequent malignancy risk).

    “Our goal was to evaluate whether an AI algorithm’s performance was uniform across age, breast density types and different patient race/ethnicities,” Dr. Nguyen said.
    Given all mammograms in the study were negative for the presence of cancer, anything flagged as suspicious by the algorithm was considered a false positive result. False positive case scores were significantly more likely in Black and older patients (71-80 years) and less likely in Asian patients and younger patients (41-50 years) compared to white patients and women between the ages of 51 and 60.
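    For readers unfamiliar with the setup, a toy sketch of the comparison follows: because every exam in the cohort is cancer-free, the false positive rate within each group is simply the share of exams the algorithm flags as suspicious. The column names, scores and threshold below are invented, and the published analysis is more rigorous than these raw group means.

    ```python
    # Illustrative only: flag rate by group in an all-negative screening cohort.
    # Column names, scores and the cutoff are hypothetical; real analyses adjust
    # for factors such as age and breast density rather than comparing raw rates.
    import pandas as pd

    exams = pd.DataFrame({
        "race_ethnicity": ["White", "Black", "Asian", "Hispanic", "Black", "White"],
        "age_group":      ["51-60", "71-80", "41-50", "51-60", "61-70", "41-50"],
        "ai_case_score":  [12, 67, 8, 30, 55, 21],   # 0-100 certainty of malignancy
    })

    FLAG_THRESHOLD = 50   # hypothetical cutoff for "suspicious"
    exams["false_positive"] = exams["ai_case_score"] >= FLAG_THRESHOLD

    # Every flag is a false positive, since no patient in the cohort had cancer.
    print(exams.groupby("race_ethnicity")["false_positive"].mean())
    print(exams.groupby("age_group")["false_positive"].mean())
    ```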
    “This study is important because it highlights that any AI software purchased by a healthcare institution may not perform equally across all patient ages, races/ethnicities and breast densities,” Dr. Nguyen said. “Moving forward, I think AI software upgrades should focus on ensuring demographic diversity.”
    Dr. Nguyen said healthcare institutions should understand the patient population they serve before purchasing an AI algorithm for screening mammogram interpretation and ask vendors about their algorithm training.
    “Having a baseline knowledge of your institution’s demographics and asking the vendor about the ethnic and age diversity of their training data will help you understand the limitations you’ll face in clinical practice,” he said.

  • Math discovery provides new method to study cell activity, aging

    New mathematical tools revealing how quickly cell proteins break down are poised to uncover deeper insights into how we age, according to a recently published paper co-authored by a Mississippi State researcher and his colleagues from Harvard Medical School and the University of Cambridge.
    Galen Collins, assistant professor in MSU’s Department of Biochemistry, Molecular Biology, Entomology and Plant Pathology, co-authored the groundbreaking paper published in the Proceedings of the National Academy of Sciences, or PNAS, in April.
    “We already understand how quickly proteins are made, which can happen in a matter of minutes,” said Collins, who is also a scientist in the Mississippi Agricultural and Forestry Experiment Station. “Until now, we’ve had a very poor understanding of how much time it takes them to break down.”
    The paper in applied mathematics, “Maximum entropy determination of mammalian proteome dynamics,” presents the new tools that quantify the degradation rates of cell proteins — how quickly they break down — helping us understand how cells grow and die and how we age. Proteins — complex molecules made from various combinations of amino acids — carry the bulk of the workload within a cell, providing its structure, responding to messages from outside the cell and removing waste.
    The results proved that not all proteins degrade at the same pace but instead fall into one of three categories, breaking down over the course of minutes, hours or days. While previous research has examined cell protein breakdown, this study was the first to quantify mathematically the degradation rates of all cell protein molecules, using a technique called maximum entropy.
    “For certain kinds of scientific questions, experiments can often reveal infinitely many possible answers; however, they are not all equally plausible,” said lead author Alexander Dear, research fellow in applied mathematics at Harvard University. “The principle of maximum entropy is a mathematical law that shows us how to precisely calculate the plausibility of each answer — its ‘entropy’ — so that we can choose the one that is the most likely.”
    “This kind of math is sort of like a camera that zooms in on your license plate from far away and figures out what the numbers should be,” Collins said. “Maximum entropy gives us a clear and precise picture of how protein degradation occurs in cells.”
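    As a toy illustration of the maximum-entropy principle itself (not the paper's model): among all probability distributions over a set of candidate degradation rates that reproduce a measured average rate, the maximum-entropy choice takes an exponential form, and a single parameter is tuned until the constraint is satisfied. The rate values and target mean below are made up.

    ```python
    # Toy maximum-entropy fit over three candidate degradation rates (per hour),
    # roughly corresponding to half-lives of minutes, hours and days.
    # The rates and the "measured" mean are invented for illustration.
    import numpy as np

    rates = np.array([10.0, 0.2, 0.01])   # candidate degradation rates
    target_mean = 1.5                      # hypothetical measured average rate

    def weights(lam):
        # Maximum-entropy distribution under a mean constraint: p_i ~ exp(-lam * r_i)
        w = np.exp(-lam * rates)
        return w / w.sum()

    def mean_rate(lam):
        return float(weights(lam) @ rates)

    # mean_rate() decreases as lam grows, so find lam by bisection.
    lo, hi = -5.0, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mean_rate(mid) > target_mean:
            lo = mid
        else:
            hi = mid

    p = weights(0.5 * (lo + hi))
    print("probabilities:", np.round(p, 3))
    print("entropy:", round(float(-np.sum(p * np.log(p))), 3))
    ```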
    In addition, the team used these tools to study some specific implications of protein degradation for humans and animals. For one, they examined how those rates change as muscles develop and adapt to starvation.

    “We found that starvation had the greatest impact on the intermediate group of proteins in muscular cells, which have a half-life of a few hours, causing the breakdown to shift and accelerate,” Collins said. “This discovery could have implications for cancer patients who experience cachexia, or muscle wasting due to the disease and its treatments.”
    They also explored how a shift in the breakdown of certain cell proteins contributes to neurodegenerative disease.
    “These diseases occur when waste proteins, which usually break down quickly, live longer than they should,” Collins said. “The brain becomes like a teenager’s bedroom, accumulating trash, and when you don’t clean it up, it becomes uninhabitable.”
    Dear affirmed the study’s value lies not only in what it revealed about cell protein degeneration, but also in giving scientists a new method to investigate cell activity with precision.
    “Our work provides a powerful new experimental method for quantifying protein metabolism in cells,” he said. “Its simplicity and rapidity make it particularly well-suited for studying metabolic changes.”
    Collins’s post-doctoral advisor at Harvard and a co-author of the article, the late Alfred Goldberg, was a pioneer in studying the life and death of proteins. Collins noted this study was built on nearly five decades of Goldberg’s research and his late-career collaboration with mathematicians from the University of Cambridge. After coming to MSU a year ago, Collins continued collaborating with his colleagues to complete the paper.
    “It’s an incredible honor to be published in PNAS, but it was also a lot of fun being part of this team,” Collins said. “And it’s very meaningful to see my former mentor’s body of work wrapped up and published.”

  • The neutrino’s quantum fuzziness is beginning to come into focus

    Neutrinos are known for funny business. Now scientists have set a new limit on a quantum trait responsible for the subatomic particles’ quirkiness: uncertainty.

    The lightweight particles morph from one variety of neutrino to another as they travel, a strange phenomenon called neutrino oscillation (SN: 10/6/15). That ability rests on quantum uncertainty, a sort of fuzziness intrinsic to the properties of quantum objects, such as their location or momentum. But despite the importance of quantum uncertainty, the uncertainty in the neutrino’s position has never been directly measured.
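    The textbook statement of that position and momentum fuzziness (the general principle the article refers to, not the new experimental limit) is the Heisenberg uncertainty relation:

    ```latex
    % The spreads in position and momentum cannot both be made arbitrarily small.
    \[
      \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
    \]
    ```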

  • Evolving market dynamics foster consumer inattention that can lead to risky purchases

    Researchers have developed a new theory of how changing market conditions can lead large numbers of otherwise cautious consumers to buy risky products such as subprime mortgages, cryptocurrency or even cosmetic surgery procedures.
    These changes can occur in categories of products that are generally low risk when they enter the market. As demand increases, more companies may enter the market and try to attract consumers with lower priced versions of the product that carry more risk. If the negative effects of that risk are not immediately noticeable, the market can evolve to keep consumers ignorant of the risks, said Michelle Barnhart, an associate professor in Oregon State University’s College of Business and a co-author of a new paper.
    “It’s not just the consumer’s fault. It’s not just the producer’s fault. It’s not just the regulator’s fault. All these things together create this dilemma,” Barnhart said. “Understanding how such a situation develops could help consumers, regulators and even producers make better decisions when they are faced with similar circumstances in the future.”
    The researchers’ findings were recently published in the Journal of Consumer Research. The paper’s lead author is Lena Pellandini-Simanyi of the University of Lugano in Switzerland.
    Barnhart, who studies consumer culture and market systems, has researched credit and debt in the U.S. Pellandini-Simanyi, a sociologist with expertise in consumer markets, has studied personal finance in European contexts. Together they analyzed the case of the Hungarian mortgage crisis to understand how people who generally view themselves as risk averse end up pursuing a high-risk product or service.
    To better understand the consumer mindset, the researchers conducted 47 interviews with Hungarian borrowers who took out low-risk mortgages in the local forint currency or in higher risk foreign currency as the Hungarian mortgage industry evolved between 2001 and 2010. They also conducted a larger survey of mortgage borrowers, interviewed 37 finance and mortgage industry experts and financial regulators and analyzed regulatory documents and parliamentary proceedings.
    They found patterns that led to mortgages becoming riskier over time, and social and marketplace changes that led consumers to enter a state of collective ignorance of increasing risks. In addition, they identified characteristics that encouraged these patterns; other markets with these characteristics are likely to develop in a similar way.

    “Typically, when there is a new product on the market, people are quite skeptical. The early adopters carefully examine this product, they become highly educated about it and do a lot of work to determine if the risk is too high,” Pellandini-Simanyi said. “If they deem the risk too high, they don’t buy it.”
    But if those early adopters use the new product or service successfully, the next round of consumers is likely to assume the product will work for them in a similar fashion without examining it in as much detail, even if the quality of the product has been reduced, the researchers noted.
    “Then everything starts to spiral, with quality dropping in the rush to meet consumer demand and maintain profits, and consumers relying more and more on social information that suggests this is a safe purchase without investigating how the risks might have changed,” Barnhart said.
    “It also can lead to a ‘prudence paradox,’ where the most risk averse people wait to enter the market until the end stages and end up buying super risky products. They exercise caution by waiting but they wait so long, they end up with the worst products.”
    The spiral is typically only broken through intervention, either through market collapse or regulation. For example, while cosmetic surgery is relatively safe, an increase in availability of inexpensive procedures at facilities that lacked sufficient equipment and expertise led to a rise in botched procedures until regulation caught up.
    “These findings demonstrate the power of social information,” Barnhart said. “In this environment, it’s very difficult for any individual consumer to pay attention to and assess risk because doing so is so far outside of the norm.”
    To protect themselves against collective ignorance, consumers should ensure that they are weighing their personal risk against others whose experiences are actually similar, Pellandini-Simanyi said.
    “Make sure this is an apples-to-apples comparison of products and the consumers’ circumstances,” she said.