More stories

  • How to use AI for discovery — without leading science astray

    Over the past decade, AI has permeated nearly every corner of science: Machine learning models have been used to predict protein structures, estimate the fraction of the Amazon rainforest that has been lost to deforestation and even classify faraway galaxies that might be home to exoplanets.
    But while AI can be used to speed scientific discovery — helping researchers make predictions about phenomena that may be difficult or costly to study in the real world — it can also lead scientists astray. In the same way that chatbots sometimes “hallucinate,” or make things up, machine learning models can sometimes present misleading or downright false results.
    In a paper published online today (Thursday, Nov. 9) in Science, researchers at the University of California, Berkeley, present a new statistical technique for safely using the predictions obtained from machine learning models to test scientific hypotheses.
    The technique, called prediction-powered inference (PPI), uses a small amount of real-world data to correct the output of large, general models — such as AlphaFold, which predicts protein structures — in the context of specific scientific questions.
    “These models are meant to be general: They can answer many questions, but we don’t know which questions they answer well and which questions they answer badly — and if you use them naively, without knowing which case you’re in, you can get bad answers,” said study author Michael Jordan, the Pehong Chen Distinguished Professor of electrical engineering and computer science and of statistics at UC Berkeley. “With PPI, you’re able to use the model, but correct for possible errors, even when you don’t know the nature of those errors at the outset.”
    The risk of hidden biases
    When scientists conduct experiments, they’re not just looking for a single answer — they want to obtain a range of plausible answers. This is done by calculating a “confidence interval,” which, in the simplest case, can be found by repeating an experiment many times and seeing how the results vary. More
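    For readers who want the mechanics, the simplest case of PPI estimates a population mean by combining a large batch of model predictions with a small "rectifier" term measured on gold-standard data. The Python sketch below follows that mean-estimation recipe; the function name and the array inputs are illustrative, not taken from the paper's code.

```python
import numpy as np
from scipy.stats import norm

def ppi_mean_ci(y_labeled, preds_labeled, preds_unlabeled, alpha=0.05):
    """Prediction-powered confidence interval for a population mean.

    y_labeled       : gold-standard outcomes for a small labeled sample (numpy array, size n)
    preds_labeled   : model predictions on that same labeled sample
    preds_unlabeled : model predictions on a large unlabeled sample (size N)
    """
    n, N = len(y_labeled), len(preds_unlabeled)
    rectifier = y_labeled - preds_labeled              # measured prediction error
    theta_pp = preds_unlabeled.mean() + rectifier.mean()
    # Uncertainty combines the spread of the predictions with the spread of the errors.
    se = np.sqrt(preds_unlabeled.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = norm.ppf(1 - alpha / 2)
    return theta_pp - z * se, theta_pp + z * se
```

    With a perfect model the rectifier is zero and the interval narrows as the number of predictions grows; with a biased model the rectifier corrects the bias, at the cost of a wider interval.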

  • New AI noise-canceling headphone technology lets wearers pick which sounds they hear

    Most anyone who’s used noise-canceling headphones knows that hearing the right noise at the right time can be vital. Someone might want to erase car horns when working indoors, but not when walking along busy streets. Yet people can’t choose what sounds their headphones cancel.
    Now, a team led by researchers at the University of Washington has developed deep-learning algorithms that let users pick which sounds filter through their headphones in real time. The team is calling the system “semantic hearing.” Headphones stream captured audio to a connected smartphone, which cancels all environmental sounds. Either through voice commands or a smartphone app, headphone wearers can select which sounds they want to include from 20 classes, such as sirens, baby cries, speech, vacuum cleaners and bird chirps. Only the selected sounds will be played through the headphones.
    The team presented its findings Nov. 1 at UIST ’23 in San Francisco. In the future, the researchers plan to release a commercial version of the system.
    “Understanding what a bird sounds like and extracting it from all other sounds in an environment requires real-time intelligence that today’s noise canceling headphones haven’t achieved,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “The challenge is that the sounds headphone wearers hear need to sync with their visual senses. You can’t be hearing someone’s voice two seconds after they talk to you. This means the neural algorithms must process sounds in under a hundredth of a second.”
    Because of this time crunch, the semantic hearing system must process sounds on a device such as a connected smartphone, instead of on more robust cloud servers. Additionally, because sounds from different directions arrive in people’s ears at different times, the system must preserve these delays and other spatial cues so people can still meaningfully perceive sounds in their environment.
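    The sketch below is not the team's system; it is a structural illustration of chunked real-time processing in which a placeholder function stands in for the neural separation network, the selected sound classes are user-chosen, and both channels are processed together so spatial cues survive. The chunk size and sample rate are assumptions, not the published values.

```python
import numpy as np

# Placeholder for the team's neural separation network: given a stereo chunk and the
# user's selected sound classes, return only those sounds. It passes audio through
# unchanged (or silence if nothing is selected) so the loop is runnable.
def separate_selected_sounds(stereo_chunk: np.ndarray, selected: set) -> np.ndarray:
    return stereo_chunk if selected else np.zeros_like(stereo_chunk)

SAMPLE_RATE = 44_100
CHUNK_SAMPLES = 256                        # about 5.8 ms of audio, inside a 10 ms budget
selected_classes = {"sirens", "speech"}    # chosen via the app or a voice command

def process_stream(stereo_in: np.ndarray) -> np.ndarray:
    """Process captured audio chunk by chunk; both channels stay together so
    interaural delays and other spatial cues are preserved."""
    out = np.zeros_like(stereo_in)
    for start in range(0, stereo_in.shape[1], CHUNK_SAMPLES):
        chunk = stereo_in[:, start:start + CHUNK_SAMPLES]
        out[:, start:start + CHUNK_SAMPLES] = separate_selected_sounds(chunk, selected_classes)
    return out

# Example: one second of silent stereo input, shape (channels, samples).
print(process_stream(np.zeros((2, SAMPLE_RATE))).shape)
```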
    Tested in environments such as offices, streets and parks, the system was able to extract sirens, bird chirps, alarms and other target sounds, while removing all other real-world noise. When 22 participants rated the system’s audio output for the target sound, they said that on average the quality improved compared to the original recording.
    In some cases, the system struggled to distinguish between sounds that share many properties, such as vocal music and human speech. The researchers note that training the models on more real-world data might improve these outcomes.
    Additional co-authors on the paper were Bandhav Veluri and Malek Itani, both UW doctoral students in the Allen School; Justin Chan, who completed this research as a doctoral student in the Allen School and is now at Carnegie Mellon University; and Takuya Yoshioka, director of research at AssemblyAI. More

  • ‘Indoor solar’ to power the Internet of Things

    From Wi-Fi-connected home security systems to smart toilets, the so-called Internet of Things brings personalization and convenience to devices that help run homes. But with that comes tangled electrical cords or batteries that need to be replaced. Now, researchers reporting in ACS Applied Energy Materials have brought solar panel technology indoors to power smart devices. They show which photovoltaic (PV) systems work best under cool white LEDs, a common type of indoor lighting.
    Indoor lighting differs from sunlight. Light bulbs are dimmer than the sun, and sunlight comprises ultraviolet, infrared and visible light, whereas indoor lights typically shine light from a narrower region of the spectrum. Scientists have found ways to harness power from sunlight, using PV solar panels, but those panels are not optimized for converting indoor light into electrical energy. Some next-generation PV materials, including perovskite minerals and organic films, have been tested with indoor light, but it’s not clear which are the most efficient at converting non-natural light into electricity, in part because studies have tested PVs made from different materials under different types of indoor lights. So, Uli Würfel and coworkers compared a range of different PV technologies under the same type of indoor lighting.
    The researchers obtained eight types of PV devices, ranging from traditional amorphous silicon to thin-film technologies such as dye-sensitized solar cells. They measured each material’s ability to convert light into electricity, first under simulated sunlight and then under a cool white LED light. Gallium indium phosphide PV cells showed the greatest efficiency under indoor light, converting nearly 40% of the light energy into electricity. As the researchers had expected, the gallium-containing material’s performance under sunlight was modest relative to the other materials tested due to its large band gap. A material called crystalline silicon demonstrated the best efficiency under sunlight but was average under indoor light.
    Gallium indium phosphide has not been used in commercially available PV cells yet, but this study points to its potential beyond solar power, the researchers say. However, they add that the gallium-containing materials are expensive and may not serve as a viable mass product to power smart home systems. In contrast, perovskite mineral and organic film PV cells are less expensive and do not have stability issues under indoor lighting conditions. Additionally, in the study, the researchers identified that part of the indoor light energy produced heat instead of electricity — information that will help optimize future PVs to power indoor devices.
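    As a rough illustration of how power conversion efficiency is computed (electrical output divided by the light power falling on the cell), the short calculation below uses hypothetical numbers chosen only to reproduce a roughly 40% indoor figure; the irradiance, cell area and output values are not the study's measurements.

```python
# Illustrative efficiency calculation; all numbers are hypothetical, not the study's data.
# Power conversion efficiency = electrical output / incident light power on the cell.
led_irradiance_w_per_m2 = 3.0        # indoor cool white LED lighting is far dimmer than sunlight
sun_irradiance_w_per_m2 = 1000.0     # standard AM1.5 solar irradiance, for comparison
cell_area_m2 = 1e-4                  # a 1 cm x 1 cm test cell

p_out_indoor_w = 1.2e-4              # electrical output measured under the LED (hypothetical)
p_out_sun_w = 2e-3                   # electrical output under simulated sunlight (hypothetical)

efficiency_indoor = p_out_indoor_w / (led_irradiance_w_per_m2 * cell_area_m2)
efficiency_sun = p_out_sun_w / (sun_irradiance_w_per_m2 * cell_area_m2)
print(f"indoor efficiency = {efficiency_indoor:.0%}")      # 40%
print(f"sunlight efficiency = {efficiency_sun:.0%}")       # 2%
```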
    The authors acknowledge funding from the Engineering and Physical Sciences Research Council (U.K.), the European Regional Development Fund, the Welsh European Funding Office, First Solar Inc., the German Federal Ministry for Economic Affairs and Energy, and the German Research Foundation. More

  • Scientists use quantum biology, AI to sharpen genome editing tool

    Scientists at Oak Ridge National Laboratory used their expertise in quantum biology, artificial intelligence and bioengineering to improve how CRISPR Cas9 genome editing tools work on organisms like microbes that can be modified to produce renewable fuels and chemicals.
    CRISPR is a powerful tool for bioengineering, used to modify genetic code to improve an organism’s performance or to correct mutations. The CRISPR Cas9 tool relies on a single, unique guide RNA that directs the Cas9 enzyme to bind with and cleave the corresponding targeted site in the genome. Existing models for computationally predicting effective guide RNAs for CRISPR tools were built on data from only a few model species and perform weakly and inconsistently when applied to microbes.
    “A lot of the CRISPR tools have been developed for mammalian cells, fruit flies or other model species. Few have been geared towards microbes where the chromosomal structures and sizes are very different,” said Carrie Eckert, leader of the Synthetic Biology group at ORNL. “We had observed that models for designing the CRISPR Cas9 machinery behave differently when working with microbes, and this research validates what we’d known anecdotally.”
    To improve the modeling and design of guide RNA, the ORNL scientists sought a better understanding of what’s going on at the most basic level in cell nuclei, where genetic material is stored. They turned to quantum biology, a field bridging molecular biology and quantum chemistry that investigates the effects that electronic structure can have on the chemical properties and interactions of nucleotides, the molecules that form the building blocks of DNA and RNA.
    The way electrons are distributed in the molecule influences reactivity and conformational stability, including the likelihood that the Cas9 enzyme-guide RNA complex will effectively bind with the microbe’s DNA, said Erica Prates, computational systems biologist at ORNL.
    The best guide through a forest of decisions
    The scientists built an explainable artificial intelligence model called iterative random forest. They trained the model on a dataset of around 50,000 guide RNAs targeting the genome of E. coli bacteria while also taking into account quantum chemical properties, in an approach described in the journal Nucleic Acids Research. More
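    The study's model is an iterative random forest (iRF); the sketch below swaps in a standard scikit-learn random forest and random placeholder features (one-hot-encoded sequence positions plus quantum-chemical descriptors) purely to show the shape of the workflow, not to reproduce the paper's results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical feature matrix: each row is one guide RNA, described by one-hot-encoded
# spacer positions plus a few quantum-chemical descriptors of its nucleotides.
# Random numbers stand in for the roughly 50,000-guide E. coli dataset.
rng = np.random.default_rng(0)
n_guides, n_features = 5_000, 20 * 4 + 8
X = rng.random((n_guides, n_features))
y = rng.random(n_guides)                 # placeholder editing-efficiency labels

# The paper uses an iterative random forest (iRF) for explainability; a plain
# random forest is used here only to show the overall shape of the workflow.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Feature importances hint at which sequence positions and quantum-chemical
# properties drive the predicted guide performance.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most informative feature indices:", top)
```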

  • Engineers are on a failure-finding mission

    From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.
    Now, MIT engineers have developed an approach that can be paired with any autonomous system to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes to the failures and suggest repairs to avoid system breakdowns.
    The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including small and large power grid networks, an aircraft collision avoidance system, a team of rescue drones, and a robotic manipulator. In each of the systems, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.
    The new algorithm takes a different tack from other automated searches, which are designed to spot the most severe failures in a system. These approaches, the team says, could miss subtler though significant vulnerabilities that the new algorithm can catch.
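    The sketch below is not the MIT team's algorithm; it is a generic Monte Carlo illustration of the underlying idea: sample operating conditions, record which ones break a toy system, and then nudge a design parameter until the sampled failure rate falls below a target. The toy controller, margin parameter and thresholds are assumptions made for illustration.

```python
import random

# Toy stand-in for an autonomous system under test: a controller "fails" whenever a
# disturbance exceeds its correction margin. This illustrates sampling-based failure
# finding in general, not the MIT algorithm.
def survives(disturbance: float, margin: float) -> bool:
    return abs(disturbance) <= margin

def sample_failures(margin: float, n_samples: int = 10_000) -> list:
    failures = []
    for _ in range(n_samples):
        d = random.gauss(0.0, 1.0)           # one sampled operating condition
        if not survives(d, margin):
            failures.append(d)
    return failures

margin = 1.5
print(f"failure rate at margin {margin}: {len(sample_failures(margin)) / 10_000:.1%}")

# A crude "repair": widen the margin until the sampled failure rate drops below a
# target, mimicking the idea of suggesting fixes alongside the failures found.
while len(sample_failures(margin, 2_000)) / 2_000 > 0.01:
    margin += 0.1
print(f"suggested margin: {margin:.1f}")
```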
    “In reality, there’s a whole range of messiness that could happen for these more complex systems,” says Charles Dawson, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We want to be able to trust these systems to drive us around, or fly an aircraft, or manage a power grid. It’s really important to know their limits and in what cases they’re likely to fail.”
    Dawson and Chuchu Fan, assistant professor of aeronautics and astronautics at MIT, are presenting their work this week at the Conference on Robot Learning.
    Sensitivity over adversaries
    In 2021, a major system meltdown in Texas got Fan and Dawson thinking. In February of that year, winter storms rolled through the state, bringing unexpectedly frigid temperatures that set off failures across the power grid. The crisis left more than 4.5 million homes and businesses without power for multiple days. The system-wide breakdown made for the worst energy crisis in Texas’ history. More

  • How human faces can teach androids to smile

    Robots able to display human emotion have long been a mainstay of science fiction stories. Now, Japanese researchers have been studying the mechanical details of real human facial expressions to bring those stories closer to reality.
    In a recent study published in the Mechanical Engineering Journal, a multi-institutional research team led by Osaka University has begun mapping out the intricacies of human facial movements. The researchers used 125 tracking markers attached to a person’s face to closely examine 44 different, singular facial actions, such as blinking or raising the corner of the mouth.
    Every facial expression comes with a variety of local deformations as muscles stretch and compress the skin. Even the simplest motions can be surprisingly complex. Our faces contain a collection of different tissues below the skin, from muscle fibers to fatty adipose, all working in concert to convey how we’re feeling. This includes everything from a big smile to a slight raise of the corner of the mouth. This level of detail is what makes facial expressions so subtle and nuanced, in turn making them challenging to replicate artificially. Until now, such replication has relied on much simpler measurements of the overall face shape and of the motion of points chosen on the skin before and after movements.
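    As a simplified illustration of how local deformation can be quantified from tracked markers (this is not the study's analysis pipeline), the sketch below uses a few hypothetical marker coordinates to compute displacement vectors and the stretch of the skin between neighbouring markers.

```python
import numpy as np

# Hypothetical 3D coordinates (in metres) for three tracked facial markers, before and
# after a single facial action such as raising a mouth corner. The study used 125
# markers; these values are illustrative only.
rest = np.array([[0.000, 0.000, 0.000],
                 [0.010, 0.000, 0.000],
                 [0.000, 0.010, 0.000]])
moved = np.array([[0.000, 0.000, 0.000],
                  [0.012, 0.001, 0.000],
                  [0.000, 0.011, 0.002]])

# Per-marker displacement vectors show where the skin travels.
displacement = moved - rest

# The stretch ratio between neighbouring markers approximates local deformation:
# above 1 the skin between them stretched, below 1 it compressed.
def stretch_ratio(a_rest, b_rest, a_moved, b_moved):
    return np.linalg.norm(b_moved - a_moved) / np.linalg.norm(b_rest - a_rest)

print("displacements (m):")
print(np.round(displacement, 4))
print("stretch between markers 0 and 1:",
      round(stretch_ratio(rest[0], rest[1], moved[0], moved[1]), 3))
```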
    “Our faces are so familiar to us that we don’t notice the fine details,” explains Hisashi Ishihara, main author of the study. “But from an engineering perspective, they are amazing information display devices. By looking at people’s facial expressions, we can tell when a smile is hiding sadness, or whether someone’s feeling tired or nervous.”
    Information gathered by this study can help researchers working with artificial faces, both created digitally on screens and, ultimately, the physical faces of android robots. Precise measurements of human faces, to understand all the tensions and compressions in facial structure, will allow these artificial expressions to appear both more accurate and natural.
    “The facial structure beneath our skin is complex,” says Akihiro Nakatani, senior author. “The deformation analysis in this study could explain how sophisticated expressions, which comprise both stretched and compressed skin, can result from deceptively simple facial actions.”
    This work has applications beyond robotics as well, such as improved facial recognition and medical diagnosis, the latter of which currently relies on doctors’ intuition to notice abnormalities in facial movement.
    So far, this study has only examined the face of one person, but the researchers hope to use their work as a jumping off point to gain a fuller understanding of human facial motions. As well as helping robots to both recognize and convey emotion, this research could also help to improve facial movements in computer graphics, like those used in movies and video games, helping to avoid the dreaded ‘uncanny valley’ effect. More

  • AI algorithm developed to measure muscle development, provide growth chart for children

    Leveraging artificial intelligence and the largest pediatric brain MRI dataset to date, researchers have now developed a growth chart for tracking muscle mass in growing children. The new study led by investigators from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, found that their artificial intelligence-based tool is the first to offer a standardized, accurate, and reliable way to assess and track indicators of muscle mass on routine MRI. Their results were published today in Nature Communications.
    “Pediatric cancer patients often struggle with low muscle mass, but there is no standard way to measure this. We were motivated to use artificial intelligence to measure temporalis muscle thickness and create a standardized reference,” said senior author Ben Kann, MD, a radiation oncologist in the Brigham’s Department of Radiation Oncology and Mass General Brigham’s Artificial Intelligence in Medicine Program. “Our methodology produced a growth chart that we can use to track muscle thickness within developing children quickly and in real-time. Through this, we can determine whether they are growing within an ideal range.”
    Lean muscle mass in humans has been linked to quality of life and daily functional status, and it is an indicator of overall health and longevity. Individuals with conditions such as sarcopenia or low lean muscle mass are at risk of dying earlier or of being prone to various diseases that can affect their quality of life. Historically, there has not been a widespread or practical way to track lean muscle mass, with body mass index (BMI) serving as a default form of measurement. The weakness in using BMI is that while it considers weight, it does not indicate how much of that weight is muscle. For decades, scientists have known that the thickness of the temporalis muscle outside the skull is associated with lean muscle mass in the body. However, the thickness of this muscle has been difficult to measure in real time in the clinic, and there was no way to distinguish normal from abnormal thickness. Traditional methods have typically involved manual measurements, but these practices are time consuming and are not standardized.
    To address this, the research team applied their deep learning pipeline to MRI scans of patients with pediatric brain tumors treated at Boston Children’s Hospital/Dana-Farber Cancer Institute in collaboration with Boston Children’s Radiology Department. The team analyzed 23,852 normal healthy brain MRIs from individuals aged 4 through 35 to calculate temporalis muscle thickness (iTMT) and develop normal-reference growth charts for the muscle. MRI results were aggregated to create sex-specific iTMT normal growth charts with percentiles and ranges. They found that iTMT is accurate for a wide range of patients and is comparable to the analysis of trained human experts.
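    As a simplified illustration of how such a chart can be tabulated (the paper's modelling is more involved), the sketch below builds sex-specific percentile curves by age from synthetic measurements; the variable names, sample size and toy growth trend are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the aggregated dataset: one temporalis muscle thickness (TMT)
# measurement per scan, with age and sex. Values are illustrative only.
rng = np.random.default_rng(1)
n = 5_000
age = rng.integers(4, 36, size=n)                       # ages 4 through 35, as in the study
sex = rng.choice(["F", "M"], size=n)
tmt_mm = 5 + 0.15 * age + rng.normal(0, 1.0, size=n)    # toy growth trend plus noise

df = pd.DataFrame({"age": age, "sex": sex, "tmt_mm": tmt_mm})

# Sex-specific percentile table by age: the skeleton of a growth chart. A new patient's
# measurement can then be placed against these curves, much like height and weight charts.
chart = (df.groupby(["sex", "age"])["tmt_mm"]
           .quantile([0.05, 0.50, 0.95])
           .unstack())
print(chart.head())
```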
    “The idea is that these growth charts can be used to determine if a patient’s muscle mass is within a normal range, in a similar way that height and weight growth charts are typically used in the doctor’s office,” said Kann.
    In essence, the new method could be used to assess patients who are already receiving routine brain MRIs that track medical conditions such as pediatric cancers and neurodegenerative diseases. The team hopes that the ability to monitor the temporalis muscle instantly and quantitatively will enable clinicians to quickly intervene for patients who demonstrate signs of muscle loss, and thus prevent the negative effects of sarcopenia and low muscle mass.
    One of the limitations lies in the algorithm’s reliance on scan quality: suboptimal resolution can affect measurements and the interpretation of results. Another drawback is the limited number of MRI datasets available outside the United States and Europe, which makes it harder to form an accurate global picture.
    “In the future, we may want to explore if the utility of iTMT will be high enough to justify getting MRIs on a regular basis for more patients,” said Kann. “We plan to improve model performance by training it on more challenging and variable cases. Future applications of iTMT could allow us to track and predict morbidity, as well as reveal critical physiologic states in patients that require intervention.” More

  • 21st century Total Wars will enlist technologies in ways we don’t yet understand

    The war in Ukraine is not only the largest European land war since the Second World War. It is also the first large-scale shooting war between two technologically advanced countries to also be fought in cyberspace.
    And each country’s technological and information prowess is becoming critical to the fight.
    Especially for outmanned and outgunned Ukraine, the conflict has developed into a Total War.
    A Total War is one in which all the resources of a country, including its people, are seen as part of the war effort. Civilians become military targets, which inevitably leads to higher casualties. Non-offensive infrastructure is also attacked.
    As new technologies like artificial intelligence, unmanned aerial vehicles (UAVs) such as drones, and so-called ‘cyberweapons’ such as malware and Internet-based disinformation campaigns become integral to our daily lives, researchers are working to grasp the role they will play in warfare.
    Jordan Richard Schoenherr, an assistant professor in the Department of Psychology, writes in a new paper that our understanding of warfare is now outdated. Our grasp of the role sociotechnical systems — meaning the way technology relates to human organizational behaviour in a complex, interdependent system — play in strategic thinking is still far from fully developed. Understanding their potential and their vulnerabilities will be an important task for planners in the years ahead.
    “We need to think about the networks of people and technology — that is what a sociotechnical system is,” Schoenherr explains. More