More stories

  • Scientists use quantum biology, AI to sharpen genome editing tool

    Scientists at Oak Ridge National Laboratory used their expertise in quantum biology, artificial intelligence and bioengineering to improve how CRISPR Cas9 genome editing tools work on organisms like microbes that can be modified to produce renewable fuels and chemicals.
    CRISPR is a powerful tool for bioengineering, used to modify genetic code to improve an organism’s performance or to correct mutations. The CRISPR Cas9 tool relies on a single, unique guide RNA that directs the Cas9 enzyme to bind with and cleave the corresponding targeted site in the genome. Existing models to computationally predict effective guide RNAs for CRISPR tools were built on data from only a few model species, and they perform weakly and inconsistently when applied to microbes.
    “A lot of the CRISPR tools have been developed for mammalian cells, fruit flies or other model species. Few have been geared towards microbes where the chromosomal structures and sizes are very different,” said Carrie Eckert, leader of the Synthetic Biology group at ORNL. “We had observed that models for designing the CRISPR Cas9 machinery behave differently when working with microbes, and this research validates what we’d known anecdotally.”
    To improve the modeling and design of guide RNA, the ORNL scientists sought a better understanding of what’s going on at the most basic level in cell nuclei, where genetic material is stored. They turned to quantum biology, a field bridging molecular biology and quantum chemistry that investigates the effects that electronic structure can have on the chemical properties and interactions of nucleotides, the molecules that form the building blocks of DNA and RNA.
    The way electrons are distributed in the molecule influences reactivity and conformational stability, including the likelihood that the Cas9 enzyme-guide RNA complex will effectively bind with the microbe’s DNA, said Erica Prates, computational systems biologist at ORNL.
    The best guide through a forest of decisions
    The scientists built an explainable artificial intelligence model called an iterative random forest. They trained the model on a dataset of around 50,000 guide RNAs targeting the genome of E. coli bacteria, while also taking into account quantum chemical properties, in an approach described in the journal Nucleic Acids Research.
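    As a rough illustration of this kind of modeling, the sketch below trains a standard random forest on one-hot-encoded guide sequences plus placeholder quantum-chemical descriptors. The data, descriptor names and the plain (non-iterative) random forest are stand-ins, not the ORNL pipeline.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      BASES = "ACGT"

      def featurize(guide_seq, quantum_desc):
          # One-hot encode the 20-nt guide and append per-guide quantum-chemical
          # descriptors (placeholders for electronic-structure properties).
          one_hot = np.zeros((len(guide_seq), 4))
          for i, base in enumerate(guide_seq):
              one_hot[i, BASES.index(base)] = 1.0
          return np.concatenate([one_hot.ravel(), quantum_desc])

      # Hypothetical training data: guide sequences, descriptors, measured efficiency.
      guides = ["ACGTACGTACGTACGTACGT", "GGCAGGCAGGCAGGCAGGCA", "TTACTTACTTACTTACTTAC"]
      q_desc = [[0.12, 0.80, 0.33], [0.55, 0.21, 0.90], [0.47, 0.65, 0.10]]
      efficiency = [0.81, 0.24, 0.52]

      X = np.array([featurize(g, q) for g, q in zip(guides, q_desc)])
      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X, efficiency)

      # Rank a candidate guide for a new target by its predicted cutting efficiency.
      candidate = featurize("ACGTGGCATTACACGTGGCA", [0.30, 0.50, 0.40])
      print(model.predict(candidate.reshape(1, -1)))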

  • Engineers are on a failure-finding mission

    From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.
    Now, MIT engineers have developed an approach that can be paired with any autonomous system to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes for those failures and suggest repairs to avoid system breakdowns.
    The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including small and large power grid networks, an aircraft collision avoidance system, a team of rescue drones, and a robotic manipulator. In each system, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.
    The new algorithm takes a different tack from other automated searches, which are designed to spot the most severe failures in a system. These approaches, the team says, could miss subtler though significant vulnerabilities that the new algorithm can catch.
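    A toy sketch of that sampling idea, under heavy simplification (a one-line simulator and plain random sampling stand in for the team’s actual algorithm), might look like this:
      import random

      def safety_margin(disturbance, repair=0.0):
          # Hypothetical simulator: a positive margin means the system survives the
          # disturbance, a negative one means it fails. 'repair' adds extra capacity.
          return (1.0 + repair) - disturbance

      # 1) Sample a range of plausible disturbances rather than only the worst case.
      random.seed(0)
      disturbances = [random.gauss(0.8, 0.3) for _ in range(1000)]
      failures = [d for d in disturbances if safety_margin(d) < 0]
      print(f"{len(failures)} failure scenarios found in {len(disturbances)} samples")

      # 2) Search for the smallest repair that eliminates the sampled failures.
      for repair in [0.1 * k for k in range(1, 21)]:
          if all(safety_margin(d, repair) >= 0 for d in failures):
              print(f"suggested repair: add {repair:.1f} units of capacity")
              break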
    “In reality, there’s a whole range of messiness that could happen for these more complex systems,” says Charles Dawson, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We want to be able to trust these systems to drive us around, or fly an aircraft, or manage a power grid. It’s really important to know their limits and in what cases they’re likely to fail.”
    Dawson and Chuchu Fan, assistant professor of aeronautics and astronautics at MIT, are presenting their work this week at the Conference on Robotic Learning.
    Sensitivity over adversaries
    In 2021, a major system meltdown in Texas got Fan and Dawson thinking. In February of that year, winter storms rolled through the state, bringing unexpectedly frigid temperatures that set off failures across the power grid. The crisis left more than 4.5 million homes and businesses without power for multiple days. The system-wide breakdown made for the worst energy crisis in Texas’ history.

  • How human faces can teach androids to smile

    Robots able to display human emotion have long been a mainstay of science fiction stories. Now, Japanese researchers have been studying the mechanical details of real human facial expressions to bring those stories closer to reality.
    In a recent study published in the Mechanical Engineering Journal, a multi-institutional research team led by Osaka University has begun mapping out the intricacies of human facial movements. The researchers used 125 tracking markers attached to a person’s face to closely examine 44 different, singular facial actions, such as blinking or raising the corner of the mouth.
    Every facial expression comes with a variety of local deformations as muscles stretch and compress the skin. Even the simplest motions can be surprisingly complex. Our faces contain a collection of different tissues below the skin, from muscle fibers to fatty adipose, all working in concert to convey how we’re feeling. This includes everything from a big smile to a slight raise of the corner of the mouth. This level of detail is what makes facial expressions so subtle and nuanced, in turn making them challenging to replicate artificially. Until now, artificial replication has relied on much simpler measurements of overall face shape and of the motion of selected points on the skin before and after movements.
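    The kind of local stretch and compression being measured can be sketched in a few lines. The marker coordinates below are made up; the actual study tracked 125 markers with its own deformation analysis.
      import numpy as np

      # Marker positions (x, y, z in mm) at rest and during a smile-like action.
      rest   = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 10.0, 1.0]])
      action = np.array([[0.0, 0.0, 0.0], [11.5, 0.5, 0.0], [ 9.0, 11.0, 1.5]])

      def local_strain(i, j):
          # Relative length change of the skin segment between markers i and j:
          # positive means the skin stretched, negative means it compressed.
          l0 = np.linalg.norm(rest[j] - rest[i])
          l1 = np.linalg.norm(action[j] - action[i])
          return (l1 - l0) / l0

      for i, j in [(0, 1), (1, 2), (0, 2)]:
          print(f"markers {i}-{j}: strain = {local_strain(i, j):+.1%}")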
    “Our faces are so familiar to us that we don’t notice the fine details,” explains Hisashi Ishihara, main author of the study. “But from an engineering perspective, they are amazing information display devices. By looking at people’s facial expressions, we can tell when a smile is hiding sadness, or whether someone’s feeling tired or nervous.”
    Information gathered by this study can help researchers working with artificial faces, both created digitally on screens and, ultimately, the physical faces of android robots. Precise measurements of human faces, to understand all the tensions and compressions in facial structure, will allow these artificial expressions to appear both more accurate and natural.
    “The facial structure beneath our skin is complex,” says Akihiro Nakatani, senior author. “The deformation analysis in this study could explain how sophisticated expressions, which comprise both stretched and compressed skin, can result from deceivingly simple facial actions.”
    This work also has applications beyond robotics, such as improved facial recognition and medical diagnosis, the latter of which currently relies on a doctor’s intuition to notice abnormalities in facial movement.
    So far, this study has only examined the face of one person, but the researchers hope to use their work as a jumping-off point to gain a fuller understanding of human facial motions. As well as helping robots to both recognize and convey emotion, this research could also help to improve facial movements in computer graphics, like those used in movies and video games, helping to avoid the dreaded ‘uncanny valley’ effect.

  • AI algorithm developed to measure muscle development, provide growth chart for children

    Leveraging artificial intelligence and the largest pediatric brain MRI dataset to date, researchers have now developed a growth chart for tracking muscle mass in growing children. The new study led by investigators from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, found that their artificial intelligence-based tool is the first to offer a standardized, accurate, and reliable way to assess and track indicators of muscle mass on routine MRI. Their results were published today in Nature Communications.
    “Pediatric cancer patients often struggle with low muscle mass, but there is no standard way to measure this. We were motivated to use artificial intelligence to measure temporalis muscle thickness and create a standardized reference,” said senior author Ben Kann, MD, a radiation oncologist in the Brigham’s Department of Radiation Oncology and Mass General Brigham’s Artificial Intelligence in Medicine Program. “Our methodology produced a growth chart that we can use to track muscle thickness within developing children quickly and in real-time. Through this, we can determine whether they are growing within an ideal range.”
    Lean muscle mass in humans has been linked to quality of life and daily functional status, and it is an indicator of overall health and longevity. Individuals with conditions such as sarcopenia or low lean muscle mass are at risk of dying earlier and are more prone to various diseases that can affect their quality of life. Historically, there has not been a widespread or practical way to track lean muscle mass, with body mass index (BMI) serving as a default form of measurement. The weakness of BMI is that while it considers weight, it does not indicate how much of that weight is muscle. For decades, scientists have known that the thickness of the temporalis muscle outside the skull is associated with lean muscle mass in the body. However, this muscle has been difficult to measure in real time in the clinic, and there was no standard way to distinguish normal from abnormal thickness. Traditional methods have typically involved manual measurements, but these practices are time-consuming and not standardized.
    To address this, the research team applied their deep learning pipeline to MRI scans of patients with pediatric brain tumors treated at Boston Children’s Hospital/Dana-Farber Cancer Institute in collaboration with Boston Children’s Radiology Department. The team analyzed 23,852 normal healthy brain MRIs from individuals aged 4 through 35 to calculate temporalis muscle thickness (iTMT) and develop normal-reference growth charts for the muscle. MRI results were aggregated to create sex-specific iTMT normal growth charts with percentiles and ranges. They found that iTMT is accurate for a wide range of patients and is comparable to the analysis of trained human experts.
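    The growth-chart construction can be sketched roughly as follows, with synthetic thickness values standing in for the 23,852 real measurements and simple percentile bins standing in for the published methodology:
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      n = 5000
      data = pd.DataFrame({
          "age": rng.integers(4, 36, n),           # years
          "sex": rng.choice(["F", "M"], n),
          "tmt_mm": rng.normal(7.0, 1.5, n),       # placeholder muscle thickness values
      })

      # Sex- and age-specific percentile curves, analogous to a growth chart.
      chart = (
          data.groupby(["sex", "age"])["tmt_mm"]
              .quantile([0.05, 0.50, 0.95])
              .unstack()                           # columns: 5th, 50th, 95th percentiles
      )
      print(chart.head())

      # Placing a new patient on the chart.
      patient = {"sex": "F", "age": 10, "tmt_mm": 5.2}
      row = chart.loc[(patient["sex"], patient["age"])]
      print("below 5th percentile" if patient["tmt_mm"] < row[0.05] else "within range")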
    “The idea is that these growth charts can be used to determine if a patient’s muscle mass is within a normal range, in a similar way that height and weight growth charts are typically used in the doctor’s office,” said Kann.
    In essence, the new method could be used to assess patients who are already receiving routine brain MRIs that track medical conditions such as pediatric cancers and neurodegenerative diseases. The team hopes that the ability to monitor the temporalis muscle instantly and quantitatively will enable clinicians to quickly intervene for patients who demonstrate signs of muscle loss, and thus prevent the negative effects of sarcopenia and low muscle mass.
    One limitation lies in the algorithm’s reliance on scan quality: suboptimal resolution can affect measurements and the interpretation of results. Another drawback is the limited number of MRI datasets available outside of the United States and Europe that could give an accurate global picture.
    “In the future, we may want to explore if the utility of iTMT will be high enough to justify getting MRIs on a regular basis for more patients,” said Kann. “We plan to improve model performance by training it on more challenging and variable cases. Future applications of iTMT could allow us to track and predict morbidity, as well as reveal critical physiologic states in patients that require intervention.”

  • 21st century Total Wars will enlist technologies in ways we don’t yet understand

    The war in Ukraine is not only the largest European land war since the Second World War. It is also the first large-scale shooting war between two technologically advanced countries to be fought in cyberspace as well.
    And each country’s technological and information prowess is becoming critical to the fight.
    Especially for outmanned and outgunned Ukraine, the conflict has developed into a Total War.
    A Total War is one in which all the resources of a country, including its people, are seen as part of the war effort. Civilians become military targets, which inevitably leads to higher casualties. Non-offensive infrastructure is also attacked.
    As new technologies such as artificial intelligence, unmanned aerial vehicles (UAVs) and so-called ‘cyberweapons’ like malware and internet-based disinformation campaigns become integral to our daily lives, researchers are working to grasp the role they will play in warfare.
    Jordan Richard Schoenherr, an assistant professor in the Department of Psychology, writes in a new paper that our understanding of warfare is now outdated. Our understanding of the role that sociotechnical systems — meaning the way technology relates to human organizational behaviour in a complex, interdependent system — play in strategic thinking is still far from complete. Understanding their potential and their vulnerabilities will be an important task for planners in the years ahead.
    “We need to think about the networks of people and technology — that is what a sociotechnical system is,” Schoenherr explains.

  • Machine learning gives users ‘superhuman’ ability to open and control tools in virtual reality

    Researchers have developed a virtual reality application where a range of 3D modelling tools can be opened and controlled using just the movement of a user’s hand.
    The researchers, from the University of Cambridge, used machine learning to develop ‘HotGestures’ — analogous to the hot keys used in many desktop applications.
    HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.
    The idea of being able to open and control tools in virtual reality has been a movie trope for decades, but the researchers say that this is the first time such a ‘superhuman’ ability has been made possible. The results are reported in the journal IEEE Transactions on Visualization and Computer Graphics.
    Virtual reality (VR) and related applications have been touted as game-changers for years, but outside of gaming, their promise has not fully materialised. “Users gain some qualities when using VR, but very few people want to use it for an extended period of time,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, who led the research. “Beyond the visual fatigue and ergonomic issues, VR isn’t really offering anything you can’t get in the real world.”
    Most users of desktop software will be familiar with the concept of hot keys — command shortcuts such as ctrl-c to copy and ctrl-v to paste. While these shortcuts eliminate the need to open a menu to find the right tool or command, they rely on the user having the correct command memorised.
    “We wanted to take the concept of hot keys and turn it into something more meaningful for virtual reality — something that wouldn’t rely on the user having a shortcut in their head already,” said Kristensson, who is also co-Director of the Centre for Human-Inspired Artificial Intelligence.
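    As a rough illustration of the underlying idea (not the authors’ actual model), a recognizer can map a short window of hand-joint motion to the tool it opens; everything below, from the synthetic features to the nearest-neighbour classifier, is a stand-in:
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      TOOLS = ["scissors", "pen", "spray", "cube"]

      def fake_window(tool_idx, frames=30, joints=21):
          # A flattened (frames x joints x 3) hand trajectory with a crude,
          # class-dependent signature so the gesture classes are separable.
          window = np.random.randn(frames * joints * 3) * 0.1
          window[tool_idx::len(TOOLS)] += 1.0
          return window

      np.random.seed(0)
      X = np.array([fake_window(i % len(TOOLS)) for i in range(200)])
      y = np.array([TOOLS[i % len(TOOLS)] for i in range(200)])
      clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

      # At run time, the latest window of hand tracking selects and opens a tool.
      new_window = fake_window(1)
      print("opening tool:", clf.predict(new_window.reshape(1, -1))[0])   # likely 'pen'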

  • Neuromorphic computing will be great… if hardware can handle the workload

    Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?
    “The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.
    A joint team of physicists from Purdue University, the University of California San Diego (UCSD) and the École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believes it may have discovered a way to rework the hardware by mimicking the synapses of the human brain. The team published its findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials; the paper is featured on the back cover of the journal’s October 2023 edition.
    New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”
    Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. Neurons have small gaps at their ends, called synapses, that allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.
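    In software terms, the neuron and synapse behaviors in question can be sketched with a simple leaky integrate-and-fire model; the point of neuromorphic hardware is to realize dynamics like these directly in the material rather than simulate them:
      def simulate_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
          # Leaky integrate-and-fire neuron driven through a single synapse.
          # weight    -- synaptic strength (what a memory element would store)
          # leak      -- fraction of membrane potential retained each time step
          # threshold -- potential at which the neuron fires and resets
          potential, output = 0.0, []
          for spike in input_spikes:
              potential = leak * potential + weight * spike
              if potential >= threshold:
                  output.append(1)       # fire...
                  potential = 0.0        # ...and reset
              else:
                  output.append(0)
          return output

      print(simulate_neuron([1, 0, 1, 1, 0, 1, 1, 1]))   # output spike train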
    “The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon — ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”
    The team relied on a type of non-volatile memory, recently discovered in vanadium oxides, that is driven by repeated partial temperature cycling through the insulator-to-metal transition.

  • Lightening the load: Researchers develop autonomous electrochemistry robot

    Researchers at the Beckman Institute for Advanced Science and Technology developed an automated laboratory robot to run complex electrochemical experiments and analyze data.
    With affordability and accessibility in mind, the researchers collaboratively created a benchtop robot that rapidly performs electrochemistry. Aptly named the Electrolab, this instrument greatly reduces the effort and time needed for electrochemical studies by automating many basic and repetitive laboratory tasks.
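    The kind of repetitive workflow being automated can be sketched as a simple queue-and-log loop. The Potentiostat class and its method below are illustrative stand-ins, not the Electrolab’s actual interface:
      import csv
      import time

      class Potentiostat:
          # Stand-in for the instrument driver; returns fake (potential, current) pairs.
          def run_cyclic_voltammetry(self, sample_id, scan_rate_mv_s):
              return [(v / 100.0, 1e-6 * v * scan_rate_mv_s) for v in range(-50, 51)]

      queue = [("sample_A", 10), ("sample_A", 50), ("sample_B", 10), ("sample_B", 50)]
      instrument = Potentiostat()

      with open("electrolab_log.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["sample", "scan_rate_mV_s", "potential_V", "current_A"])
          for sample, rate in queue:                # steps a person would repeat by hand
              for potential, current in instrument.run_cyclic_voltammetry(sample, rate):
                  writer.writerow([sample, rate, potential, current])
              time.sleep(0.1)                       # stand-in for rinse/solution exchange
      print("finished", len(queue), "automated experiments")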
    The Electrolab can be used to explore energy storage materials and chemical reactions that promote the use of alternative and renewable power sources like solar or wind energy, which are essential to combating climate change.
    “We hope the Electrolab will allow new discoveries in energy storage while helping us share knowledge and data with other electrochemists — and non-electrochemists! We want them to be able to try things they couldn’t before,” said Joaquín Rodríguez-López, a professor in the Department of Chemistry at the University of Illinois Urbana-Champaign.
    The interdisciplinary team was co-led by Rodríguez-López and Charles Schroeder, the James Economy professor in the Department of Materials Science and Engineering and a professor of chemical and biomolecular engineering at UIUC. Their work appears in the journal Device.
    Electrochemistry is the study of electricity and its relation to chemistry. Chemical reactions release energy that can be converted into electricity — batteries used to power remote controllers or electric vehicles are perfect examples of this phenomenon.
    In the opposite direction, electricity can also be used to drive chemical reactions. Electrochemistry can provide a green and sustainable alternative to many reactions that would otherwise require the use of harsh chemicals, and it can even drive chemical reactions that convert greenhouse gases such as carbon dioxide into chemicals that are useful in other industries. These are relatively simple demonstrations of electrochemistry, but the growing demand to generate and store massive amounts of energy on a much larger scale is currently a prominent challenge.