More stories

  • How human faces can teach androids to smile

    Robots able to display human emotion have long been a mainstay of science fiction stories. Now, Japanese researchers have been studying the mechanical details of real human facial expressions to bring those stories closer to reality.
    In a recent study published in the Mechanical Engineering Journal, a multi-institutional research team led by Osaka University has begun mapping out the intricacies of human facial movements. The researchers attached 125 tracking markers to a person’s face to closely examine 44 different singular facial actions, such as blinking or raising the corner of the mouth.
    Every facial expression comes with a variety of local deformations as muscles stretch and compress the skin. Even the simplest motions can be surprisingly complex. Our faces contain a collection of different tissues below the skin, from muscle fibers to fatty adipose, all working in concert to convey how we’re feeling. This includes everything from a big smile to a slight raise of the corner of the mouth. This level of detail makes facial expressions subtle and nuanced, and in turn challenging to replicate artificially. Until now, artificial replication has relied on much simpler measurements: overall face shape, and the motion of selected points on the skin before and after movement.
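    The marker-based measurement idea can be sketched in a few lines: given marker positions at rest and during an expression, the relative length change between neighboring markers tells you where skin is stretching or compressing. This is a minimal illustration only; the marker coordinates and pairings below are invented, not the study’s 125-marker layout.

```python
# Hypothetical sketch of estimating local skin deformation from tracked
# facial markers. All coordinates and marker pairs are made-up examples.
import numpy as np

def local_strains(rest, deformed, pairs):
    """Relative length change for each marker pair
    (positive = skin stretched, negative = compressed)."""
    rest = np.asarray(rest, dtype=float)
    deformed = np.asarray(deformed, dtype=float)
    strains = {}
    for i, j in pairs:
        d0 = np.linalg.norm(rest[i] - rest[j])
        d1 = np.linalg.norm(deformed[i] - deformed[j])
        strains[(i, j)] = (d1 - d0) / d0
    return strains

# Three markers near a mouth corner, at rest and mid-smile (synthetic data).
rest = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
smile = [[0.0, 0.0], [1.2, 0.1], [0.0, 0.9]]
print(local_strains(rest, smile, [(0, 1), (0, 2)]))
```

    A real pipeline would aggregate such pairwise strains across the whole marker mesh to build the deformation maps the article describes.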
    “Our faces are so familiar to us that we don’t notice the fine details,” explains Hisashi Ishihara, main author of the study. “But from an engineering perspective, they are amazing information display devices. By looking at people’s facial expressions, we can tell when a smile is hiding sadness, or whether someone’s feeling tired or nervous.”
    Information gathered by this study can help researchers working with artificial faces, both created digitally on screens and, ultimately, the physical faces of android robots. Precise measurements of human faces, to understand all the tensions and compressions in facial structure, will allow these artificial expressions to appear both more accurate and natural.
    “The facial structure beneath our skin is complex,” says Akihiro Nakatani, senior author. “The deformation analysis in this study could explain how sophisticated expressions, which comprise both stretched and compressed skin, can result from deceivingly simple facial actions.”
    This work also has applications beyond robotics, for example in improved facial recognition or medical diagnosis, the latter of which currently relies on a doctor’s intuition to notice abnormalities in facial movement.
    So far, this study has only examined the face of one person, but the researchers hope to use their work as a jumping-off point to gain a fuller understanding of human facial motions. As well as helping robots to both recognize and convey emotion, this research could also help to improve facial movements in computer graphics, like those used in movies and video games, helping to avoid the dreaded ‘uncanny valley’ effect.

  • AI algorithm developed to measure muscle development, provide growth chart for children

    Leveraging artificial intelligence and the largest pediatric brain MRI dataset to date, researchers have now developed a growth chart for tracking muscle mass in growing children. The new study led by investigators from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, found that their artificial intelligence-based tool is the first to offer a standardized, accurate, and reliable way to assess and track indicators of muscle mass on routine MRI. Their results were published today in Nature Communications.
    “Pediatric cancer patients often struggle with low muscle mass, but there is no standard way to measure this. We were motivated to use artificial intelligence to measure temporalis muscle thickness and create a standardized reference,” said senior author Ben Kann, MD, a radiation oncologist in the Brigham’s Department of Radiation Oncology and Mass General Brigham’s Artificial Intelligence in Medicine Program. “Our methodology produced a growth chart that we can use to track muscle thickness within developing children quickly and in real-time. Through this, we can determine whether they are growing within an ideal range.”
    Lean muscle mass has been linked to quality of life and daily functional status, and is an indicator of overall health and longevity. Individuals with conditions such as sarcopenia, or low lean muscle mass, are at risk of dying earlier or of being prone to various diseases that affect their quality of life. Historically, there has not been a widespread or practical way to track lean muscle mass, with body mass index (BMI) serving as a default form of measurement. The weakness of BMI is that while it considers weight, it does not indicate how much of that weight is muscle. For decades, scientists have known that the thickness of the temporalis muscle outside the skull is associated with lean muscle mass in the body. However, this muscle’s thickness has been difficult to measure in real-time in the clinic, and there was no way to distinguish normal from abnormal thickness. Traditional methods have typically involved manual measurements, but these practices are time-consuming and not standardized.
    To address this, the research team applied their deep learning pipeline to MRI scans of patients with pediatric brain tumors treated at Boston Children’s Hospital/Dana-Farber Cancer Institute in collaboration with Boston Children’s Radiology Department. The team analyzed 23,852 normal healthy brain MRIs from individuals aged 4 through 35 to calculate temporalis muscle thickness (iTMT) and develop normal-reference growth charts for the muscle. MRI results were aggregated to create sex-specific iTMT normal growth charts with percentiles and ranges. They found that iTMT is accurate for a wide range of patients and is comparable to the analysis of trained human experts.
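    The growth-chart construction step can be sketched simply: bin measurements by age, then compute percentile values per bin, separately for each sex. The data and bin widths below are synthetic placeholders, not the study’s 23,852-scan dataset or its actual methodology.

```python
# Illustrative sketch of building percentile-based growth charts for a
# muscle-thickness measurement by age. All data here are synthetic.
import numpy as np

def growth_chart(ages, values, age_bins, percentiles=(5, 50, 95)):
    """Return {bin_start: {percentile: value}} from (age, measurement) pairs."""
    ages = np.asarray(ages)
    values = np.asarray(values)
    chart = {}
    for lo, hi in zip(age_bins[:-1], age_bins[1:]):
        mask = (ages >= lo) & (ages < hi)
        if mask.any():
            chart[lo] = {p: float(np.percentile(values[mask], p))
                         for p in percentiles}
    return chart

# Synthetic cohort: thickness rises gently with age, plus noise.
rng = np.random.default_rng(0)
ages = rng.uniform(4, 35, 500)
thickness = 5 + 0.1 * ages + rng.normal(0, 0.5, 500)
chart = growth_chart(ages, thickness, list(range(4, 36, 4)))
```

    A patient’s new measurement can then be located against the percentile bands for their age bin, analogous to reading a height-for-age chart.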
    “The idea is that these growth charts can be used to determine if a patient’s muscle mass is within a normal range, in a similar way that height and weight growth charts are typically used in the doctor’s office,” said Kann.
    In essence, the new method could be used to assess patients who are already receiving routine brain MRIs that track medical conditions such as pediatric cancers and neurodegenerative diseases. The team hopes that the ability to monitor the temporalis muscle instantly and quantitatively will enable clinicians to quickly intervene for patients who demonstrate signs of muscle loss, and thus prevent the negative effects of sarcopenia and low muscle mass.
    One of the limitations lies in the algorithm’s reliance on scan quality: suboptimal resolution can affect measurements and the interpretation of results. Another drawback is the limited amount of MRI datasets available outside of the United States and Europe that can give an accurate global picture.
    “In the future, we may want to explore if the utility of iTMT will be high enough to justify getting MRIs on a regular basis for more patients,” said Kann. “We plan to improve model performance by training it on more challenging and variable cases. Future applications of iTMT could allow us to track and predict morbidity, as well as reveal critical physiologic states in patients that require intervention.”

  • 21st century Total Wars will enlist technologies in ways we don’t yet understand

    The war in Ukraine is not only the largest European land war since the Second World War. It is also the first large-scale shooting war between two technologically advanced countries to also be fought in cyberspace.
    And each country’s technological and information prowess is becoming critical to the fight.
    Especially for outmanned and outgunned Ukraine, the conflict has developed into a Total War.
    A Total War is one in which all the resources of a country, including its people, are seen as part of the war effort. Civilians become military targets, which inevitably leads to higher casualties. Non-offensive infrastructure is also attacked.
    As new technologies like artificial intelligence, unmanned aerial vehicles (UAVs) such as drones and so-called ‘cyberweapons’ such as malware and Internet-based disinformation campaigns become integral to our daily lives, researchers are working to grasp the role they will play in warfare.
    Jordan Richard Schoenherr, an assistant professor in the Department of Psychology, writes in a new paper that our understanding of warfare is now outdated. Our understanding of the role that sociotechnical systems — meaning the way technology relates to human organizational behaviour in a complex, interdependent system — play in strategic thinking is still far from fully developed. Understanding their potential and their vulnerabilities will be an important task for planners in the years ahead.
    “We need to think about the networks of people and technology — that is what a sociotechnical system is,” Schoenherr explains.

  • Machine learning gives users ‘superhuman’ ability to open and control tools in virtual reality

    Researchers have developed a virtual reality application where a range of 3D modelling tools can be opened and controlled using just the movement of a user’s hand.
    The researchers, from the University of Cambridge, used machine learning to develop ‘HotGestures’ — analogous to the hot keys used in many desktop applications.
    HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.
    The idea of being able to open and control tools in virtual reality has been a movie trope for decades, but the researchers say that this is the first time such a ‘superhuman’ ability has been made possible. The results are reported in the journal IEEE Transactions on Visualization and Computer Graphics.
    Virtual reality (VR) and related applications have been touted as game-changers for years, but outside of gaming, their promise has not fully materialised. “Users gain some qualities when using VR, but very few people want to use it for an extended period of time,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, who led the research. “Beyond the visual fatigue and ergonomic issues, VR isn’t really offering anything you can’t get in the real world.”
    Most users of desktop software will be familiar with the concept of hot keys — command shortcuts such as ctrl-c to copy and ctrl-v to paste. While these shortcuts omit the need to open a menu to find the right tool or command, they rely on the user having the correct command memorised.
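    The gesture-shortcut idea can be illustrated with a toy recogniser: match an incoming hand trajectory against stored gesture templates and open the associated tool, with no menu in between. The templates, tool names, and nearest-template distance metric below are all invented for illustration; the actual HotGestures system uses a trained machine-learning recogniser on richer hand-tracking data.

```python
# Toy sketch of gesture-to-tool matching. Templates and tools are invented.
import numpy as np

TEMPLATES = {
    "scissors": np.array([[0, 0], [1, 1], [0, 2]], dtype=float),
    "spray":    np.array([[0, 0], [0, 1], [0, 2]], dtype=float),
}

def recognise(trajectory, templates=TEMPLATES):
    """Return the tool whose gesture template is closest to the trajectory
    (mean pointwise distance; assumes trajectories are resampled to the
    same length as the templates)."""
    traj = np.asarray(trajectory, dtype=float)
    scores = {name: np.mean(np.linalg.norm(traj - tpl, axis=1))
              for name, tpl in templates.items()}
    return min(scores, key=scores.get)

print(recognise([[0.1, 0.0], [0.9, 1.1], [0.1, 1.9]]))
```

    Unlike a hot key, the mapping here is from a naturally shaped motion to a tool, so nothing has to be memorised as an arbitrary key combination.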
    “We wanted to take the concept of hot keys and turn it into something more meaningful for virtual reality — something that wouldn’t rely on the user having a shortcut in their head already,” said Kristensson, who is also co-Director of the Centre for Human-Inspired Artificial Intelligence.

  • Neuromorphic computing will be great… if hardware can handle the workload

    Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?
    “The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.
    Physicists from Purdue University, the University of California San Diego (UCSD) and the École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believe they may have discovered a way to rework the hardware by mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials; the paper is featured on the back cover of the journal’s October 2023 edition.
    New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”
    Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. Neurons have small gaps at their ends, called synapses, that allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.
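    The two behaviours a neuromorphic material must combine can be illustrated in software with a classic leaky integrate-and-fire model: a synapse-like weight scales each input, and a neuron-like unit accumulates charge, leaks it over time, and fires when a threshold is crossed. This is a standard textbook toy model, not the team’s vanadium oxide device physics; all parameters are arbitrary.

```python
# Toy leaky integrate-and-fire neuron with a single synaptic weight.
# Illustrates spiking ("neuristor") plus a persistent weight ("synaptor").
def simulate_lif(inputs, weight=0.5, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a sequence of input pulses."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + weight * x   # integrate weighted input, leak charge
        if v >= threshold:          # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([1, 1, 1, 0, 1, 1, 1]))  # → [0, 0, 1, 0, 0, 0, 1]
```

    In a hardware implementation, the weight would be stored as a physical material state rather than a number, which is why materials that can hold both neuron-like and synapse-like states are so sought after.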
    “The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon — ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”
    The team relied on a recently discovered type of non-volatile memory, driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

  • Lightening the load: Researchers develop autonomous electrochemistry robot

    Researchers at the Beckman Institute for Advanced Science and Technology developed an automated laboratory robot to run complex electrochemical experiments and analyze data.
    With affordability and accessibility in mind, the researchers collaboratively created a benchtop robot that rapidly performs electrochemistry. Aptly named the Electrolab, this instrument greatly reduces the effort and time needed for electrochemical studies by automating many basic and repetitive laboratory tasks.
    The Electrolab can be used to explore energy storage materials and chemical reactions that promote the use of alternative and renewable power sources like solar or wind energy, which are essential to combating climate change.
    “We hope the Electrolab will allow new discoveries in energy storage while helping us share knowledge and data with other electrochemists — and non-electrochemists! We want them to be able to try things they couldn’t before,” said Joaquín Rodríguez-López, a professor in the Department of Chemistry at the University of Illinois Urbana-Champaign.
    The interdisciplinary team was co-led by Rodríguez-López and Charles Schroeder, the James Economy professor in the Department of Materials Science and Engineering and a professor of chemical and biomolecular engineering at UIUC. Their work appears in the journal Device.
    Electrochemistry is the study of electricity and its relation to chemistry. Chemical reactions release energy that can be converted into electricity — batteries used to power remote controllers or electric vehicles are perfect examples of this phenomenon.
    In the opposite direction, electricity can also be used to drive chemical reactions. Electrochemistry can provide a green and sustainable alternative to many reactions that would otherwise require harsh chemicals, and it can even drive reactions that convert greenhouse gases such as carbon dioxide into chemicals useful in other industries. These are relatively simple demonstrations of electrochemistry, but the growing demand to generate and store massive amounts of energy at much larger scales remains a prominent challenge.
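    The kind of repetitive workflow an instrument like the Electrolab automates can be sketched as a simple loop: for each queued sample, generate a potential sweep (as in cyclic voltammetry) and record the current response at each step. Everything below is a mock; the function names, sweep parameters, and the linear “potentiostat” are invented for illustration, whereas the real instrument drives physical pumps and a potentiostat.

```python
# Hedged sketch of an automated electrochemistry queue with mocked hardware.
def triangle_sweep(v_start, v_peak, n_steps):
    """Triangular potential waveform for one cyclic-voltammetry cycle."""
    step = (v_peak - v_start) / n_steps
    up = [v_start + i * step for i in range(n_steps + 1)]
    return up + up[-2::-1]  # sweep up, then back down

def run_queue(samples, measure):
    """Apply the sweep to each queued sample; return {sample: currents}."""
    results = {}
    for sample in samples:
        waveform = triangle_sweep(0.0, 0.5, 5)
        results[sample] = [measure(sample, v) for v in waveform]
    return results

# Mock 'potentiostat': current simply proportional to applied potential.
data = run_queue(["ferrocene", "quinone"], lambda s, v: 2.0 * v)
```

    Automating this loop is what turns days of manual pipetting and sweeping into unattended benchtop runs.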

  • 450-million-year-old organism finds new life in Softbotics

    Researchers in the Department of Mechanical Engineering at Carnegie Mellon University, in collaboration with paleontologists from Spain and Poland, used fossil evidence to engineer a soft robotic replica of a pleurocystitid, a marine organism that existed nearly 450 million years ago and is believed to be one of the first echinoderms capable of movement using a muscular stem.
    Published today in the Proceedings of the National Academy of Sciences (PNAS), the research seeks to broaden the modern perspective of animal design and movement by introducing a new field of study — Paleobionics — aimed at using Softbotics, robotics with flexible electronics and soft materials, to understand through extinct organisms the biomechanical factors that drove evolution.
    “Softbotics is another approach to inform science using soft materials to construct flexible robot limbs and appendages. Many fundamental principles of biology and nature can only fully be explained if we look back at the evolutionary timeline of how animals evolved. We are building robot analogues to study how locomotion has changed,” said Carmel Majidi, lead author and Professor of Mechanical Engineering at Carnegie Mellon University.
    With humans’ time on earth representing only 0.007% of the planet’s history, the modern-day animal kingdom that influences understanding of evolution and inspires today’s mechanical systems is only a fraction of all creatures that have existed through history.
    Using fossil evidence to guide their design and a combination of 3D printed elements and polymers to mimic the flexible columnar structure of the moving appendage, the team demonstrated that pleurocystitids were likely able to move over the sea bottom by means of a muscular stem that pushed the animal forward. Despite the absence of a current day analogue (echinoderms have since evolved to include modern day starfish and sea urchins), pleurocystitids have been of interest to paleontologists due to their pivotal role in echinoderm evolution.
    The team determined that wide sweeping movements were likely the most effective motion and that increasing the length of the stem significantly increased the animal’s speed without forcing it to exert more energy.
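    The stem-length result has a simple geometric intuition that can be put in a toy model: if a rigid stem of length L sweeps through a fixed angle each stroke, the tip travels an arc proportional to L, so per-stroke displacement (and speed at a fixed stroke rate) grows with stem length. This is a back-of-the-envelope sketch, not the paper’s biomechanical analysis; the efficiency factor and all numbers are invented.

```python
# Toy model: forward speed from arc length swept per stroke. Not the
# study's simulation; parameters are arbitrary placeholders.
import math

def speed(stem_length, sweep_angle_deg, strokes_per_s, efficiency=0.2):
    """Forward speed ~ efficiency * tip arc per stroke * stroke rate."""
    arc = stem_length * math.radians(sweep_angle_deg)
    return efficiency * arc * strokes_per_s

short = speed(0.02, 60, 1.0)   # 2 cm stem
long_ = speed(0.04, 60, 1.0)   # 4 cm stem: twice the per-stroke arc
```

    In this simplification, doubling the stem doubles the speed at the same stroke rate, consistent in spirit with the team’s finding that longer stems were faster.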
    “Researchers in the bio-inspired robotics community need to pick and choose important features worth adopting from organisms,” explained Richard Desatnik, PhD candidate and co-first author.

  • Artificial intelligence may help predict — possibly prevent — sudden cardiac death

    Predicting sudden cardiac death, and perhaps even addressing a person’s risk to prevent future death, may be possible through artificial intelligence (AI) and could offer a new move toward prevention and global health strategies, according to preliminary research to be presented at the American Heart Association’s Resuscitation Science Symposium 2023. The meeting, Nov. 11-12, in Philadelphia is a premier global exchange of the most recent advances related to treating cardiopulmonary arrest and life-threatening traumatic injury.
    “Sudden cardiac death, a public health burden, represents 10% to 20% of overall deaths. Predicting it is difficult, and the usual approaches fail to identify high-risk people, particularly at an individual level,” said Xavier Jouven, M.D., Ph.D., the lead author of the study and professor of cardiology and epidemiology at the Paris Cardiovascular Research Center, Inserm U970-University of Paris. “We proposed a new approach not restricted to the usual cardiovascular risk factors but encompassing all medical information available in electronic health records.”
    The research team used AI to analyze medical information from registries and databases in Paris, France, and in Seattle for 25,000 people who had died of sudden cardiac arrest and 70,000 people from the general population, with the two groups matched by age, sex and residential area. The data, which represented more than 1 million hospital diagnoses and 10 million medication prescriptions, was gathered from medical records up to ten years prior to each death. Using AI to analyze the data, the researchers built nearly 25,000 equations with personalized health factors to identify people at very high risk of sudden cardiac death. Additionally, they developed a customized risk profile for each individual in the study.
    The personalized risk equations included a person’s medical details, such as treatment for high blood pressure and history of heart disease, as well as mental and behavioral disorders including alcohol abuse. The analysis identified those factors most likely to decrease or increase the risk of sudden cardiac death at a particular percentage and time frame, for example, 89% risk of sudden cardiac death within three months.
    The AI analysis identified people with more than a 90% risk of dying suddenly; they represented more than one fourth of all cases of sudden cardiac death.
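    The general shape of such risk modelling can be sketched with a textbook example: fit a logistic model on binary EHR-style features (e.g. “treated hypertension”, “alcohol abuse”) and read off a personalised risk score per person. This is a generic illustration on synthetic data, not the study’s actual model or features.

```python
# Illustrative sketch: logistic risk scoring on synthetic binary features.
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression; returns weights, bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad = p - y                            # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Synthetic cohort: three binary risk factors with known effect sizes.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, (400, 3)).astype(float)
true_w = np.array([2.0, -1.0, 0.5])
y = (rng.random(400) < 1 / (1 + np.exp(-(X @ true_w - 1)))).astype(float)

w, b = fit_logistic(X, y)
risk = 1 / (1 + np.exp(-(X @ w + b)))  # personalised risk score per person
```

    The study’s approach is far richer (tens of thousands of personalized equations over a decade of records), but the core idea of mapping an individual’s feature profile to a calibrated probability is the same.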
    “We have been working for almost 30 years in the field of sudden cardiac death prediction; however, we did not expect to reach such a high level of accuracy. We also discovered that the personalized risk factors are very different between the participants and often come from different medical fields (a mix of neurological, psychiatric, metabolic and cardiovascular data) — a picture difficult to catch for the eyes and brain of a specialist in one given field,” said Jouven, who is also founder of the Paris Sudden Death Expertise Center. “While doctors have efficient treatments such as correction of risk factors, specific medications and implantable defibrillators, the use of AI is necessary to detect in a given subject a succession of medical information registered over the years that will form a trajectory associated with an increased risk of sudden cardiac death. We hope that with a personalized list of risk factors, patients will be able to work with their clinicians to reduce those risk factors and ultimately decrease the potential for sudden cardiac death.”
    Among the study’s limitations is the potential use of the prediction models beyond the scope of this research. In addition, the medical data collected in electronic health records sometimes includes proxies instead of raw data, and the data collected may differ among countries, requiring adaptation of the prediction models.