More stories

    AI-guided brain stimulation aids memory in traumatic brain injury

    Traumatic brain injury (TBI) has left 1 to 2% of the population with a disability, and among the most common of these are problems with short-term memory. Electrical stimulation has emerged as a viable tool for improving brain function in people with other neurological disorders.
    Now, a new study in the journal Brain Stimulation shows that targeted electrical stimulation in patients with traumatic brain injury led to an average 19% boost in recalling words.
    Led by University of Pennsylvania psychology professor Michael Jacob Kahana, a team of neuroscientists studied TBI patients with implanted electrodes, analyzed neural data as patients studied words, and used a machine learning algorithm to predict momentary memory lapses. Other lead authors included Wesleyan University psychology professor Youssef Ezzyat and Penn research scientist Paul Wanda.
    “The last decade has seen tremendous advances in the use of brain stimulation as a therapy for several neurological and psychiatric disorders including epilepsy, Parkinson’s disease, and depression,” Kahana says. “Memory loss, however, represents a huge burden on society. We lack effective therapies for the 27 million Americans suffering.”
    Study co-author Ramon Diaz-Arrastia, director of the Traumatic Brain Injury Clinical Research Center at Penn Medicine, says the technology Kahana and his team developed delivers “the right stimulation at the right time, informed by the wiring of the individual’s brain and that individual’s successful memory retrieval.”
    He says the top causes of TBI are motor vehicle accidents, which are decreasing, and falls, which are rising because of the aging population. The next most common causes are assaults and head injuries from participation in contact sports.
    This new study builds on the previous work of Ezzyat, Kahana, and their collaborators. In findings published in 2017, they showed that stimulation delivered when memory is expected to fail can improve memory, whereas stimulation administered during periods of good functioning worsens it. The stimulation in that study was open-loop, meaning it was applied by a computer without regard to the state of the brain.
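    The closed-loop idea the new study moves toward can be sketched in a few lines: a decoder watches neural features while the patient studies a word and triggers stimulation only when a lapse is predicted. Everything below, the features, weights, and threshold, is an invented illustration, not the team's actual decoder, which is trained per patient on intracranial recordings.

```python
# Toy closed-loop stimulation controller. Weights, features and the
# threshold are invented for illustration only.
WEIGHTS = [0.8, -0.6]   # assumed decoder weights for two neural spectral features
THRESHOLD = 0.0         # stimulate when predicted encoding quality falls below this

def predict_encoding_quality(features):
    # Linear read-out of momentary memory-encoding quality.
    return sum(w * x for w, x in zip(WEIGHTS, features))

def closed_loop_step(features):
    """Return True when stimulation should be delivered at this moment."""
    return predict_encoding_quality(features) < THRESHOLD

print(closed_loop_step([1.0, 0.2]))   # good encoding state -> False (no stimulation)
print(closed_loop_step([0.1, 0.9]))   # predicted lapse -> True (stimulate)
```

    The contrast with the 2017 open-loop protocol is just the `if`: open-loop stimulation would fire on a fixed schedule, ignoring `predict_encoding_quality` entirely.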

    A faster way to teach a robot

    Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug (perhaps because this mug is painted with an unusual image, say, of MIT’s mascot, Tim the Beaver). So, the robot fails.
    “Right now, the way we train these robots, when they fail, we don’t really know why. So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” says Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT.
    Peng and her collaborators at MIT, New York University, and the University of California at Berkeley created a framework that enables humans to quickly teach a robot what they want it to do, with a minimal amount of effort.
    When a robot fails, the system uses an algorithm to generate counterfactual explanations that describe what needed to change for the robot to succeed. For instance, maybe the robot would have been able to pick up the mug if the mug were a certain color. It shows these counterfactuals to the human and asks for feedback on why the robot failed. Then the system utilizes this feedback and the counterfactual explanations to generate new data it uses to fine-tune the robot.
    Fine-tuning involves tweaking a machine-learning model that has already been trained to perform one task, so it can perform a second, similar task.
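    The loop described above — fail, explain, collect feedback, fine-tune — can be sketched in miniature. The recognizer, the object attributes, and the counterfactual search below are toy assumptions for illustration, not the authors' actual models.

```python
# Toy sketch of the counterfactual-feedback loop: when recognition fails,
# find minimal attribute changes that would have made it succeed, let the
# human mark those attributes as irrelevant, and synthesize new training data.

def recognizes(obj):
    """A 'factory-trained' recognizer that only knows plain white mugs."""
    return obj["shape"] == "mug" and obj["color"] == "white" and obj["art"] == "none"

def counterfactuals(failed_obj):
    """Minimal attribute changes under which recognition would have succeeded."""
    found = []
    for attr, default in [("color", "white"), ("art", "none")]:
        candidate = dict(failed_obj, **{attr: default})
        if recognizes(candidate):
            found.append((attr, failed_obj[attr]))
    return found

# The user's mug fails because it is painted with an unusual image.
user_mug = {"shape": "mug", "color": "white", "art": "mascot"}
assert not recognizes(user_mug)

# The human confirms the flagged attributes are irrelevant to "being a mug".
irrelevant = {attr for attr, _ in counterfactuals(user_mug)}

# Generate fine-tuning data by varying only the irrelevant attributes.
new_data = [dict(user_mug, **{a: v}) for a in irrelevant for v in ("mascot", "logo", "stripes")]
print(len(new_data), "augmented examples")   # prints: 3 augmented examples
```

    The augmented examples would then be fed to the fine-tuning step described above, so the robot learns that artwork on a mug does not change what the object is.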
    The researchers tested this technique in simulations and found that it could teach a robot more efficiently than other methods. The robots trained with this framework performed better, while the training process consumed less of a human’s time.
    This framework could help robots learn faster in new environments without requiring a user to have technical knowledge. In the long run, this could be a step toward enabling general-purpose robots to efficiently perform daily tasks for the elderly or individuals with disabilities in a variety of settings.

    Efficient discovery of improved energy materials by a new AI-guided workflow

    Scientists at the NOMAD Laboratory at the Fritz Haber Institute of the Max Planck Society recently proposed a workflow that can dramatically accelerate the search for novel materials with improved properties. They demonstrated the power of the approach by identifying more than 50 strongly thermally insulating materials. These could help alleviate the ongoing energy crisis by enabling more efficient thermoelectric elements, i.e., devices that convert otherwise wasted heat into useful electrical voltage.
    Discovering new and reliable thermoelectric materials is paramount for making use of the more than 40% of energy given off globally as waste heat, and for helping to mitigate the growing challenges of climate change. One way to increase the thermoelectric efficiency of a material is to reduce its thermal conductivity, κ, thereby maintaining the temperature gradient needed to generate electricity. However, the cost of studying these properties has limited computational and experimental investigations of κ to only a minute subset of all possible materials. The NOMAD team recently set out to reduce these costs by creating an AI-guided workflow that hierarchically screens out materials to efficiently find new and better thermal insulators.
    The work recently published in npj Computational Materials proposes a new way of using Artificial Intelligence (AI) to guide the high-throughput search for new materials. Instead of using physical/chemical intuition to screen out materials based on general, known or suspected trends, the new procedure learns the conditions that lead to the desired outcome with advanced AI methods. This work has the potential to quantify the search for new energy materials and increase the efficiency of these searches.
    The first step in designing these workflows is to use advanced statistical and AI methods to approximate the target property of interest, κ in this case. To this end, the sure-independence screening and sparsifying operator (SISSO) approach is used. SISSO is a machine learning method that reveals the fundamental dependencies between different materials properties from a set of billions of possible expressions. Compared to other “black-box” AI models, this approach is similarly accurate, but additionally yields analytic relationships between different material properties. This allows us to apply modern feature importance metrics to shed light on which material properties are the most important. In the case of κ, these are the molar volume, Vm; the high-temperature limit of the Debye temperature, θD,∞; and the anharmonicity metric, σA.
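    As a rough illustration of the idea behind such descriptor searches (not the actual SISSO algorithm, which searches billions of expressions with sparsifying regression), one can enumerate a few candidate analytic combinations of the primary features and keep the one most correlated with κ. All data points and candidate formulas below are invented.

```python
# Miniature descriptor search: rank candidate analytic expressions of the
# primary features by their (absolute) Pearson correlation with kappa.
# Hypothetical materials: (molar volume Vm, Debye temperature, anharmonicity) -> kappa.
import math

samples = [
    (10.0, 600.0, 0.1, 30.0),
    (15.0, 450.0, 0.2, 12.0),
    (25.0, 300.0, 0.4, 4.0),
    (40.0, 200.0, 0.6, 1.5),
]
candidates = {
    "Vm": lambda v, t, s: v,
    "theta": lambda v, t, s: t,
    "sigma": lambda v, t, s: s,
    "theta/Vm": lambda v, t, s: t / v,
    "1/(Vm*sigma)": lambda v, t, s: 1.0 / (v * s),
}

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

kappa = [row[3] for row in samples]
ranked = sorted(
    ((abs(pearson([f(*row[:3]) for row in samples], kappa)), name)
     for name, f in candidates.items()),
    reverse=True,
)
best_score, best_name = ranked[0]
print("best descriptor:", best_name)
```

    The payoff SISSO adds over a black box is visible even here: the winner is a readable formula in the primary features, not an opaque set of weights.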
    Furthermore, the described statistical analysis makes it possible to distill rules of thumb for the individual features that enable an a priori estimate of a material's potential to be a thermal insulator. Working with the three most important primary features thus allowed the team to create AI-guided computational workflows for discovering new thermal insulators. These workflows use state-of-the-art electronic structure programs to calculate each of the selected features. At each step, materials that are unlikely to be good insulators based on their values of Vm, θD,∞, and σA were screened out. With this, it is possible to reduce the number of calculations needed to find thermally insulating materials by over two orders of magnitude. In this work, this is demonstrated by identifying 96 thermal insulators (κ < 10 W m⁻¹ K⁻¹) in an initial set of 732 materials. The reliability of this approach was further verified by calculating κ for four of these predictions with the highest possible accuracy. Besides facilitating the active search for new thermoelectric materials, the formalisms proposed by the NOMAD team can also be applied to other urgent materials science problems.
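    The funnel itself can be sketched as a sequence of filters, each discarding candidates whose feature value makes a low κ unlikely. The thresholds and randomly generated candidates below are purely illustrative; the real workflow computes Vm, θD,∞, and σA with electronic-structure codes.

```python
# Sketch of a hierarchical screening funnel over an initial pool of 732
# hypothetical materials (matching the study's pool size; all feature
# values and cutoffs here are invented).
import random

random.seed(0)
pool = [
    {"id": i,
     "Vm": random.uniform(5, 60),        # molar volume
     "theta": random.uniform(100, 800),  # Debye temperature
     "sigma": random.uniform(0.0, 1.0)}  # anharmonicity
    for i in range(732)
]

# Each stage keeps only materials whose feature suggests low kappa.
stages = [
    ("large molar volume", lambda m: m["Vm"] > 25),
    ("low Debye temperature", lambda m: m["theta"] < 350),
    ("high anharmonicity", lambda m: m["sigma"] > 0.5),
]

survivors = pool
for name, keep in stages:
    survivors = [m for m in survivors if keep(m)]
    print(f"after '{name}' filter: {len(survivors)} candidates remain")
```

    Because each stage shrinks the pool before the next (more expensive) feature is computed, only a small fraction of the original pool ever needs a full κ calculation — the source of the two-orders-of-magnitude saving described above.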

    Bot inspired by baby turtles can swim under the sand

    This robot can swim under the sand and dig itself out too, thanks to two front limbs that mimic the oversized flippers of turtle hatchlings.
    It’s the only robot able to travel through sand at a depth of 5 inches. It moves at a speed of 1.2 millimeters per second, roughly 4 meters (13 feet) per hour. This may seem slow, but it is comparable to the pace of other subterranean animals such as worms and clams. The robot is equipped with force sensors at the end of its limbs that allow it to detect obstacles while in motion. It can operate untethered and be controlled via WiFi.
    Robots that move through sand face significant challenges, such as dealing with far higher forces than robots that move through air or water. They also get damaged more easily. However, the potential benefits of solving locomotion in sand include inspection of grain silos, measurement of soil contaminants, seafloor digging, extraterrestrial exploration, and search and rescue.
    The robot is the result of several experiments conducted by a team of roboticists at the University of California San Diego to better understand sand and how robots could travel through it. Sand is particularly challenging because of the friction between sand grains that leads to large forces; difficulty sensing obstacles; and the fact that it switches between behaving like a liquid and a solid depending on the context.
    The team believed that observing animals would be key to developing a bot that can swim in sand and dig itself out of sand as well. After considering worms, they landed on sea turtle hatchlings, which have enlarged front fins that allow them to surface after hatching. Turtle-like flippers can generate large propulsive forces; allow the robot to steer; and have the potential to detect obstacles.
    Scientists still do not fully understand how robots with flipper-like appendages move within sand. The research team at UC San Diego conducted extensive simulations and testing, and finally landed on a tapered body design and a shovel-shaped nose.
    “We needed to build a robot that is both strong and streamlined,” said Shivam Chopra, lead author of the paper describing the robot in the journal Advanced Intelligent Systems and a Ph.D. student in the research group of professor Nick Gravish at the Jacobs School of Engineering at UC San Diego.

    Researchers develop AI model to better predict which drugs may cause birth defects

    Data scientists at the Icahn School of Medicine at Mount Sinai in New York and colleagues have created an artificial intelligence model that may more accurately predict which existing medicines, not currently classified as harmful, may in fact lead to congenital disabilities.
    The model, or “knowledge graph,” described in the July 17 issue of the Nature journal Communications Medicine, also has the potential to flag pre-clinical compounds that may harm the developing fetus. The study is believed to be the first of its kind to use knowledge graphs to integrate various data types to investigate the causes of congenital disabilities.
    Birth defects are abnormalities that affect about 1 in 33 births in the United States. They can be functional or structural and are believed to result from various factors, including genetics. However, the causes of most of these disabilities remain unknown. Certain substances found in medicines, cosmetics, food, and environmental pollutants can potentially lead to birth defects if the fetus is exposed to them during pregnancy.
    “We wanted to improve our understanding of reproductive health and fetal development, and importantly, warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed,” says Avi Ma’ayan, PhD, Professor, Pharmacological Sciences, and Director of the Mount Sinai Center for Bioinformatics at Icahn Mount Sinai, and senior author of the paper. “Although identifying the underlying causes is a complicated task, we offer hope that through complex data analysis like this that integrates evidence from multiple sources, we will be able, in some cases, to better predict, regulate, and protect against the significant harm that congenital disabilities could cause.”
    The researchers gathered knowledge across several datasets on birth-defect associations noted in published work, including those produced by NIH Common Fund programs, to demonstrate how integrating data from these resources can lead to synergistic discoveries. Particularly, the combined data is from the known genetics of reproductive health, classification of medicines based on their risk during pregnancy, and how drugs and pre-clinical compounds affect the biological mechanisms inside human cells.
    Specifically, the data included studies on genetic associations, drug- and preclinical-compound-induced gene expression changes in cell lines, known drug targets, genetic burden scores for human genes, and placental crossing scores for small molecule drugs.
    Importantly, using the knowledge graph, named ReproTox-KG, together with semi-supervised learning (SSL), the research team prioritized 30,000 preclinical small-molecule drugs for their potential to cross the placenta and induce birth defects. SSL is a branch of machine learning that uses a small amount of labeled data to guide predictions for much larger unlabeled data. In addition, by analyzing the topology of the ReproTox-KG, more than 500 birth-defect/gene/drug cliques were identified that could explain the molecular mechanisms underlying drug-induced birth defects. In graph theory terms, cliques are subsets of a graph in which every node is directly connected to every other node.
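    In miniature, the clique analysis looks like this: build a small graph of defect–gene–drug associations and search for node sets that are fully interconnected. The edges below are invented toy data, not drawn from ReproTox-KG.

```python
# Toy clique (triangle) search over a tiny birth-defect/gene/drug graph.
# A triangle means the defect, gene and drug are all mutually associated,
# suggesting a candidate mechanism worth validating experimentally.
from itertools import combinations

edges = {
    frozenset(e) for e in [
        ("defect:neural_tube", "gene:G1"),
        ("gene:G1", "drug:D1"),
        ("defect:neural_tube", "drug:D1"),   # closes a triangle
        ("defect:cleft_palate", "gene:G2"),
        ("gene:G2", "drug:D2"),              # no defect-drug edge: no clique
    ]
}
nodes = sorted({n for e in edges for n in e})

triangles = [
    trio for trio in combinations(nodes, 3)
    if all(frozenset(pair) in edges for pair in combinations(trio, 2))
]
print(triangles)
```

    On ReproTox-KG this same all-pairs-connected test, run over a far larger graph, is what surfaced the 500-plus defect/gene/drug cliques mentioned above.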
    The investigators caution that the study’s findings are preliminary and that further experiments are needed for validation.
    Next, the investigators plan to use a similar graph-based approach for other projects focusing on the relationship between genes, drugs, and diseases. They also aim to use the processed dataset as training materials for courses and workshops on bioinformatics analysis. In addition, they plan to extend the study to consider more complex data, such as gene expression from specific tissues and cell types collected at multiple stages of development.
    “We hope that our collaborative work will lead to a new global framework to assess potential toxicity for new drugs and explain the biological mechanisms by which some drugs, known to cause birth defects, may operate. It’s possible that at some point in the future, regulatory agencies such as the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency may use this approach to evaluate the risk of new drugs or other chemical applications,” says Dr. Ma’ayan.

    Robotics: New skin-like sensors fit almost everywhere

    Researchers from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM) have developed an automatic process for making soft sensors. These universal measurement cells can be attached to almost any kind of object. Applications are envisioned especially in robotics and prosthetics.
    “Detecting and sensing our environment is essential for understanding how to interact with it effectively,” says Sonja Groß. An important factor for interactions with objects is their shape. “This determines how we can perform certain tasks,” says the researcher from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM. In addition, physical properties of objects, such as their hardness and flexibility, influence how we can grasp and manipulate them, for example.
    Artificial hand: interaction with the robotic system
    The holy grail in robotics and prosthetics is a realistic emulation of the sensorimotor skills of a person, such as those of a human hand. In robotics, force and torque sensors are fully integrated into most devices. These sensors provide valuable feedback on the interactions of the robotic system, such as an artificial hand, with its surroundings. However, traditional sensors have been limited in terms of customization possibilities, and they cannot be attached to arbitrary objects. In short: until now, no process existed for producing sensors for rigid objects of arbitrary shapes and sizes.
    New framework for soft sensors presented for the first time
    This was the starting point for the research of Sonja Groß and Diego Hidalgo, which they have now presented at the ICRA robotics conference in London. The difference: a soft, skin-like material that wraps around objects. The research group has also developed a framework that largely automates the production process for this skin. It works as follows: “We use software to build the structure for the sensory systems,” says Hidalgo. “We then send this information to a 3D printer where our soft sensors are made.” The printer injects a conductive black paste into liquid silicone. The silicone hardens, but the paste is enclosed by it and remains liquid. When the sensors are squeezed or stretched, their electrical resistance changes. “That tells us how much compression or stretching force is applied to a surface. We use this principle to gain a general understanding of interactions with objects and, specifically, to learn how to control an artificial hand interacting with these objects,” explains Hidalgo. What sets their work apart: the sensors embedded in silicone adjust to the surface in question (such as fingers or hands) but still provide precise data that can be used for the interaction with the environment.
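    The sensing principle lends itself to a compact sketch: a squeeze or stretch changes the resistance of the conductive trace, and a calibration maps that change back to strain. The linear gauge-factor model and all numbers below are assumptions for illustration, not the TUM group's calibration.

```python
# Minimal piezoresistive read-out sketch. R0 and the gauge factor are
# assumed values; real sensors would be calibrated individually.
R0 = 1000.0          # unstrained resistance of the conductive trace, ohms (assumed)
GAUGE_FACTOR = 2.5   # relative resistance change per unit strain (assumed)

def strain_from_resistance(r_measured, r0=R0, gf=GAUGE_FACTOR):
    """Invert dR/R0 = GF * strain for a piezoresistive element."""
    return (r_measured - r0) / (r0 * gf)

# A 5% rise in resistance corresponds to 2% strain with GF = 2.5.
print(round(strain_from_resistance(1050.0), 4))   # prints 0.02
```

    A real controller would run this inversion per sensor cell at a fixed sampling rate, giving the artificial hand a continuous map of where, and how hard, it is being pressed.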
    New perspectives for robotics and especially prosthetics
    “The integration of these soft, skin-like sensors in 3D objects opens up new paths for advanced haptic sensing in artificial intelligence,” says MIRMI Executive Director Prof. Sami Haddadin. The sensors provide valuable data on compressive forces and deformations in real time — thus providing immediate feedback. This expands the range of perception of an object or a robotic hand — facilitating a more sophisticated and sensitive interaction. Haddadin: “This work has the potential to bring about a general revolution in industries such as robotics, prosthetics and the human/machine interaction by making it possible to create wireless and customizable sensor technology for arbitrary objects and machines.”
    Video showing the entire process: https://youtu.be/i43wgx9bT-E

    AI to predict your health later in life — all at the press of a button

    Thanks to artificial intelligence, we will soon be able to predict our risk of developing serious health conditions later in life, at the press of a button.
    Abdominal aortic calcification, or AAC, is a buildup of calcium within the walls of the abdominal aorta that predicts your risk of cardiovascular disease events such as heart attacks and strokes.
    It also predicts your risk of falls, fractures and late-life dementia.
    Conveniently, common bone density scans used to detect osteoporosis can also detect AAC.
    However, highly trained expert readers are needed to analyse the images, a process which can take 5-15 minutes per image.
    But researchers from Edith Cowan University’s (ECU) School of Science and School of Medical and Health Sciences have collaborated to develop software which can analyse scans much, much faster: roughly 60,000 images in a single day.
    Researcher and Heart Foundation Future Leader Fellow Associate Professor Joshua Lewis said this significant boost in efficiency will be crucial for the widespread use of AAC in research and helping people avoid developing health problems later in life.

    ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, new study reveals

    ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting the potential for chatbots to be effective allies in healthcare providers’ communications with patients.
    An NYU research team presented 392 people aged 18 and above with ten patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by ChatGPT.
    Participants were asked to identify the source of each response and rate their trust in the ChatGPT responses using a 5-point scale from completely untrustworthy to completely trustworthy.
    The study found people have limited ability to distinguish between chatbot and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% for different questions. Results remained consistent no matter the demographic categories of the respondents.
    The study found participants mildly trust chatbots’ responses overall (3.4 average score), with lower trust when the health-related complexity of the task in question was higher. Logistical questions (e.g. scheduling appointments, insurance questions) had the highest trust rating (3.94 average score), followed by preventative care (e.g. vaccines, cancer screenings, 3.52 average score). Diagnostic and treatment advice had the lowest trust ratings (scores 2.90 and 2.89, respectively).
    According to the researchers, the study highlights the possibility that chatbots can assist in patient-provider communication, particularly for administrative tasks and common chronic disease management. Further research is needed, however, into chatbots taking on more clinical roles. Providers should remain cautious and exercise critical judgment when curating chatbot-generated advice due to the limitations and potential biases of AI models.