More stories

  • New AI technology gives robot recognition skills a big lift

    A robot moves a toy package of butter around a table in the Intelligent Robotics and Vision Lab at The University of Texas at Dallas. With every push, the robot is learning to recognize the object through a new system developed by a team of UT Dallas computer scientists.
    The new system allows the robot to push objects multiple times until a sequence of images is collected, which in turn enables the system to segment all the objects in the sequence until the robot recognizes them. Previous approaches have relied on a single push or grasp by the robot to “learn” an object.
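    In outline, the approach is an iterative perception loop: push, observe, re-segment, and stop once recognition succeeds. The sketch below illustrates only that loop; push_object, capture_image, segment_sequence and recognized are hypothetical stand-ins, not the UT Dallas team's actual perception stack.
        # Illustrative multi-push recognition loop; every helper below is a
        # hypothetical stand-in, not the UT Dallas implementation.
        def push_object(state):                  # nudge an object on the table
            state["pushes"] += 1

        def capture_image(state):                # grab a camera frame after the push
            return f"frame_{state['pushes']}"

        def segment_sequence(frames):            # segment objects across all frames
            return [{"id": i, "source": f} for i, f in enumerate(frames)]

        def recognized(segments):                # stand-in for a real confidence test
            return len(segments) >= 3

        state, frames = {"pushes": 0}, []
        while True:
            push_object(state)                   # each push changes the scene slightly
            frames.append(capture_image(state))
            segments = segment_sequence(frames)  # segmentation improves as frames accrue
            if recognized(segments):             # stop once the objects are recognized
                break
        print(f"recognized after {state['pushes']} pushes")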
    The team presented its research paper at the Robotics: Science and Systems conference July 10-14 in Daegu, South Korea. Papers for the conference are selected for their novelty, technical quality, significance, potential impact and clarity.
    The day when robots can cook dinner, clear the kitchen table and empty the dishwasher is still a long way off. But the research group has made a significant advance with its robotic system that uses artificial intelligence to help robots better identify and remember objects, said Dr. Yu Xiang, senior author of the paper.
    “If you ask a robot to pick up the mug or bring you a bottle of water, the robot needs to recognize those objects,” said Xiang, assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.
    The UTD researchers’ technology is designed to help robots detect a wide variety of objects found in environments such as homes and to generalize, or identify, similar versions of common items such as water bottles that come in varied brands, shapes or sizes.
    Inside Xiang’s lab is a storage bin full of toy packages of common foods, such as spaghetti, ketchup and carrots, which are used to train the lab robot, named Ramp. Ramp is a Fetch Robotics mobile manipulator robot that stands about 4 feet tall on a round mobile platform. Ramp has a long mechanical arm with seven joints. At the end is a square “hand” with two fingers to grasp objects. More

  • AI helps ID cancer risk factors

    A novel study from the University of South Australia has identified a range of metabolic biomarkers that could help predict the risk of cancer.
    Deploying machine learning to examine data from 459,169 participants in the UK Biobank, the study identified 84 features that could signal increased cancer risk.
    Several markers also signalled chronic kidney or liver disease, highlighting the significance of exploring the underlying pathogenic mechanisms of these diseases for their potential connections with cancer.
    The study, “Hypothesis-free discovery of novel cancer predictors using machine learning,” was conducted by UniSA researchers Dr Iqbal Madakkatel, Dr Amanda Lumsden, Dr Anwar Mulugeta, and Professor Elina Hyppönen, with the University of Adelaide’s Professor Ian Olver.
    “We conducted a hypothesis-free analysis using artificial intelligence and statistical approaches to identify cancer risk factors among more than 2800 features,” Dr Madakkatel says.
    “More than 40% of the features identified by the model were found to be biomarkers — biological molecules that can signal health or unhealthy conditions depending on their status — and several of these were jointly linked to cancer risk and kidney or liver disease.”
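    For readers curious what a hypothesis-free screen can look like in practice, the sketch below ranks synthetic features by their importance in a gradient-boosted model and keeps the top scorers for follow-up. It is a toy illustration on random data, not the UniSA pipeline, whose roughly 2,800 UK Biobank features and follow-up statistics are described in the paper.
        # Toy sketch of hypothesis-free feature screening on synthetic data;
        # the real study used ~2,800 UK Biobank features and richer methods.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 200))        # 1,000 people x 200 candidate features
        risk = X[:, 0] + 0.5 * X[:, 1]          # only features 0 and 1 truly matter
        y = (risk + rng.normal(size=1000) > 0).astype(int)   # binary outcome label

        model = GradientBoostingClassifier(random_state=0).fit(X, y)

        # Rank features by model importance and keep the strongest for follow-up.
        top = np.argsort(model.feature_importances_)[::-1][:5]
        for i in top:
            print(f"feature {i}: importance {model.feature_importances_[i]:.3f}")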
    Dr Amanda Lumsden says this study provides important information on mechanisms which may contribute to cancer risk. More

  • Is digital media use a risk factor for psychosis in young adults?

    On average, young adults in Canada spend several hours on their smartphones every day. Many jump from TikTok to Netflix to Instagram, putting their phone down only to pick up a video game controller. A growing body of research is looking into the potential dangers of digital media overuse, as well as potential benefits of moderate digital media use, from a mental health standpoint.
    A recent McGill University study of 425 Quebecers between the ages of 18 and 25 has found that young adults who have more frequent psychotic experiences also tend to spend more time using digital media. Interestingly, the study, which surveyed participants over a period of six months, also found that spending more time on digital media did not seem to cause any change in the frequency of psychotic experiences over time, said lead author Vincent Paquin, a psychiatry resident at McGill.
    By “psychotic experiences,” the researchers mean a range of unusual thoughts and perceptions, such as a belief of being in danger or the experience of hearing and seeing things that other people cannot. These experiences are relatively common, affecting about 5% of young adults.
    “Our findings are reassuring because they do not show evidence that digital media can cause or exacerbate psychotic experiences in young people,” said Paquin. “It is important to keep in mind that each person is different. In some situations, digital media may be highly beneficial for a person’s well-being, and in other cases, these technologies may cause unintended harms.”
    Accessing mental health services through digital media
    The researchers hope their findings will help improve mental health services for young people. By better understanding the types of digital content and activities that matter to young people, mental health services can be made more accessible and better aligned with individual needs, they say.
    “It is important for young people, their families, and for clinicians and policymakers to have scientific evidence on the risks and benefits of digital media for mental health,” Paquin said. “Considering that young adults with more psychotic experiences may prefer digital technologies, we can use digital platforms to increase their access to accurate mental health information and to appropriate services.”
    About the study
    “Associations between digital media use and psychotic experiences in young adults of Quebec, Canada: a longitudinal study” by Vincent Paquin et al. was published in Social Psychiatry and Psychiatric Epidemiology. More

  • Breathe! The shape-shifting ball that supports mental health

    A soft ball that ‘personifies’ breath, expanding and contracting in sync with a person’s inhalations and exhalations, has been invented by a PhD student at the University of Bath in the UK. The ball is designed to support mental health, giving users a tangible representation of their breath to keep them focused and to help them regulate their emotions.
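    The release does not detail the internals, but a device like this amounts to a simple biofeedback loop: sample the user's respiration, then drive an actuator so the ball's volume tracks the breath. A minimal sketch under that assumption, with read_respiration and set_ball_volume as hypothetical stand-ins for the real sensor and pump:
        # Illustrative biofeedback loop; read_respiration() and set_ball_volume()
        # are hypothetical stand-ins for the device's real sensor and actuator.
        import math
        import time

        def read_respiration(t):
            """Hypothetical sensor: breath level in [0, 1] (0 exhaled, 1 inhaled)."""
            return 0.5 + 0.5 * math.sin(2 * math.pi * t / 5.0)   # ~12 breaths/min

        def set_ball_volume(level):
            """Hypothetical actuator: inflate or deflate the ball to this level."""
            print(f"ball volume: {level:.2f}")

        start = time.time()
        for _ in range(10):                      # short demo loop
            breath = read_respiration(time.time() - start)
            set_ball_volume(breath)              # ball expands and contracts with breath
            time.sleep(0.1)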
    Alexz Farrall, the student in the Department of Computer Science who invented the device, said: “By giving breath physical form, the ball enhances self-awareness and engagement, fostering positive mental health outcomes.”
    Breathing is usually an ignored activity, yet when done deeply and with focus it is known to alleviate anxiety and foster wellbeing. Measured breathing is highly rated by mental health practitioners, both for its ability to defuse emotionally charged situations and for increasing a person’s receptivity to more demanding mental-health interventions.
    Disciplines that frequently include mindful breathing include Cognitive Behavioural Therapy (CBT), Mindfulness-Based Stress Reduction (MBSR), Dialectical Behaviour Therapy (DBT) and trauma-focused therapies.
    Most people, however, struggle to sustain attention on their breathing. Once disengaged from the process, they are likely to return to thinking mode and be less receptive to mental-health interventions that require concentration.
    “I hope this device will be part of the solution for many people with problems relating to their mental wellbeing,” said Mr Farrall.
    Focus lowers anxiety
    Recent research led by Mr Farrall shows a significant improvement in people’s ability to focus on their breathing when they use his shape-shifting ball. With their attention heightened, study participants were then able to pay closer attention to a guided audio recording from a meditation app. More

  • Analog and digital: The best of both worlds in one energy-efficient system

    We live in an analog world of continuous information flow, which our brains process and store at the same time, whereas our devices process information digitally in the form of discrete binary code, breaking information into little bits (and bytes). Researchers at EPFL have revealed a pioneering technology that combines the potential of continuous analog processing with the precision of digital devices. By seamlessly integrating ultra-thin, two-dimensional semiconductors with ferroelectric materials, the research, published in Nature Electronics, unveils a novel way to improve energy efficiency and add new functionalities in computing. The new configuration merges traditional digital logic with brain-like analog operations.
    Faster and more efficient electronics
    The innovation from the Nanoelectronics Device Laboratory (Nanolab), in collaboration with Microsystems Laboratory, revolves around a unique combination of materials leading to brain-inspired functions and advanced electronic switches, including the standout negative capacitance Tunnel Field-Effect Transistor (TFET). In the world of electronics, a transistor or “switch” can be likened to a light switch, determining whether current flows (on) or doesn’t (off). These are the famous 1s and 0s of binary computer language, and this simple action of turning on and off is integral to nearly every function of our electronic devices, from processing information to storing memory. The TFET is a special type of switch designed with an energy-conscious future in mind. Unlike conventional transistors that require a certain minimum voltage to turn on, TFETs can operate at significantly lower voltages. This optimized design means they consume considerably less energy when switching, thus significantly reducing the overall power consumption of devices they are integrated into.
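    The payoff of lower-voltage switching is easy to quantify: the dynamic energy drawn per switching event scales roughly as E = C·V², so reducing the voltage pays off quadratically. A back-of-the-envelope comparison with assumed illustrative values, not figures from the paper:
        # Dynamic switching energy scales as E = C * V**2, so halving the supply
        # voltage cuts per-switch energy by 4x. Illustrative values only.
        C = 1e-15                      # assumed load capacitance: 1 femtofarad
        for V in (0.7, 0.35):          # conventional vs. low-voltage operation
            E = C * V**2
            print(f"V = {V:.2f} V -> E = {E:.2e} J per switching event")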
    According to Professor Adrian Ionescu, head of Nanolab, “Our endeavors represent a significant leap forward in the domain of electronics, having shattered previous performance benchmarks; this is exemplified by the outstanding capabilities of the negative-capacitance tungsten diselenide/tin diselenide TFET and the possibility of creating synaptic neuron functions within the same technology.”
    Sadegh Kamaei, a PhD candidate at EPFL, has harnessed the potential of 2D semiconductors and ferroelectric materials within a fully co-integrated electronic system for the first time. The 2D semiconductors can be used for ultra-efficient digital processors, whereas the ferroelectric material makes it possible to process and store information continuously at the same time. Combining the two materials creates the opportunity to harness the best of the digital and analog capacities of each. Now the light switch from the analogy above is not only more energy-efficient, but the light it turns on can burn even brighter. Kamaei added, “Working with 2D semiconductors and integrating them with ferroelectric materials has been challenging yet immensely rewarding. The potential applications of our findings could redefine how we view and interact with electronic devices in the future.”
    Blending traditional logic with neuromorphic circuits
    Furthermore, the research delves into creating switches similar to biological synapses — the intricate connectors between brain cells — for neuromorphic computing. “The research marks the first-ever co-integration of von Neumann logic circuits and neuromorphic functionalities, charting an exciting course toward the creation of innovative computing architectures characterized by exceptionally low power consumption and hitherto unexplored capabilities of building neuromorphic functions combined with digital information processing,” adds Ionescu.
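    To make “switches similar to biological synapses” concrete, the toy model below mimics the generic behavior of a pulse-programmed synaptic weight, nudged up or down by each pulse and saturating at its bounds. It is a textbook-style behavioral sketch, not the EPFL device physics:
        # Toy behavioral model of a pulse-programmed synaptic weight (generic,
        # not the EPFL device): each pulse nudges the weight, with saturation.
        class ToySynapse:
            def __init__(self):
                self.w = 0.5                        # normalized weight in [0, 1]

            def pulse(self, potentiate=True, step=0.1):
                target = 1.0 if potentiate else 0.0
                self.w += step * (target - self.w)  # smaller steps near the rails

        s = ToySynapse()
        for _ in range(5):
            s.pulse(potentiate=True)                # potentiation strengthens it
        print(f"after potentiation: w = {s.w:.3f}")
        for _ in range(5):
            s.pulse(potentiate=False)               # depression weakens it
        print(f"after depression:  w = {s.w:.3f}")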
    Such advances hint at electronic devices that operate in ways parallel to the human brain, marrying computational speed with processing information in a way that is more in line with human cognition. For instance, neuromorphic systems might excel at tasks that traditional computers struggle with, such as pattern recognition, sensory data processing, or even certain types of learning. This blend of traditional logic with neuromorphic circuits indicates a transformative change with far-reaching implications. The future may well see devices that are not just smarter and faster but exponentially more energy-efficient. More

  • AI enabled soft robotic implant monitors scar tissue to self-adapt for personalized drug treatment

    Research teams at University of Galway and Massachusetts Institute of Technology (MIT) have detailed a breakthrough in medical device technology that could lead to intelligent, long-lasting, tailored treatment for patients thanks to soft robotics and artificial intelligence.
    The transatlantic partnership has created a smart implantable device that can administer a drug — while also sensing when it is beginning to be rejected — and use AI to change the shape of the device to maintain drug dosage and simultaneously bypass scar tissue build up.
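    In control terms, that behavior is a sense-decide-act loop: monitor a signal correlated with scar-tissue build-up, and when occlusion is detected, actuate the soft robot to restore delivery. The sketch below is purely schematic; the sensor, classifier and actuation steps are hypothetical placeholders, not the Galway-MIT implementation:
        # Schematic sense-decide-act loop for an adaptive implant; every function
        # here is a hypothetical placeholder, not the Galway-MIT implementation.
        def read_fibrosis_signal(t):      # hypothetical sensor reading in [0, 1]
            return min(1.0, 0.1 * t)      # scar tissue accumulates over time

        def occluded(signal):             # stand-in for the device's trained model
            return signal > 0.5

        shape_changes = 0
        for t in range(10):
            signal = read_fibrosis_signal(t)
            if occluded(signal):          # delivery is being blocked by scar tissue
                shape_changes += 1        # actuate the soft robot to bypass it
                signal = 0.1              # assume actuation restores flow for now
            dose = 1.0 - signal           # delivered fraction of the intended dose
            print(f"t={t}: shape changes {shape_changes}, dose fraction {dose:.2f}")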
    The study was published in the journal Science Robotics.
    Implantable medical device technologies offer promise to unlock advanced therapeutic interventions in healthcare, such as insulin release to treat diabetes, but a major issue holding back such devices is the patient’s reaction to a foreign body.
    Dr Rachel Beatty, University of Galway, and co-lead author on the study, explained: “The technology which we have developed, by using soft robotics, advances the potential of implantable devices to be in a patient’s body for extended periods, providing long-lasting therapeutic action. Imagine a therapeutic implant that can also sense its environment and respond as needed using AI — this approach could generate revolutionary changes in implantable drug delivery for a range of chronic diseases.”
    The University of Galway-MIT research team originally developed first-generation flexible devices, known as soft robotic implants, to improve drug delivery and reduce fibrosis. Despite that success, the team regarded the technology as one-size-fits-all: it did not account for how individual patients react and respond differently, or for the progressive nature of fibrosis, in which scar tissue builds up around the device and encapsulates it, blocking its function and eventually forcing it to fail.
    The latest research, published today in Science Robotics, demonstrates how they have significantly advanced the technology — using AI — making it responsive to the implant environment with the potential to be longer lasting by defending against the body’s natural urge to reject a foreign body. More

  • A simpler way to connect quantum computers

    Researchers have a new way to connect quantum devices over long distances, a necessary step toward allowing the technology to play a role in future communications systems.
    While today’s classical data signals can be amplified across a city or an ocean, quantum signals cannot; because quantum states cannot be copied, they must instead be relayed at intervals by specialized machines called quantum repeaters. Many experts believe these quantum repeaters will play a key role in future communication networks, allowing enhanced security and enabling connections between remote quantum computers.
    The Princeton study, published Aug. 30 in Nature, details the basis for a new approach to building quantum repeaters. It sends telecom-ready light emitted from a single ion implanted in a crystal. The effort was many years in the making, according to Jeff Thompson, the study’s principal author. The work combined advances in photonic design and materials science.
    Other leading quantum repeater designs emit light in the visible spectrum, which degrades quickly over optical fiber and must be converted before traveling long distances. The new device is based on a single rare earth ion implanted in a host crystal. And because this ion emits light at an ideal infrared wavelength, it requires no such signal conversion, which can lead to simpler and more robust networks.
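    That wavelength advantage is easy to quantify: fiber transmission falls off exponentially with distance, T = 10^(-αL/10), and attenuation α is roughly 0.2 dB/km in the telecom band versus several dB/km for visible light. A back-of-the-envelope comparison with assumed typical values, not numbers from the study:
        # Fiber transmission T = 10**(-alpha * L / 10), alpha in dB/km, L in km.
        # Assumed typical attenuations: ~0.2 dB/km in the telecom band versus
        # several dB/km for visible light; not figures from the paper.
        def transmission(alpha_db_per_km, length_km):
            return 10 ** (-alpha_db_per_km * length_km / 10)

        for alpha, label in ((0.2, "telecom ~1550 nm"), (7.0, "visible ~650 nm")):
            print(f"{label}: {transmission(alpha, 50):.1e} of photons survive 50 km")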
    The device has two parts: a calcium tungstate crystal doped with just a handful of erbium ions, and a nanoscopic piece of silicon etched into a J-shaped channel. Pulsed with a special laser, the ion emits light up through the crystal. But the silicon piece, a wisp of a semiconductor stuck onto the top of the crystal, catches and guides individual photons out into the fiber optic cable.
    Ideally, this photon would be encoded with information from the ion, Thompson said. Or more specifically, from a quantum property of the ion called spin. In a quantum repeater, collecting and interfering the signals from distant nodes would create entanglement between their spins, allowing end-to-end transmission of quantum states despite losses along the way.
    Thompson’s team first started working with erbium ions several years earlier, but the first versions used different host crystals that harbored too much noise. In particular, this noise caused the frequency of the emitted photons to jump around randomly in a process known as spectral diffusion. This prevented the delicate quantum interference that is necessary to operate quantum networks. To solve this problem, his lab started working with Nathalie de Leon, associate professor of electrical and computer engineering, and Robert Cava, a leading solid-state materials scientist and Princeton’s Russell Wellman Moore Professor of Chemistry, to explore new materials that could host single erbium ions with much less noise. More

  • Unveiling global warming’s impact on daily precipitation with deep learning

    A collaborative international research team led by Professor Yoo-Geun Ham from Chonnam National University and Professor Seung-Ki Min from Pohang University of Science and Technology (POSTECH) has made a discovery about the impact of global warming on global daily precipitation. Using a deep learning approach, they have unveiled a significant change in the characteristics of global daily precipitation for the first time. Their research findings were published on August 30 in the online version of Nature.
    The research team devised a deep learning model to quantify the relationship between the intensity of global warming and global daily precipitation patterns, then applied the model to data from satellite-based precipitation observations. The results revealed that since 2015, the daily precipitation pattern has clearly deviated from natural variability on more than 50% of days, under the influence of human-induced global warming.
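    As a schematic of the general setup, not the authors' architecture, a small convolutional network can be trained to regress a warming index from a daily precipitation map, which is the kind of relationship the team's model quantifies. A toy sketch on random data:
        # Toy CNN that maps a daily precipitation map to a warming index; a
        # schematic stand-in for the study's model, trained here on random data.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),  # local patterns
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 1),                               # warming index
        )

        x = torch.randn(16, 1, 36, 72)   # 16 fake daily precipitation maps
        y = torch.randn(16, 1)           # fake warming-intensity targets
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for step in range(5):            # a few illustrative training steps
            loss = nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"final toy loss: {loss.item():.3f}")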
    In contrast to conventional studies, which primarily focus on long-term trends in monthly or annual precipitation, the researchers employed explainable artificial intelligence to demonstrate that day-to-day variations in precipitation were gradually intensifying on weather timescales. These fluctuations in rainfall served as the most conspicuous indicators of global warming. The study further affirmed that the most evident changes in daily precipitation variability were observed over the sub-tropical East Pacific and mid-latitude storm-track regions.
    The researchers explained that traditional linear statistical methods used in previous climate change detection research had limitations in discerning non-linear responses such as the intensified variability in daily precipitation. Deep learning, however, overcame these limitations by employing non-linear activation functions. Moreover, while previous research methods primarily investigated global precipitation change patterns due to global warming, convolutional deep learning offered a distinct advantage in effectively detecting regional change patterns resulting from global warming.
    Professor Yoo-Geun Ham explained, “Intensification of day-to-day precipitation variability implies an increase in the frequency of extreme precipitation events as well as a higher occurrence of heatwaves during the summer due to extended dry spells.” Professor Seung-Ki Min added, “Given the ongoing trajectory of global warming, it is imperative to develop countermeasures as the consecutive occurrence of extreme precipitation and heatwaves are likely to become more frequent in the future.”
    This study was conducted with support from the Ministry of Environment and the National Research Foundation of Korea. More