More stories

  •

    Is digital media use a risk factor for psychosis in young adults?

    On average, young adults in Canada spend several hours on their smartphones every day. Many jump from TikTok to Netflix to Instagram, putting their phone down only to pick up a video game controller. A growing body of research is looking into the potential dangers of digital media overuse, as well as potential benefits of moderate digital media use, from a mental health standpoint.
    A recent McGill University study of 425 Quebecers between the ages of 18 and 25 has found that young adults who have more frequent psychotic experiences also tend to spend more time using digital media. Interestingly, the study, which surveyed the participants over a period of six months, also found that spending more time on digital media did not seem to cause any change in the frequency of psychotic experiences over time, said lead author and psychiatry resident at McGill, Vincent Paquin.
    By “psychotic experiences,” the researchers refer to a range of unusual thoughts and perceptions, such as believing oneself to be in danger, or hearing and seeing things that other people cannot. These experiences are relatively common, affecting about 5% of young adults.
    “Our findings are reassuring because they do not show evidence that digital media can cause or exacerbate psychotic experiences in young people,” said Paquin. “It is important to keep in mind that each person is different. In some situations, digital media may be highly beneficial for a person’s well-being, and in other cases, these technologies may cause unintended harms.”
    Accessing mental health services through digital media
    The researchers hope their findings will help improve mental health services for young people. By better understanding the types of digital content and activities that matter to young people, mental health services can be made more accessible and better aligned with individual needs, they say.
    “It is important for young people, their families, and for clinicians and policymakers to have scientific evidence on the risks and benefits of digital media for mental health,” Paquin said. “Considering that young adults with more psychotic experiences may prefer digital technologies, we can use digital platforms to increase their access to accurate mental health information and to appropriate services.”
    About the study
    “Associations between digital media use and psychotic experiences in young adults of Quebec, Canada: a longitudinal study” by Vincent Paquin et al., was published in Social Psychiatry and Psychiatric Epidemiology.

  •

    Breathe! The shape-shifting ball that supports mental health

    A soft ball that ‘personifies’ breath, expanding and contracting in synchronicity with a person’s inhalations and exhalations, has been invented by a PhD student at the University of Bath in the UK. The ball is designed to support mental health, giving users a tangible representation of their breath to keep them focused and to help them regulate their emotions.
    Alexz Farrall, the student in the Department of Computer Science who invented the device, said: “By giving breath physical form, the ball enhances self-awareness and engagement, fostering positive mental health outcomes.”
    Breathing generally goes unnoticed, yet when done deeply and with focus, it is known to alleviate anxiety and foster wellbeing. Measured breathing is highly rated by mental health practitioners, both for its ability to lower the temperature in emotionally charged situations and for its ability to increase a person’s receptivity to more demanding mental-health interventions.
    Disciplines that frequently include mindful breathing include Cognitive Behavioural Therapy (CBT), Mindfulness-Based Stress Reduction (MBSR), Dialectical Behaviour Therapy (DBT) and trauma-focused therapies.
    Most people, however, struggle to sustain attention on their breathing. Once disengaged from the process, they are likely to return to thinking mode and be less receptive to mental-health interventions that require concentration.
    “I hope this device will be part of the solution for many people with problems relating to their mental wellbeing,” said Mr Farrall.
    Focus lowers anxiety
    Recent research led by Mr Farrall shows a significant improvement in people’s ability to focus on their breathing when they use his shape-shifting ball. With their attention heightened, study participants were then able to pay closer attention to a guided audio recording from a meditation app.

  •

    Analog and digital: The best of both worlds in one energy-efficient system

    We live in an analog world of continuous information flow that is both processed and stored by our brains at the same time, but our devices process information digitally, in the form of discrete binary code that breaks information into little bits and bytes. Researchers at EPFL have revealed a pioneering technology that combines the potential of continuous analog processing with the precision of digital devices. By seamlessly integrating ultra-thin, two-dimensional semiconductors with ferroelectric materials, the research, published in Nature Electronics, unveils a novel way to improve energy efficiency and add new functionalities in computing. The new configuration merges traditional digital logic with brain-like analog operations.
    Faster and more efficient electronics
    The innovation from the Nanoelectronics Device Laboratory (Nanolab), in collaboration with Microsystems Laboratory, revolves around a unique combination of materials leading to brain-inspired functions and advanced electronic switches, including the standout negative capacitance Tunnel Field-Effect Transistor (TFET). In the world of electronics, a transistor or “switch” can be likened to a light switch, determining whether current flows (on) or doesn’t (off). These are the famous 1s and 0s of binary computer language, and this simple action of turning on and off is integral to nearly every function of our electronic devices, from processing information to storing memory. The TFET is a special type of switch designed with an energy-conscious future in mind. Unlike conventional transistors that require a certain minimum voltage to turn on, TFETs can operate at significantly lower voltages. This optimized design means they consume considerably less energy when switching, thus significantly reducing the overall power consumption of devices they are integrated into.
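    The “certain minimum voltage” of a conventional transistor has a well-known quantitative form: the so-called Boltzmann limit of roughly 60 mV of gate voltage per decade of current at room temperature, a limit that tunneling-based switches like the TFET are not bound by. As a quick back-of-the-envelope check (standard device physics, not a calculation from the paper):

```python
import math

# The Boltzmann ("thermionic") limit on a conventional transistor's
# subthreshold swing at room temperature: at least ~60 mV of gate voltage
# per decade of drain current. Tunnel FETs are attractive precisely because
# band-to-band tunneling is not subject to this limit.
k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

swing_limit = (k * T / q) * math.log(10)  # volts per decade of current
print(f"{swing_limit * 1000:.1f} mV/decade")
```

    Operating below this swing is what lets a switch turn fully on at a much lower supply voltage, which is where the energy savings described above come from.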
    According to Professor Adrian Ionescu, head of Nanolab, “Our endeavors represent a significant leap forward in the domain of electronics, shattering previous performance benchmarks, as exemplified by the outstanding capabilities of the negative-capacitance tungsten diselenide/tin diselenide TFET and the possibility to create synaptic neuron function within the same technology.”
    Sadegh Kamaei, a PhD candidate at EPFL, has harnessed the potential of 2D semiconductors and ferroelectric materials within a fully co-integrated electronic system for the first time. The 2D semiconductors can be used for ultra-efficient digital processors, whereas the ferroelectric material makes it possible to process and store information continuously at the same time. Combining the two materials creates the opportunity to harness the best of the digital and analog capacities of each. Now the light switch from our above analogy is not only more energy efficient, but the light it turns on can burn even brighter. Kamaei added, “Working with 2D semiconductors and integrating them with ferroelectric materials has been challenging yet immensely rewarding. The potential applications of our findings could redefine how we view and interact with electronic devices in the future.”
    Blending traditional logic with neuromorphic circuits
    Furthermore, the research delves into creating switches similar to biological synapses — the intricate connectors between brain cells — for neuromorphic computing. “The research marks the first-ever co-integration of von Neumann logic circuits and neuromorphic functionalities, charting an exciting course toward the creation of innovative computing architectures characterized by exceptionally low power consumption and hitherto unexplored capabilities of building neuromorphic functions combined with digital information processing,” adds Ionescu.
    Such advances hint at electronic devices that operate in ways parallel to the human brain, marrying computational speed with processing information in a way that is more in line with human cognition. For instance, neuromorphic systems might excel at tasks that traditional computers struggle with, such as pattern recognition, sensory data processing, or even certain types of learning. This blend of traditional logic with neuromorphic circuits indicates a transformative change with far-reaching implications. The future may well see devices that are not just smarter and faster but exponentially more energy-efficient.

  •

    AI-enabled soft robotic implant monitors scar tissue to self-adapt for personalized drug treatment

    Research teams at University of Galway and Massachusetts Institute of Technology (MIT) have detailed a breakthrough in medical device technology that could lead to intelligent, long-lasting, tailored treatment for patients thanks to soft robotics and artificial intelligence.
    The transatlantic partnership has created a smart implantable device that can administer a drug while also sensing when it is beginning to be rejected, using AI to change the shape of the device to maintain drug dosage and simultaneously bypass scar-tissue build-up.
    The study was published in the journal Science Robotics.
    Implantable medical device technologies offer promise to unlock advanced therapeutic interventions in healthcare, such as insulin release to treat diabetes, but a major issue holding back such devices is the patient’s reaction to a foreign body.
    Dr Rachel Beatty, University of Galway, and co-lead author on the study, explained: “The technology which we have developed, by using soft robotics, advances the potential of implantable devices to be in a patient’s body for extended periods, providing long-lasting therapeutic action. Imagine a therapeutic implant that can also sense its environment and respond as needed using AI — this approach could generate revolutionary changes in implantable drug delivery for a range of chronic diseases.”
    The University of Galway-MIT research team originally developed first-generation flexible devices, known as soft robotic implants, to improve drug delivery and reduce fibrosis. Despite that success, the team regarded the technology as one-size-fits-all: it did not account for how individual patients react and respond differently, or for the progressive nature of fibrosis, where scar tissue builds around the device, encapsulating it, impeding and blocking its purpose, and eventually forcing it to fail.
    The latest research, published today in Science Robotics, demonstrates how they have significantly advanced the technology — using AI — making it responsive to the implant environment with the potential to be longer lasting by defending against the body’s natural urge to reject a foreign body.

  •

    A simpler way to connect quantum computers

    Researchers have devised a new way to connect quantum devices over long distances, a necessary step toward allowing the technology to play a role in future communications systems.
    While today’s classical data signals can get amplified across a city or an ocean, quantum signals cannot. They must be repeated at intervals: stopped, copied and passed on by specialized machines called quantum repeaters. Many experts believe these quantum repeaters will play a key role in future communication networks, allowing enhanced security and enabling connections between remote quantum computers.
    The Princeton study, published Aug. 30 in Nature, details the basis for a new approach to building quantum repeaters. It sends telecom-ready light emitted from a single ion implanted in a crystal. The effort was many years in the making, according to Jeff Thompson, the study’s principal author. The work combined advances in photonic design and materials science.
    Other leading quantum repeater designs emit light in the visible spectrum, which degrades quickly over optical fiber and must be converted before traveling long distances. The new device is based on a single rare earth ion implanted in a host crystal. And because this ion emits light at an ideal infrared wavelength, it requires no such signal conversion, which can lead to simpler and more robust networks.
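    The advantage of emitting directly in the telecom band can be made concrete with a rough loss calculation (the numbers are illustrative: about 0.2 dB/km is typical of silica fiber near 1550 nm, while the visible-band figure of 5 dB/km is an assumption chosen for comparison, not taken from the study):

```python
# Photon survival probability over optical fiber falls off exponentially
# with distance; attenuation is quoted in dB per kilometre. Illustrative
# comparison of a telecom-band emitter (~0.2 dB/km) with a visible-band
# emitter (assumed 5 dB/km for this sketch).
def survival(attenuation_db_per_km: float, km: float) -> float:
    """Fraction of photons surviving a fiber run of the given length."""
    return 10 ** (-attenuation_db_per_km * km / 10)

for km in (10, 50, 100):
    telecom = survival(0.2, km)
    visible = survival(5.0, km)
    print(f"{km:>3} km   telecom: {telecom:8.2%}   visible: {visible:.2e}")
```

    At 100 km the telecom-band photon still survives about 1% of the time, while the visible-band photon is effectively extinguished, which is why avoiding wavelength conversion simplifies the network.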
    The device has two parts: a calcium tungstate crystal doped with just a handful of erbium ions, and a nanoscopic piece of silicon etched into a J-shaped channel. Pulsed with a special laser, the ion emits light up through the crystal. But the silicon piece, a wisp of a semiconductor stuck onto the top of the crystal, catches and guides individual photons out into the fiber optic cable.
    Ideally, this photon would be encoded with information from the ion, Thompson said. Or more specifically, from a quantum property of the ion called spin. In a quantum repeater, collecting and interfering the signals from distant nodes would create entanglement between their spins, allowing end-to-end transmission of quantum states despite losses along the way.
    Thompson’s team first started working with erbium ions several years earlier, but the first versions used different crystals that harbored too much noise. In particular, this noise caused the frequency of the emitted photons to jump around randomly in a process known as spectral diffusion. This prevented the delicate quantum interference that is necessary to operate quantum networks. To solve this problem, his lab started working with Nathalie de Leon, associate professor of electrical and computer engineering, and Robert Cava, a leading solid-state materials scientist and Princeton’s Russell Wellman Moore Professor of Chemistry, to explore new materials that could host single erbium ions with much less noise.

  •

    Unveiling global warming’s impact on daily precipitation with deep learning

    A collaborative international research team led by Professor Yoo-Geun Ham from Chonnam National University and Professor Seung-Ki Min from Pohang University of Science and Technology (POSTECH) has made a discovery on the impact of global warming on global daily precipitation. Using a deep learning approach, they have unveiled a significant change in the characteristics of global daily precipitation for the first time. Their research findings were published on August 30 in the online version of Nature.
    The research team devised a deep learning model to quantify the relationship between the intensity of global warming and global daily precipitation patterns. They then applied this model to data obtained from satellite-based precipitation observations. The results revealed that on more than 50% of all days, there was a clear deviation from natural variability in the daily precipitation pattern since 2015, influenced by human-induced global warming.
    In contrast to conventional studies, which primarily focus on long-term trends in monthly or annual precipitation, the researchers employed explainable artificial intelligence to demonstrate that changes in daily precipitation variations were gradually intensifying on weather timescales. These fluctuations in rainfall at the weather timescale served as the most conspicuous indicators of global warming. The study further affirmed that the most evident changes in daily precipitation variability were observed over the sub-tropical East Pacific and mid-latitude storm-track regions.
    The researchers explained that traditional linear statistical methods used in previous climate change detection research had limitations in discerning non-linear reactions such as the intensified variability in daily precipitation. Deep learning, however, overcame these limitations by employing non-linear activation functions. Moreover, while previous research methods primarily investigated global precipitation change patterns due to global warming, convolutional deep learning offered a distinct advantage in effectively detecting regional change patterns resulting from global warming.
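    The limitation of linear statistics can be illustrated with a toy example (this is not the study's model): two rainfall series with identical means but different day-to-day variability are indistinguishable to a purely linear statistic, while a simple non-linear feature of the kind a network's activations can learn separates them immediately.

```python
import numpy as np

# Toy illustration: a linear detector cannot separate two daily-rainfall
# series that share the same mean but differ in day-to-day variability;
# a non-linear statistic (here the squared anomaly, i.e. variance) can.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=3.0, scale=1.0, size=5000)  # mm/day, stable climate
warmed   = rng.normal(loc=3.0, scale=2.0, size=5000)  # same mean, wider swings

# Linear statistic: the means are (nearly) identical -> no detectable signal.
linear_gap = abs(baseline.mean() - warmed.mean())

# Non-linear statistic: the variances differ clearly -> strong signal.
nonlinear_gap = abs(np.var(baseline) - np.var(warmed))

print(f"mean difference:     {linear_gap:.3f}")
print(f"variance difference: {nonlinear_gap:.3f}")
```

    A deep network composes many such non-linear transformations, which is what lets it pick up intensified variability that a linear trend analysis misses.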
    Professor Yoo-Geun Ham explained, “Intensification of day-to-day precipitation variability implies an increase in the frequency of extreme precipitation events as well as a higher occurrence of heatwaves during the summer due to extended dry spells.” Professor Seung-Ki Min added, “Given the ongoing trajectory of global warming, it is imperative to develop countermeasures as the consecutive occurrence of extreme precipitation and heatwaves are likely to become more frequent in the future.”
    This study was conducted with support from the Ministry of Environment and the National Research Foundation of Korea.

  •

    Challenge accepted: High-speed AI drone overtakes world-champion drone racers

    Remember when IBM’s Deep Blue won against Garry Kasparov at chess in 1997, or Google’s AlphaGo crushed the top champion Lee Sedol at Go, a much more complex game, in 2016? These competitions where machines prevailed over human champions are key milestones in the history of artificial intelligence. Now a group of researchers from the University of Zurich and Intel has set a new milestone with the first autonomous system capable of beating human champions at a physical sport: drone racing.
    The AI system, called Swift, won multiple races against three world-class champions in first-person view (FPV) drone racing, where pilots fly quadcopters at speeds exceeding 100 km/h, controlling them remotely while wearing a headset linked to an onboard camera.
    Learning by interacting with the physical world
    “Physical sports are more challenging for AI because they are less predictable than board or video games. We don’t have a perfect knowledge of the drone and environment models, so the AI needs to learn them by interacting with the physical world,” says Davide Scaramuzza, head of the Robotics and Perception Group at the University of Zurich — and newly minted drone racing team captain.
    Until very recently, autonomous drones took twice as long as those piloted by humans to fly through a racetrack, unless they relied on an external position-tracking system to precisely control their trajectories. Swift, however, reacts in real time to the data collected by an onboard camera, like the one used by human racers. Its integrated inertial measurement unit measures acceleration and speed while an artificial neural network uses data from the camera to localize the drone in space and detect the gates along the racetrack. This information is fed to a control unit, also based on a deep neural network that chooses the best action to finish the circuit as fast as possible.
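    A heavily simplified sketch of such a two-stage pipeline (all function names, feature choices and weights below are illustrative placeholders, not Swift's actual networks) might look like:

```python
import numpy as np

def perceive(frame: np.ndarray, imu: np.ndarray) -> np.ndarray:
    """Stand-in for the perception network: fuse camera and IMU data
    into a state vector. A real system would estimate pose and gate
    positions; here we just build toy features."""
    return np.concatenate([frame.mean(axis=(0, 1)), imu])

def control(state: np.ndarray) -> np.ndarray:
    """Stand-in for the control network: map the state vector to four
    rotor commands, bounded by tanh (toy fixed weights)."""
    w = np.ones((state.size, 4)) * 0.01
    return np.tanh(state @ w)

frame = np.zeros((60, 80, 3))                    # downsampled camera frame (toy)
imu = np.array([0.0, 0.0, 9.8, 0.0, 0.0, 0.0])   # accel + gyro readings (toy)
thrusts = control(perceive(frame, imu))
print(thrusts.shape)                             # four rotor commands per cycle
```

    The essential design point carried over from the article is the split: one network turns raw onboard sensing into a state estimate, and a second network turns that estimate into actions, with the loop running in real time on each camera frame.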
    Training in an optimised simulation environment
    Swift was trained in a simulated environment where it taught itself to fly by trial and error, using a type of machine learning called reinforcement learning. The use of simulation helped avoid destroying multiple drones in the early stages of learning when the system often crashes. “To make sure that the consequences of actions in the simulator were as close as possible to the ones in the real world, we designed a method to optimize the simulator with real data,” says Elia Kaufmann, first author of the paper. In this phase, the drone flew autonomously thanks to very precise positions provided by an external position-tracking system, while also recording data from its camera. This way it learned to autocorrect errors it made interpreting data from the onboard sensors.
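    The trial-and-error loop can be caricatured in a few lines (a deliberately minimal sketch: real reinforcement learning for drone racing uses policy-gradient methods over rich state and action spaces, nothing like this scalar toy):

```python
import numpy as np

# Minimal sketch of learning by trial and error in a simulator: a scalar
# "throttle" policy is perturbed, each candidate is evaluated by simulated
# lap reward, and only improvements are kept. Crashing (throttle too high)
# only destroys simulated drones, which is the point of training in sim.
rng = np.random.default_rng(42)

def simulate_lap(throttle: float) -> float:
    """Toy simulator: reward grows with speed up to 0.7, beyond which
    the drone crashes and earns nothing."""
    if throttle <= 0 or throttle > 0.7:
        return 0.0           # stalled or crashed
    return throttle          # faster laps earn more reward

throttle = 0.1               # initial cautious policy
best_reward = simulate_lap(throttle)
for _ in range(500):         # trial-and-error loop
    candidate = throttle + rng.normal(scale=0.05)
    reward = simulate_lap(candidate)
    if reward > best_reward:  # keep only improvements
        throttle, best_reward = candidate, reward

print(f"learned throttle: {throttle:.2f}, reward: {best_reward:.2f}")
```

    The policy converges toward the fastest non-crashing setting; the simulator-optimization step Kaufmann describes is about making rewards computed this way match what the real drone would experience.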

  •

    Surpassing the human eye: Machine learning image analysis rapidly determines chemical mixture composition

    Machine learning model provides quick method for determining the composition of solid chemical mixtures using only photographs of the sample.
    Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Due to their similar appearance, it’s an easy mistake to make. Chemists likewise rely on naked-eye checks for quick, initial assessments of reactions; however, just as in the kitchen, the human eye has its limitations and can be unreliable.
    To address this, researchers at the Institute of Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University, led by Professor Yasuhide Inokuma, have developed a machine learning model that can distinguish the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.
    The model was designed and developed using mixtures of sugar and salt as a test case. The team employed a combination of random cropping, flipping and rotating of the original photographs in order to create a larger number of sub images for training and testing. This enabled the model to be developed using only 300 original images for training. The trained model was roughly twice as accurate as the naked eye of even the most expert member of the team.
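    The augmentation strategy of random cropping, flipping and rotating to multiply a small image set can be sketched as follows (sizes, counts and function names are illustrative, not the study's code):

```python
import numpy as np

# Sketch of image augmentation: random crops, flips and 90-degree
# rotations turn one original photo into many training sub-images,
# which is how a model can be trained from only ~300 originals.
rng = np.random.default_rng(7)

def augment(image: np.ndarray, n_subimages: int, crop: int) -> list[np.ndarray]:
    """Generate n_subimages randomly cropped, flipped and rotated views."""
    h, w = image.shape[:2]
    views = []
    for _ in range(n_subimages):
        top = rng.integers(0, h - crop + 1)      # random crop position
        left = rng.integers(0, w - crop + 1)
        view = image[top:top + crop, left:left + crop].copy()
        if rng.random() < 0.5:
            view = np.fliplr(view)               # horizontal flip
        view = np.rot90(view, k=rng.integers(4)) # 0/90/180/270 rotation
        views.append(view)
    return views

photo = rng.random((256, 256, 3))                # stand-in for one sample photo
subimages = augment(photo, n_subimages=20, crop=128)
print(len(subimages), subimages[0].shape)
```

    Because crops, flips and right-angle rotations do not change a mixture's composition, every sub-image keeps the original photo's label, multiplying the training data essentially for free.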
    “I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”
    After the successful test case, researchers applied this model to the evaluation of different chemical mixtures. The model successfully distinguished different polymorphs and enantiomers, both of which are extremely similar versions of the same molecule with subtle differences in atomic or molecular arrangement. Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.
    The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. Reaction yield was also analyzed, determining the progress of a thermal decarboxylation reaction.
    The team further demonstrated the versatility of their model, showing that it could accurately analyze images taken with a mobile phone, after supplemental training was performed. The researchers anticipate a wide variety of applications, both in the research lab and in industry.
    “We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide. “Additionally, this could act as an observation tool for those who have impaired vision.”