More stories

  •

    A system to keep cloud-based gamers in sync

    Cloud gaming, which involves playing a video game remotely from the cloud, saw unprecedented growth during the lockdowns and gaming-hardware shortages at the height of the Covid-19 pandemic. Today, the burgeoning industry encompasses a $6 billion global market and more than 23 million players worldwide.
    However, interdevice synchronization remains a persistent problem in cloud gaming and the broader field of networking. In cloud gaming, video, audio, and haptic feedback are streamed from one central source to multiple devices, such as a player’s screen and controller, which typically operate on separate networks. These networks aren’t synchronized, leading to a lag between these two separate streams. A player might see something happen on the screen and then hear it on their controller a half second later.
    Inspired by this problem, scientists from MIT and Microsoft Research took a unique approach to synchronizing streams transmitted to two devices. Their system, called Ekho, adds inaudible white noise sequences to the game audio streamed from the cloud server. Then it listens for those sequences in the audio recorded by the player’s controller.
    Ekho uses the mismatch between these noise sequences to continuously measure and compensate for the interstream delay.
    In real cloud gaming sessions, the researchers showed that Ekho is highly reliable, keeping streams synchronized to within 10 milliseconds of each other most of the time. Other synchronization methods resulted in consistent delays of more than 50 milliseconds.
    And while Ekho was designed for cloud gaming, this technique could be used more broadly to synchronize media streams traveling to different devices, such as in training situations that utilize multiple augmented or virtual reality headsets.
    “Sometimes, all it takes for a good solution to come out is to think outside what has been defined for you. The entire community has been fixed on how to solve this problem by synchronizing through the network. Synchronizing two streams by listening to the audio in the room sounded crazy, but it turned out to be a very good solution,” says Pouya Hamadanian, an electrical engineering and computer science (EECS) graduate student and lead author of a paper describing Ekho. More
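The core trick, locating a known noise sequence inside recorded audio, can be sketched with a simple cross-correlation. This is an illustrative stand-in, not Ekho's actual algorithm; the marker length, sample rate, and delay below are invented for the example.

```python
import numpy as np

def estimate_offset_ms(marker, recording, sample_rate=48_000):
    """Locate a known white-noise marker inside a recorded audio
    stream via cross-correlation and return the offset in ms.
    (A simplified stand-in for Ekho's pseudo-noise matching.)"""
    corr = np.correlate(recording, marker, mode="valid")
    offset_samples = int(np.argmax(corr))       # peak = best alignment
    return offset_samples * 1000.0 / sample_rate

rng = np.random.default_rng(0)
marker = rng.standard_normal(480)               # 10 ms marker at 48 kHz
recording = 0.1 * rng.standard_normal(48_000)   # 1 s of background audio
recording[24_000:24_480] += marker              # marker embedded 500 ms in
print(estimate_offset_ms(marker, recording))    # 500.0
```

In a real system, an offset estimate like this would feed a controller that delays the faster stream until the two align.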

  •

    An ‘introspective’ AI finds diversity improves performance

    An artificial intelligence with the ability to look inward and fine-tune its own neural network performs better when it chooses diverse neurons over homogeneous ones, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.
    “We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”
    Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
    Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as it learns, but once the network is optimized, those static neurons are the network.
    Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.
    “Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.
    “Our AI could also decide between diverse or homogeneous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”
    The team tested the AI’s accuracy by asking it to perform a standard numerical classifying exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogeneous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy. More
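The idea of a network built from more than one neuron type can be illustrated with a toy layer whose neurons apply different activation functions. This is a hypothetical sketch, not the NAIL team's actual architecture; the activation set, weights, and inputs are invented.

```python
import numpy as np

# Candidate "neuron types": each is just a different activation function.
ACTIVATIONS = {
    "relu": lambda x: np.maximum(0.0, x),
    "tanh": np.tanh,
    "sine": np.sin,
}

def diverse_layer(x, weights, neuron_types):
    """Apply a linear map, then a per-neuron activation chosen from
    `neuron_types` (one name per output neuron). A homogeneous layer
    is the special case where every name is the same."""
    z = x @ weights
    return np.array([ACTIVATIONS[t](zi) for t, zi in zip(neuron_types, z)])

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 3))
x = rng.standard_normal(4)
homogeneous = diverse_layer(x, w, ["relu", "relu", "relu"])
diverse = diverse_layer(x, w, ["relu", "tanh", "sine"])
print(homogeneous.shape, diverse.shape)  # (3,) (3,)
```

A meta-learning loop in this spirit would treat the `neuron_types` list itself as something to optimize alongside the weights.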

  •

    A step closer to digitizing the sense of smell: Model describes odors better than human panelists

    A central question in neuroscience is how our senses translate light into sight, sound into hearing, food into taste, and texture into touch. Smell is where these sensory relationships get more complex and perplexing.
    To address this question, a research team co-led by the Monell Chemical Senses Center and Osmo, a Cambridge, Mass.-based start-up spun out of machine-learning research at Google Research (a team formerly known as Google Brain, now part of Google DeepMind), is investigating how airborne chemicals connect to odor perception in the brain. The team found that a machine-learning model has achieved human-level proficiency at describing, in words, how chemicals might smell. Their research appears in the September 1 issue of Science.
    “The model addresses age-old gaps in the scientific understanding of the sense of smell,” said senior co-author Joel Mainland, PhD, Monell Center Member. This collaboration moves the world closer to digitizing odors to be recorded and reproduced. It also may identify new odors for the fragrance and flavor industry that could not only decrease dependence on naturally sourced endangered plants, but also identify new functional scents for such uses as mosquito repellent or malodor masking.
    How our brains and noses work together
    Humans have about 400 functional olfactory receptors. These are proteins at the end of olfactory nerves that connect with airborne molecules to transmit an electrical signal to the olfactory bulb. That is far more receptor types than the four we use for color vision, or the roughly 40 we use for taste.
    “In olfaction research, however, the question of what physical properties make an airborne molecule smell the way it does to the brain has remained an enigma,” said Mainland. “But if a computer can discern the relationship between how molecules are shaped and how we ultimately perceive their odors, scientists could use that knowledge to advance the understanding of how our brains and noses work together.”
    To address this, Osmo CEO Alex Wiltschko, PhD and his team created a model that learned how to match the prose descriptions of a molecule’s odor with the odor’s molecular structure. The resulting map of these interactions is essentially groupings of similarly smelling odors, like floral sweet and candy sweet. “Computers have been able to digitize vision and hearing, but not smell — our deepest and oldest sense,” said Wiltschko. “This study proposes and validates a novel data-driven map of human olfaction, matching chemical structure to odor perception.”
    What is the smell of garlic or of ozone?
    The model was trained using an industry dataset that included the molecular structures and odor qualities of 5,000 known odorants. Data input is the shape of a molecule, and the output is a prediction of which odor words best describe its smell. More
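At its simplest, the structure-to-odor mapping is a function from molecular features to odor words. The sketch below uses made-up binary "fingerprints" and a nearest-neighbour lookup purely for illustration; the published model learns its own molecular representations rather than using anything this crude.

```python
import numpy as np

# Invented toy data: binary structural fingerprints and odor-word labels.
train_fp = np.array([
    [1, 0, 1, 0],   # molecule A
    [1, 1, 0, 0],   # molecule B
    [0, 0, 1, 1],   # molecule C
])
train_labels = [{"floral", "sweet"}, {"fruity", "sweet"}, {"garlic", "sulfurous"}]

def predict_odor(fingerprint):
    """Return the odor words of the most structurally similar
    training molecule (Hamming distance on the fingerprint)."""
    dists = np.abs(train_fp - fingerprint).sum(axis=1)
    return train_labels[int(np.argmin(dists))]

print(sorted(predict_odor(np.array([0, 0, 1, 1]))))  # ['garlic', 'sulfurous']
```

The key property this toy shares with the real map is that structurally similar molecules land on similar odor descriptions.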

  •

    Electrical noise stimulation applied to the brain could be key to boosting math learning

    Exciting a brain region using electrical noise stimulation can help improve mathematical learning in those who struggle with the subject, according to a new study from the Universities of Surrey and Oxford, Loughborough University, and Radboud University in The Netherlands.
    During this unique study, researchers investigated the impact of neurostimulation on learning. Despite the growing interest in this non-invasive technique, little is known about the neurophysiological changes induced and the effect it has on learning.
    Researchers found that electrical noise stimulation over the frontal part of the brain improved the mathematical ability of people whose brains showed lower excitation in response to mathematics before stimulation was applied. No improvement in mathematical scores was identified in those who showed a high level of brain excitation during the initial assessment, or in the placebo groups. Researchers believe that electrical noise stimulation acts on sodium channels in the brain, interfering with the cell membrane of the neurons and so increasing cortical excitability.
    Professor Roi Cohen Kadosh, Professor of Cognitive Neuroscience and Head of the School of Psychology at the University of Surrey who led this project, said:
    “Learning is key to everything we do in life — from developing new skills, such as driving a car, to learning how to code. Our brains are constantly absorbing and acquiring new knowledge.
    “Previously, we have shown that a person’s ability to learn is associated with neuronal excitation in their brains. What we wanted to discover in this case is if our novel stimulation protocol could boost, in other words excite, this activity and improve mathematical skills.”
    For the study, 102 participants were recruited, and their mathematical skills were assessed through a series of multiplication problems. Participants were then split into four groups: a learning group exposed to high-frequency random electrical noise stimulation; an overlearning group, in which participants practised the multiplication problems beyond the point of mastery while receiving the same stimulation; and two further learning and overlearning groups exposed to a sham (i.e., placebo) condition, an experience akin to real stimulation but without significant electrical current. EEG recordings were taken at the beginning and at the end of the stimulation to measure brain activity. More

  •

    New AI technology gives robot recognition skills a big lift

    A robot moves a toy package of butter around a table in the Intelligent Robotics and Vision Lab at The University of Texas at Dallas. With every push, the robot is learning to recognize the object through a new system developed by a team of UT Dallas computer scientists.
    The new system allows the robot to push objects multiple times until a sequence of images is collected, which in turn enables the system to segment all of the objects in the sequence until the robot recognizes them. Previous approaches have relied on a single push or grasp by the robot to “learn” the object.
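One plausible way to link object masks across such an image sequence is to treat masks that overlap strongly between consecutive frames as the same object. The intersection-over-union criterion below is an illustrative choice, not necessarily the paper's actual matching rule.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def same_object(mask_a, mask_b, threshold=0.5):
    """Treat two masks from consecutive frames as views of the same
    object when their overlap exceeds a threshold (illustrative only)."""
    return iou(mask_a, mask_b) >= threshold

# Two frames of a small object that has been pushed one row down.
frame1 = np.zeros((8, 8), dtype=bool); frame1[2:5, 2:5] = True
frame2 = np.zeros((8, 8), dtype=bool); frame2[3:6, 2:5] = True
print(same_object(frame1, frame2))  # True
```

Chaining such matches across every push in the sequence yields per-object image sets the robot can learn to recognize from.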
    The team presented its research paper at the Robotics: Science and Systems conference July 10-14 in Daegu, South Korea. Papers for the conference are selected for their novelty, technical quality, significance, potential impact and clarity.
    The day when robots can cook dinner, clear the kitchen table and empty the dishwasher is still a long way off. But the research group has made a significant advance with its robotic system that uses artificial intelligence to help robots better identify and remember objects, said Dr. Yu Xiang, senior author of the paper.
    “If you ask a robot to pick up the mug or bring you a bottle of water, the robot needs to recognize those objects,” said Xiang, assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.
    The UTD researchers’ technology is designed to help robots detect a wide variety of objects found in environments such as homes and to generalize, or identify, similar versions of common items such as water bottles that come in varied brands, shapes or sizes.
    Inside Xiang’s lab is a storage bin full of toy packages of common foods, such as spaghetti, ketchup and carrots, which are used to train the lab robot, named Ramp. Ramp is a Fetch Robotics mobile manipulator robot that stands about 4 feet tall on a round mobile platform. Ramp has a long mechanical arm with seven joints. At the end is a square “hand” with two fingers to grasp objects. More

  •

    AI helps ID cancer risk factors

    A novel study from the University of South Australia has identified a range of metabolic biomarkers that could help predict the risk of cancer.
    Deploying machine learning to examine data from 459,169 participants in the UK Biobank, the study identified 84 features that could signal increased cancer risk.
    Several markers also signalled chronic kidney or liver disease, highlighting the significance of exploring the underlying pathogenic mechanisms of these diseases for their potential connections with cancer.
    The study, “Hypothesis-free discovery of novel cancer predictors using machine learning,” was conducted by UniSA researchers: Dr Iqbal Madakkatel, Dr Amanda Lumsden, Dr Anwar Mulugeta, and Professor Elina Hyppönen, with University of Adelaide’s Professor Ian Olver.
    “We conducted a hypothesis-free analysis using artificial intelligence and statistical approaches to identify cancer risk factors among more than 2800 features,” Dr Madakkatel says.
    “More than 40% of the features identified by the model were found to be biomarkers — biological molecules that can signal health or unhealthy conditions depending on their status — and several of these were jointly linked to cancer risk and kidney or liver disease.”
    Dr Amanda Lumsden says this study provides important information on mechanisms which may contribute to cancer risk. More
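A hypothesis-free screen like this boils down to scoring every candidate feature by how strongly it predicts the outcome, with no features nominated in advance. The sketch below uses a simple correlation ranking on synthetic data as a crude stand-in for the study's machine-learning pipeline; the data and feature counts are invented.

```python
import numpy as np

def rank_features(X, y, top_k=5):
    """Rank candidate features by absolute correlation with a binary
    outcome: a toy stand-in for a hypothesis-free screen over
    thousands of features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc**2).sum(axis=0)) * np.sqrt((yc**2).sum()) + 1e-12
    )
    return np.argsort(-np.abs(corr))[:top_k]

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 100))     # 500 "participants", 100 features
# Outcome driven mainly by feature 7 plus noise.
y = (X[:, 7] + 0.3 * rng.standard_normal(500) > 0).astype(float)
print(rank_features(X, y, top_k=3))     # feature 7 should rank first
```

The real study layered statistical validation on top of the machine-learning ranking before treating any feature as a candidate biomarker.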

  •

    Is digital media use a risk factor for psychosis in young adults?

    On average, young adults in Canada spend several hours on their smartphones every day. Many jump from TikTok to Netflix to Instagram, putting their phone down only to pick up a video game controller. A growing body of research is looking into the potential dangers of digital media overuse, as well as potential benefits of moderate digital media use, from a mental health standpoint.
    A recent McGill University study of 425 Quebecers between the ages of 18 and 25 has found that young adults who have more frequent psychotic experiences also tend to spend more time using digital media. Interestingly, the study, which surveyed the participants over a period of six months, also found that spending more time on digital media did not seem to cause any change in the frequency of psychotic experiences over time, said lead author and psychiatry resident at McGill, Vincent Paquin.
    By “psychotic experiences,” the researchers refer to a range of unusual thoughts and perceptions, such as the belief of being in danger and the experience of hearing and seeing things that other people cannot see or hear. These experiences are relatively common, affecting about 5% of young adults.
    “Our findings are reassuring because they do not show evidence that digital media can cause or exacerbate psychotic experiences in young people,” said Paquin. “It is important to keep in mind that each person is different. In some situations, digital media may be highly beneficial for a person’s well-being, and in other cases, these technologies may cause unintended harms.”
    Accessing mental health services through digital media
    The researchers hope their findings will help improve mental health services for young people. By better understanding the types of digital contents and activities that matter to young people, mental health services can be made more accessible and better aligned with individual needs, they say.
    “It is important for young people, their families, and for clinicians and policymakers to have scientific evidence on the risks and benefits of digital media for mental health,” Paquin said. “Considering that young adults with more psychotic experiences may prefer digital technologies, we can use digital platforms to increase their access to accurate mental health information and to appropriate services.”
    About the study
    “Associations between digital media use and psychotic experiences in young adults of Quebec, Canada: a longitudinal study” by Vincent Paquin et al., was published in Social Psychiatry and Psychiatric Epidemiology. More

  •

    Breathe! The shape-shifting ball that supports mental health

    A soft ball that ‘personifies’ breath, expanding and contracting in time with a person’s inhalations and exhalations, has been invented by a PhD student at the University of Bath in the UK. The ball is designed to support mental health, giving users a tangible representation of their breath to keep them focused and to help them regulate their emotions.
    Alexz Farrall, the student in the Department of Computer Science who invented the device, said: “By giving breath physical form, the ball enhances self-awareness and engagement, fostering positive mental health outcomes.”
    Generally, breathing is an ignored activity, yet when done deeply and with focus, it’s known to alleviate anxiety and foster wellbeing. Measured breathing is highly rated by mental health practitioners both for its ability to lower the temperature in emotionally charged situations and to increase a person’s receptivity to more demanding mental-health interventions.
    Disciplines that frequently include mindful breathing include Cognitive Behavioural Therapy (CBT), Mindfulness-Based Stress Reduction (MBSR), Dialectical Behaviour Therapy (DBT) and trauma-focused therapies.
    Most people, however, struggle to sustain attention on their breathing. Once disengaged from the process, they are likely to return to thinking mode and be less receptive to mental-health interventions that require concentration.
    “I hope this device will be part of the solution for many people with problems relating to their mental wellbeing,” said Mr Farrall.
    Focus lowers anxiety
    Recent research led by Mr Farrall shows a significant improvement in people’s ability to focus on their breathing when they use his shape-shifting ball. With their attention heightened, study participants were then able to pay closer attention to a guided audio recording from a meditation app. More