More stories

  • Rollercoaster of emotions: Exploring emotions with virtual reality

    To the left and right, the landscape drifts idly by; the track stretches out ahead. Suddenly, a fire. The tension builds. The ride reaches its highest point. Only one thing lies ahead: the abyss, a plummet into the depths of the earth. These are scenes from a rollercoaster ride as experienced by participants in a recent study at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig. They experienced the ride not in real life, but virtually, with the help of a virtual reality (VR) headset. The aim of the research was to find out what happens in participants’ brains while they experience emotionally engaging situations.
    Until now, researchers investigating how the human brain processes emotions have relied on highly simplified experiments: participants were shown photos of emotional scenes while their brain activity was recorded. These studies took place under controlled laboratory conditions so that the results could be easily compared. However, the simulated situations were usually not particularly emotionally arousing and were far removed from everyday experience. This matters because emotions are continuously created through an interplay of past experiences and the various external influences with which we interact. For studying emotions, it is therefore particularly important to create situations that are perceived to be as real as possible. Only then can we assume that the measured brain activation comes close to what occurs in real life outside the laboratory. VR headsets provide a remedy here: through them, participants can immerse themselves dynamically and interactively in situations and experience them almost as if they were real. Emotions are thus evoked in a more natural way.
    The results of the current study showed that the degree to which a person is emotionally aroused can be read from a specific form of rhythmic brain activity, so-called alpha oscillations: the weaker this oscillation in the measured EEG signal, the higher the arousal. “The findings thus confirm earlier investigations from classical experiments and prove that these signals also occur under conditions closer to everyday life,” says Simon M. Hofmann, one of the authors of the underlying study, which has now appeared in the scientific journal eLife. “Using alpha oscillations, we were able to predict how strongly a person experiences a situation emotionally. Our models learned which brain areas are particularly important for this prediction. Roughly speaking, the less alpha activity measured there, the more aroused the person is,” explains author Felix Klotzsche.
    “In the future, it could be possible to apply these findings and methods in practical applications beyond basic research,” adds author Alberto Mariola. VR headsets, for example, are increasingly being used in psychological therapy. Neurophysiological information about the emotional state of patients could lead to improved treatment: therapists could, for instance, gain direct insight into a patient’s current emotional state during an exposure session without having to ask and thus interrupt the situation.
    The scientists investigated these relationships with the help of electroencephalography (EEG), which allowed them to record the participants’ brain waves during the virtual rollercoaster ride and thus determine what happens in the brain as the ride unfolds. Afterwards, the subjects watched a video recording of the ride and rated how emotionally aroused they had been over its course. In this way, the researchers wanted to find out whether the subjective sensations during the ride correlate with the measured brain activity. Because people differ in how much they enjoy rollercoasters, it was irrelevant whether the situation was perceived as positive or negative; what mattered was the strength of the sensation.
    For the evaluation, the researchers used three different machine learning models to predict the subjective sensations as accurately as possible from the EEG data. They thereby showed that, with the help of these approaches, the connection between EEG signals and emotional experience can be confirmed under naturalistic conditions as well.
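    A hedged illustration may help here. The release does not include the analysis code, but the relationship it describes (lower alpha-band power, higher reported arousal) is easy to sketch. The Python below is a minimal sketch, not the study’s pipeline: the sampling rate, channel count, toy data, and the single ridge regressor (standing in for the paper’s three models) are all assumptions.

    ```python
    # Minimal sketch (not the study's actual pipeline): predict continuous
    # arousal ratings from per-channel alpha-band (8-13 Hz) EEG power.
    # Sampling rate, channel count, and toy data are assumptions.
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    FS = 250                 # sampling rate in Hz (assumed)
    ALPHA = (8.0, 13.0)      # alpha band in Hz

    def alpha_power(epochs):
        """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels).
        Log alpha-band power per channel via Welch's method."""
        freqs, psd = welch(epochs, fs=FS, nperseg=FS * 2, axis=-1)
        band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
        return np.log(psd[..., band].mean(axis=-1))

    # Toy stand-ins for 2-second EEG epochs and the retrospective ratings.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((300, 30, FS * 2))   # 300 epochs, 30 channels
    ratings = rng.uniform(-1, 1, size=300)         # continuous arousal ratings

    X = alpha_power(eeg)                           # features: log alpha power
    model = Ridge(alpha=1.0)                       # linear read-out
    print(cross_val_score(model, X, ratings, cv=5, scoring="r2"))
    ```

    In the study itself, the models additionally learned spatial weightings over channels, which is what allowed the authors to identify the brain areas most important for the prediction.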
    Story Source:
    Materials provided by Max Planck Institute for Human Cognitive and Brain Sciences. Note: Content may be edited for style and length.

  • Mind-controlled robots now one step closer

    Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.
    Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own. “People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”
    Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.
    Avoiding obstacles
    To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard.
    The engineers began by improving the robot’s mechanism for avoiding obstacles so that it would be more precise. “At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close,” says Carolina Gaspar Pinto Ramos Correia, a PhD student at Prof. Billard’s lab. “Since the goal of our robot was to help paralyzed patients, we had to find a way for users to be able to communicate with it that didn’t require speaking or moving.”
    An algorithm that can learn from thoughts
    This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong, but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object?
    To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct. The process goes quickly: only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes.
    “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system — or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
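    To make this loop concrete, here is a heavily simplified sketch; it is not EPFL’s code. The candidate obstacle clearances are invented, the error-potential detector is a placeholder for the trained EEG classifier, and the inverse reinforcement learning step is reduced to a simple penalty update over the candidates.

    ```python
    # Hedged sketch of the closed loop described above (not EPFL's code).
    # In the real system, detect_error_potential would classify an EEG
    # window recorded just after the robot moves.
    CANDIDATE_CLEARANCES = [0.05, 0.10, 0.20, 0.40]  # metres; invented values

    def detect_error_potential(clearance: float) -> bool:
        """Placeholder for the EEG error-potential (ErrP) classifier.
        Simulates a patient who objects whenever the executed clearance
        differs from the one they actually want (assumed to be 0.10 m)."""
        preferred = 0.10
        return abs(clearance - preferred) > 0.01

    scores = {c: 0.0 for c in CANDIDATE_CLEARANCES}  # belief over candidates
    for attempt in range(1, 6):       # three to five attempts usually suffice
        # Try the clearance currently believed least likely to draw an ErrP.
        clearance = max(scores, key=scores.get)
        # ... the robot executes an avoidance move with this clearance ...
        if detect_error_potential(clearance):
            scores[clearance] -= 1.0  # penalise it and try another candidate
        else:
            print(f"accepted clearance {clearance} m after {attempt} attempts")
            break
    ```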
    Next step: a mind-controlled wheelchair
    The researchers hope to eventually use their algorithm to control wheelchairs. “For now there are still a lot of engineering hurdles to overcome,” says Prof. Billard. “And wheelchairs pose an entirely new set of challenges, since both the patient and the robot are in motion.” The team also plans to use their algorithm with a robot that can read several different kinds of signals and coordinate data received from the brain with those from visual motor functions.
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Valérie Geneux. Note: Content may be edited for style and length.

  • Giving bug-like bots a boost

    When it comes to robots, bigger isn’t always better. Someday, a swarm of insect-sized robots might pollinate a field of crops or search for survivors amid the rubble of a collapsed building.
    MIT researchers have demonstrated diminutive drones that can zip around with bug-like agility and resilience, and that could eventually perform such tasks. The soft actuators that propel these microrobots are very durable, but they require much higher voltages than similarly sized rigid actuators. As a result, the featherweight robots cannot carry the power electronics that would allow them to fly on their own.
    Now, these researchers have pioneered a fabrication technique that enables them to build soft actuators that operate with 75 percent lower voltage than current versions while carrying 80 percent more payload. These soft actuators are like artificial muscles that rapidly flap the robot’s wings.
    This new fabrication technique produces artificial muscles with fewer defects, which dramatically extends the lifespan of the components and increases the robot’s performance and payload.
    “This opens up a lot of opportunity in the future for us to transition to putting power electronics on the microrobot. People tend to think that soft robots are not as capable as rigid robots. We demonstrate that this robot, weighing less than a gram, flies for the longest time with the smallest error during a hovering flight. The take-home message is that soft robots can exceed the performance of rigid robots,” says Kevin Chen, the D. Reid Weedon, Jr. ’41 Assistant Professor in the Department of Electrical Engineering and Computer Science, head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and senior author of the paper.
    Chen’s coauthors include Zhijian Ren and Suhan Kim, co-lead authors and EECS graduate students; Xiang Ji, a research scientist in EECS; Weikun Zhu, a chemical engineering graduate student; Farnaz Niroui, an assistant professor in EECS; and Jing Kong, a professor in EECS and principal investigator in RLE. The research has been accepted for publication in Advanced Materials and is included in the journal’s Rising Stars series, which recognizes outstanding works from early-career researchers.

  • Robots use fear to fight invasive fish

    The invasive mosquitofish (Gambusia holbrooki) chews off the tails of freshwater fishes and tadpoles, leaving the native animals to perish, and dines on other fishes’ and amphibians’ eggs. In a study published December 16 in the journal iScience, researchers engineered a robot to scare mosquitofish away, revealing how fear alters the fish’s behavior, physiology and fertility, and how it may help turn the tide against invasive species.
    To fight the invasive fish, the international team, composed of biologists and engineers from Australia, the U.S., and Italy, turned to its natural predator — the largemouth bass (Micropterus salmoides) — for inspiration. They crafted a robotic fish that mimics the appearance and simulates the movements of the real predator. Aided by computer vision, the robot strikes when it spots the mosquitofish approaching tadpoles of an Australian species (Litoria moorei), which is threatened by mosquitofish in the wild. Scared and stressed, the mosquitofish showed fearful behaviors and experienced weight loss, changes in body shape, and a reduction in fertility, all of which impair their survival and reproduction.
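    The release gives few implementation details, but the trigger logic it describes can be sketched in a few lines. Everything below (positions, threshold, function names) is an illustrative assumption; in the study, a real-time computer-vision tracker supplies the coordinates.

    ```python
    # Illustrative sketch only; the study's real-time tracking is far more
    # involved. Tracked 2-D positions trigger a strike when any mosquitofish
    # closes in on a tadpole.
    from math import dist  # Python 3.8+

    STRIKE_RADIUS = 0.15   # metres; the threshold is an assumption

    def should_strike(mosquitofish_xy, tadpole_xy):
        """True if any mosquitofish is within STRIKE_RADIUS of any tadpole."""
        return any(dist(m, t) < STRIKE_RADIUS
                   for m in mosquitofish_xy for t in tadpole_xy)

    # One tracked video frame (positions in metres within the arena):
    fish = [(0.50, 0.40), (0.90, 0.10)]
    tadpoles = [(0.55, 0.45)]
    if should_strike(fish, tadpoles):
        print("trigger predator strike toward the nearest mosquitofish")
    ```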
    “Mosquitofish is one of the world’s 100 worst invasive species, and current methods to eradicate it are too expensive and time-consuming to effectively counter its spread,” says first author Giovanni Polverino (@GioPolverino) of the University of Western Australia. “This global pest is a serious threat to many aquatic animals. Instead of killing them one by one, we’re presenting an approach that can inform better strategies to control this global pest. We made their worst nightmare become real: a robot that scares the mosquitofish but not the other animals around it.”
    In the presence of the robotic fish, mosquitofish tended to stay closer to each other and spent more time at the center of the testing arena, hesitant to tread uncharted waters. They also swam more frenetically, with frequent and sharp turns, than fish that had never encountered the robot. The effects of fear lasted even away from the robot: back in their home aquaria, the scared fish were less active, ate more, and froze for longer, showing signs of anxiety that persisted for weeks after their last encounter with the robot.
    For the tadpoles the mosquitofish usually prey on, the robot’s presence was a change for the better. While the mosquitofish is a visual animal that surveys the environment mainly through its eyes, tadpoles have poor eyesight: they don’t see the robot well. “We expected the robot to have neutral effects on the tadpoles, but that wasn’t the case,” says Polverino. Because the robot changed the behavior of the mosquitofish, the tadpoles didn’t have predators at their tails anymore and they were more willing to venture out in the testing arena. “It turned out to be a positive thing for tadpoles. Once freed from the danger of having mosquitofish around, they were not scared anymore. They’re happy.”
    After five weeks of brief encounters between the mosquitofish and the robot, the team found that the fish allocated more energy towards escaping than reproducing. Males’ bodies became thinner and more streamlined, with stronger muscles near the tail, built to cut through the water when fleeing. Males also had lower sperm counts, while females produced lighter eggs, changes that are likely to compromise the survival of the species as a whole.
    “While successful at thwarting mosquitofish, the lab-grown robotic fish is not ready to be released into the wild,” says senior author Maurizio Porfiri of New York University. The team still has technical challenges to overcome. As a first step, they plan to test the method in small, clear pools in Australia, where two endangered fish species are threatened by mosquitofish.
    “Invasive species are a huge problem worldwide and are the second leading cause of biodiversity loss,” says Polverino. “Hopefully, our approach of using robotics to reveal the weaknesses of an incredibly successful pest will open the door to improving our biocontrol practices and combating invasive species. We are very excited about this.”
    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Artificial intelligence accurately predicts who will develop dementia in two years

    Artificial intelligence can predict, with 92 per cent accuracy, which people attending memory clinics will develop dementia within two years, a large-scale new study has concluded.
    Using data from more than 15,300 patients in the US, research from the University of Exeter found that a form of artificial intelligence called machine learning can accurately tell who will go on to develop dementia.
    The technique works by spotting hidden patterns in the data and learning who is most at risk. The study, published in JAMA Network Open and funded by Alzheimer’s Research UK, also suggested that the algorithm could help reduce the number of people who may have been falsely diagnosed with dementia.
    The researchers analysed data from people who attended a network of 30 National Alzheimer’s Coordinating Center memory clinics in the US. The attendees did not have dementia at the start of the study, though many were experiencing problems with memory or other brain functions.
    In the study timeframe between 2005 and 2015, one in ten attendees (1,568) received a new diagnosis of dementia within two years of visiting the memory clinic. The research found that the machine learning model could predict these new dementia cases with up to 92 per cent accuracy — and far more accurately than two existing alternative research methods.
    The researchers also found, for the first time, that around eight per cent (130) of the dementia diagnoses appeared to have been made in error, as these diagnoses were subsequently reversed. Machine learning models accurately identified more than 80 per cent of these inconsistent diagnoses. Artificial intelligence can thus not only accurately predict who will be diagnosed with dementia, it also has the potential to improve the accuracy of these diagnoses.
    Professor David Llewellyn, an Alan Turing Fellow based at the University of Exeter, who oversaw the study, said: “We’re now able to teach computers to accurately predict who will go on to develop dementia within two years. We’re also excited to learn that our machine learning approach was able to identify patients who may have been misdiagnosed. This has the potential to reduce the guesswork in clinical practice and significantly improve the diagnostic pathway, helping families access the support they need as swiftly and as accurately as possible.”
    Dr Janice Ranson, Research Fellow at the University of Exeter, added: “We know that dementia is a highly feared condition. Embedding machine learning in memory clinics could help ensure diagnosis is far more accurate, reducing the unnecessary distress that a wrong diagnosis could cause.”
    The researchers found that machine learning works efficiently using patient information routinely available in clinic, such as measures of memory and brain function, performance on cognitive tests, and specific lifestyle factors. The team now plans to conduct follow-up studies to evaluate the practical use of the machine learning method in clinics and to assess whether it can be rolled out to improve dementia diagnosis, treatment and care.
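    As a rough illustration of the kind of model involved, here is a hedged sketch rather than the study’s pipeline: synthetic stand-ins for routine clinic variables train a gradient-boosted classifier to flag attendees likely to receive a dementia diagnosis within two years. Every feature, coefficient, and threshold below is an assumption for demonstration.

    ```python
    # Hedged sketch (not the study's pipeline): classify two-year dementia
    # risk from routine clinic variables. All data here are synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 2000
    X = np.column_stack([
        rng.normal(26, 3, n),    # cognitive test score (MMSE-like, assumed)
        rng.normal(70, 8, n),    # age in years
        rng.integers(0, 2, n),   # reported memory complaints (0/1)
        rng.normal(0, 1, n),     # composite lifestyle factor (assumed)
    ])
    # Synthetic labels: lower cognition, older age, and memory complaints
    # raise the simulated risk of a diagnosis within two years.
    risk = -0.4 * (X[:, 0] - 26) + 0.1 * (X[:, 1] - 70) + 1.5 * X[:, 2]
    y = (risk + rng.normal(0, 1, n) > 2.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
    ```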
    Dr Rosa Sancho, Head of Research at Alzheimer’s Research UK, said: “Artificial intelligence has huge potential for improving early detection of the diseases that cause dementia and could revolutionise the diagnosis process for people concerned about themselves or a loved one showing symptoms. This technique is a significant improvement over existing alternative approaches and could give doctors a basis for recommending lifestyle changes and identifying people who might benefit from support or in-depth assessments.”
    Story Source:
    Materials provided by University of Exeter. Note: Content may be edited for style and length.

  • A quantum view of 'combs' of light

    Unlike the jumble of frequencies produced by the light that surrounds us in daily life, each frequency of light in a specialized light source known as a “soliton” frequency comb oscillates in unison, generating solitary pulses with consistent timing.
    Each “tooth” of the comb is a different color of light, spaced so precisely that this system is used to measure all manner of phenomena and characteristics. Miniaturized versions of these combs — called microcombs — that are currently in development have the potential to enhance countless technologies, including GPS systems, telecommunications, autonomous vehicles, greenhouse gas tracking, spacecraft autonomy and ultra-precise timekeeping.
    The lab of Stanford University electrical engineer Jelena Vučković only recently joined the microcomb community. “Many groups have demonstrated on-chip frequency combs in a variety of materials, including recently in silicon carbide by our team. However, until now, the quantum optical properties of frequency combs have been elusive,” said Vučković, the Jensen Huang Professor of Global Leadership in the School of Engineering and professor of electrical engineering at Stanford. “We wanted to leverage the quantum optics background of our group to study the quantum properties of the soliton microcomb.”
    While soliton microcombs have been made in other labs, the Stanford researchers are among the first to investigate the system’s quantum optical properties, using a process that they outline in a paper published Dec. 16 in Nature Photonics. When created in pairs, microcomb solitons are thought to exhibit entanglement — a relationship between particles that allows them to influence each other even at incredible distances, which underpins our understanding of quantum physics and is the basis of all proposed quantum technologies. Most of the “classical” light we encounter on a daily basis does not exhibit entanglement.
    “This is one of the first demonstrations that this miniaturized frequency comb can generate interesting quantum light — non-classical light — on a chip,” said Kiyoul Yang, a research scientist in Vučković’s Nanoscale and Quantum Photonics Lab and co-author of the paper. “That can open a new pathway toward broader explorations of quantum light using the frequency comb and photonic integrated circuits for large-scale experiments.”
    Proving the utility of their tool, the researchers also provided convincing evidence of quantum entanglement within the soliton microcomb, which had been theorized and assumed but had not previously been demonstrated by any existing study.

  • Fabricating stable, high-mobility transistors for next-generation display technologies

    Amorphous oxide semiconductors (AOSs) are a promising option for the next generation of display technologies due to their low cost and high electron (charge carrier) mobility. High mobility, in particular, is essential for rendering fast-moving images. But AOSs also have a distinct drawback that is hampering their commercialization: the mobility-stability tradeoff.
    One of the core tests of stability in thin-film transistors (TFTs) is the “negative-bias temperature stress” (NBTS) stability test. Two AOS TFTs of interest are indium gallium zinc oxide (IGZO) and indium tin zinc oxide (ITZO). IGZO TFTs have high NBTS stability but poor mobility, while ITZO TFTs have the opposite characteristics. The existence of this tradeoff is well known, but until now there has been no understanding of why it occurs.
    In a recent study published in Nature Electronics, a team of scientists from Japan has now reported a solution to this tradeoff. “In our study, we focused on NBTS stability, which is conventionally explained using ‘charge trapping.’ This describes the loss of accumulated charge into the underlying substrate. However, we doubted whether this could explain the differences we see in IGZO and ITZO TFTs, so instead we focused on the possibility of a change in carrier density, or a Fermi-level shift, in the AOS itself,” explains Assistant Professor Junghwan Kim of Tokyo Tech, who headed the study.
    To investigate the NBTS stability, the team used a “bottom-gate TFT with a bilayer active-channel structure” comprising an NBTS-stable AOS (IGZO) layer and an NBTS-unstable AOS (ITZO) layer. They then characterized the TFT and compared the results with device simulations performed using the charge-trapping and the Fermi-level shift models.
    They found that the experimental data agreed with the Fermi-level shift model. “Once we had this information, the next question was, ‘What is the major factor controlling mobility in AOSs?'” says Prof. Kim.
    The fabrication of AOS TFTs introduces impurities, including carbon monoxide (CO), into the TFT, especially in the ITZO case. The team found that charge transfer was occurring between the AOSs and the unintended impurities. In this case the CO impurities were donating electrons into the active layer of the TFT, which caused the Fermi-level shift and NBTS instability. “The mechanism of this CO-based electron donation is dependent on the location of the conduction band minimum, which is why you see it in high-mobility TFTs such as ITZO but not in IGZO,” elaborates Prof. Kim.
    Armed with this knowledge, the researchers developed an ITZO TFT without CO impurities by treating the TFT at 400°C and found that it was NBTS stable. “Super-high vision technologies need TFTs with an electron mobility above 40 cm² (Vs)⁻¹. By eliminating the CO impurities, we were able to fabricate an ITZO TFT with a mobility as high as 70 cm² (Vs)⁻¹,” comments an excited Prof. Kim.
    However, CO impurities alone do not cause instability. “Any impurity that induces a charge transfer with AOSs can cause gate-bias instability. To achieve high-mobility oxide TFTs, we need contributions from the industrial side to clarify all possible origins for impurities,” asserts Prof. Kim.
    The results could pave the way for fabrication of other similar AOS TFTs for use in display technologies, as well as chip input/output devices, image sensors and power systems. Moreover, given their low cost, they might even replace more expensive silicon-based technologies.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • How to transform vacancies into quantum information

    Team’s findings could help the design of industrially relevant quantum materials for sensing, computing and communication.
    “Vacancy” is a sign you want to see when searching for a hotel room on a road trip. When it comes to quantum materials, vacancies are also something you want to see. Scientists create them by removing atoms in crystalline materials. Such vacancies can serve as quantum bits or qubits, the basic unit of quantum technology.
    Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago have made a breakthrough that should help pave the way for greatly improved control over the formation of vacancies in silicon carbide, a semiconductor.
    Semiconductors are the material behind the brains in cell phones, computers, medical equipment and more. For those applications, the existence of atomic-scale defects in the form of vacancies is undesirable, as they can interfere with performance. According to recent studies, however, certain types of vacancies in silicon carbide and other semiconductors show promise for the realization of qubits in quantum devices. Applications of qubits could include unhackable communication networks and hypersensitive sensors able to detect individual molecules or cells. Also possible in the future are new types of computers able to solve complex problems beyond the reach of classical computers.
    “Scientists already know how to produce qubit-worthy vacancies in semiconductors such as silicon carbide and diamond,” said Giulia Galli, a senior scientist at Argonne’s Materials Science Division and professor of molecular engineering and chemistry at the University of Chicago. “But for practical new quantum applications, they still need to know much more about how to customize these vacancies with desired features.”
    In silicon carbide semiconductors, single vacancies occur upon the removal of individual silicon and carbon atoms from the crystal lattice. Importantly, a carbon vacancy can pair with an adjacent silicon vacancy. This paired vacancy, called a divacancy, is a key qubit candidate in silicon carbide. The problem has been that the yield for converting single vacancies into divacancies has been low, only a few percent. Scientists are racing to develop a pathway to increase that yield.