More stories

  • AI trained to read electric vehicle charging station reviews to find infrastructure gaps

    Although electric vehicles that reduce greenhouse gas emissions attract many drivers, the lack of confidence in charging services deters others. Building a reliable network of charging stations is difficult in part because it’s challenging to aggregate data from independent station operators. But now, researchers reporting January 22 in the journal Patterns have developed an AI that can analyze user reviews of these stations, allowing it to accurately identify places where there are insufficient or out-of-service stations.
    “We’re spending billions of both public and private dollars on electric vehicle infrastructure,” says Omar Asensio (@AsensioResearch), principal investigator and assistant professor in the School of Public Policy at the Georgia Institute of Technology. “But we really don’t have a good understanding of how well these investments are serving the public and public interest.”
    Electric vehicle drivers have started to address the problem of uncertain charging infrastructure by forming communities on charging station locator apps, where they leave reviews of individual stations. The researchers sought to analyze these reviews to better understand the problems facing users.
    With the aid of their AI, Asensio and colleagues were able to predict whether a specific station was functional on a particular day. They also found that micropolitan areas, where the population is between 10,000 and 50,000 people, may be underserved, with more frequent reports of station availability issues. These communities are mostly located in states in the West and Midwest, such as Oregon, Utah, South Dakota, and Nebraska, along with Hawaii.
    “When users are engaging and sharing information about charging experiences, they are often engaging in prosocial or pro-environmental behavior, which gives us rich behavioral information for machine learning,” says Asensio. But compared to analyzing data tables, texts can be challenging for computers to process. “A review could be as short as three words. It could also be as long as 25 or 30 words with misspellings and multiple topics,” says co-author Sameer Dharur of Georgia Institute of Technology. Users sometimes even throw smiley faces or emojis into the texts.
    To address the problem, Asensio and his team tailored their algorithm to electric vehicle transportation lingo. They trained it with reviews from 12,720 US charging stations to classify reviews into eight categories: functionality, availability, cost, location, dealership, user interaction, service time, and range anxiety. The AI achieved 91% accuracy and high learning efficiency, parsing the reviews in minutes. “That’s a milestone in the transition for us to deploy these AI tools because it’s no longer ‘can the AI do as well as a human?’” says Asensio. “In some cases, the AI exceeded the performance of human experts.”
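    As a rough illustration of the classification task, here is a minimal multi-label sketch in Python. The eight category names come from the story; the toy reviews and the TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, not the authors’ model.

    ```python
    # Minimal sketch of a multi-label review classifier. The categories come
    # from the article; the toy reviews and this pipeline are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    CATEGORIES = ["functionality", "availability", "cost", "location",
                  "dealership", "user interaction", "service time", "range anxiety"]

    # Hypothetical examples; the real model was trained on reviews from
    # 12,720 US charging stations.
    reviews = [
        "charger broken again, second time this month",
        "all stalls taken, waited 40 minutes to plug in",
        "free charging while you shop, great deal",
        "tucked behind the dealership, hard to find",
        "staff came out and helped me start the session",
        "barely made it here with 5 miles of range left",
    ]
    labels = [
        ["functionality"],
        ["availability", "service time"],
        ["cost"],
        ["location", "dealership"],
        ["user interaction"],
        ["range anxiety"],
    ]

    binarizer = MultiLabelBinarizer(classes=CATEGORIES)
    y = binarizer.fit_transform(labels)

    # One binary classifier per category over shared TF-IDF features.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    model.fit(reviews, y)

    # Rank categories for a new review by predicted probability.
    probs = model.predict_proba(["charger broken and out of service"])[0]
    for cat, p in sorted(zip(CATEGORIES, probs), key=lambda t: -t[1])[:3]:
        print(f"{cat}: {p:.2f}")
    ```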
    Unlike previous evaluations of charging infrastructure performance, which relied on costly and infrequent self-reported surveys, the AI can reduce research costs while providing real-time, standardized data. The electric vehicle charging market is expected to grow to $27.6 billion by 2027. The new method can give insight into consumer behavior, enabling rapid policy analysis and making infrastructure management easier for governments and companies. For instance, the team’s findings suggest that subsidizing infrastructure development may be more effective than subsidizing the sale of electric cars.
    The technology still faces some limitations — such as the need to reduce its demands on computer processing power — before it can be rolled out at scale across the electric vehicle charging market. Even so, Asensio and his team hope that as the science progresses, their research can open doors to more in-depth studies of social equity, on top of meeting consumer needs.
    “This is a wake-up call for us because, given the massive investment in electric vehicle infrastructure, we’re doing it in a way that is not necessarily attentive to the social equity and distributional issues of access to this enabling infrastructure,” says Asensio. “That is a topic of discussion that’s not going away and we’re only beginning to understand.”

    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Defects may help scientists understand the exotic physics of topology

    Real-world materials are usually messier than the idealized scenarios found in textbooks. Imperfections can add complications and even limit a material’s usefulness. To get around this, scientists routinely strive to remove defects and dirt entirely, pushing materials closer to perfection. Now, researchers at the University of Illinois at Urbana-Champaign have turned this problem around and shown that, for some materials, defects can act as a probe for interesting physics rather than a nuisance.
    The team, led by professors Gaurav Bahl and Taylor Hughes, studied artificial materials, or metamaterials, which they engineered to include defects. The team used these customizable circuits as a proxy for studying exotic topological crystals, which are often imperfect, difficult to synthesize, and notoriously tricky to probe directly. In a new study, published in the January 20th issue of Nature, the researchers showed that defects and structural deformations can provide insights into a real material’s hidden topological features.
    “Most studies in this field have focused on materials with perfect internal structure. Our team wanted to see what happens when we account for imperfections. We were surprised to discover that we could actually use defects to our advantage,” said Bahl, an associate professor in the Department of Mechanical Science and Engineering. With that unexpected assist, the team has created a practical and systematic approach for exploring the topology of unconventional materials.
    Topology is a way of mathematically classifying objects according to their overall shape, rather than every small detail of their structure. One common illustration of this is a coffee mug and a bagel, which have the same topology because both objects have only one hole that you can wrap your fingers through.
    Materials can also have topological features related to the classification of their atomic structure and energy levels. These features lead to unusual, yet possibly useful, electron behaviors. But verifying and harnessing topological effects can be tricky, especially if a material is new or unknown. In recent years, scientists have used metamaterials to study topology with a level of control that is nearly impossible to achieve with real materials.
    “Our group developed a toolkit for being able to probe and confirm topology without having any preconceived notions about a material,” says Hughes, who is a professor in the Department of Physics. “This has given us a new window into understanding the topology of materials, and how we should measure it and confirm it experimentally.”
    In an earlier study published in Science, the team established a novel technique for identifying insulators with topological features. Their findings were based on translating experimental measurements made on metamaterials into the language of electronic charge. In this new work, the team went a step further — they used an imperfection in the material’s structure to trap a feature that is equivalent to fractional charges in real materials.

    A single electron by itself cannot carry half a charge or some other fractional amount. But, fragmented charges can show up within crystals, where many electrons dance together in a ballroom of atoms. This choreography of interactions induces odd electronic behaviors that are otherwise disallowed. Fractional charges have not been measured in either naturally occurring or custom-grown crystals, but this team showed that analogous quantities can be measured in a metamaterial.
    The team assembled arrays of centimeter-scale microwave resonators onto a chip. “Each of these resonators plays the role of an atom in a crystal and, similar to an atom’s energy levels, has a specific frequency where it easily absorbs energy — in this case, the frequency is similar to that of a conventional microwave oven,” said lead author Kitt Peterson, a former graduate student in Bahl’s group.
    The resonators are arranged into squares, repeating across the metamaterial. The team included defects by disrupting this square pattern — either by removing one resonator to make a triangle or adding one to create a pentagon. Since all the resonators are connected together, these singular disclination defects ripple out, warping the overall shape of the material and its topology.
    The team injected microwaves into each resonator of the array and recorded the amount of absorption. Then, they mathematically translated their measurements to predict how electrons act in an equivalent material. From this, they concluded that fractional charges would be trapped on disclination defects in such a crystal. With further analysis, the team also demonstrated that trapped fractional charge signals the presence of certain kinds of topology.
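    Schematically, the translation step works as follows: from the measured spectra one assigns each resonator site a spectral weight, summed over all modes below the band gap, and then totals those weights cell by cell; at a disclination the total is offset by a robust fraction. The numpy sketch below shows only this bookkeeping, with random stand-in data in place of real measurements; it is not the team’s analysis code.

    ```python
    # Schematic bookkeeping for an effective "charge" per unit cell,
    # computed from mode weights. Data here are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)

    n_sites = 16     # resonators in a small toy array
    n_occupied = 8   # modes below the band gap ("filled bands")

    # Toy stand-in for the eigenmodes (columns are orthonormal modes);
    # in the experiment these follow from microwave absorption spectra.
    modes, _ = np.linalg.qr(rng.normal(size=(n_sites, n_sites)))

    # Assign each resonator site to a unit cell (4 cells of 4 sites).
    cell_of_site = np.repeat(np.arange(4), 4)

    # Spectral weight of each site, summed over the occupied modes.
    site_weight = (np.abs(modes[:, :n_occupied]) ** 2).sum(axis=1)

    # Effective charge per unit cell; in a topological crystal, cells at
    # a disclination would show a robust fractional offset (e.g. 1/2).
    cell_charge = np.array([site_weight[cell_of_site == c].sum()
                            for c in range(4)])
    print(np.round(cell_charge % 1.0, 3))
    ```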
    “In these crystals, fractional charge turns out to be the most fundamental observable signature of interesting underlying topological features,” said Tianhe Li, a theoretical physics graduate student in Hughes’ research group and a co-author on the study.
    Observing fractional charges directly remains a challenge, but metamaterials offer an alternative way to test theories and learn about manipulating topological forms of matter. According to the researchers, reliable probes for topology are also critical for developing future applications for topological quantum materials.
    The connection between the topology of a material and its imperfect geometry is also broadly interesting for theoretical physics. “Engineering a perfect material does not necessarily reveal much about real materials,” says Hughes. “Thus, studying the connection between defects, like the ones in this study, and topological matter may increase our understanding of realistic materials, with all of their inherent complexities.”

  • Record-breaking laser link could help us test whether Einstein was right

    Scientists from the International Centre for Radio Astronomy Research (ICRAR) and The University of Western Australia (UWA) have set a world record for the most stable transmission of a laser signal through the atmosphere.
    In a study published today in the journal Nature Communications, Australian researchers teamed up with researchers from the French National Centre for Space Studies (CNES) and the French metrology lab Systèmes de Référence Temps-Espace (SYRTE) at Paris Observatory.
    The team set the world record for the most stable laser transmission by combining the Aussies’ ‘phase stabilisation’ technology with advanced self-guiding optical terminals.
    Together, these technologies allowed laser signals to be sent from one point to another without interference from the atmosphere.
    Lead author Benjamin Dix-Matthews, a PhD student at ICRAR and UWA, said the technique effectively eliminates the effects of atmospheric turbulence.
    “We can correct for atmospheric turbulence in 3D, that is, left-right, up-down and, critically, along the line of flight,” he said.

    “It’s as if the moving atmosphere has been removed and doesn’t exist.
    “It allows us to send highly-stable laser signals through the atmosphere while retaining the quality of the original signal.”
    The result is the world’s most precise method for comparing the flow of time between two separate locations using a laser system transmitted through the atmosphere.
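    In broad terms, phase stabilisation works by measuring the phase accumulated over a round trip through the link and pre-correcting the outgoing signal by half of it, assuming the path is reciprocal. The toy Python model below illustrates the principle only; the noise model and the one-sample measurement lag are assumptions, not a description of the team’s terminals.

    ```python
    # Toy model of phase-noise cancellation over a reciprocal link.
    # The atmosphere adds the same slowly drifting phase on each one-way
    # pass; the transmitter measures the round-trip phase and pre-corrects
    # by half. Purely illustrative; not the ICRAR/UWA system.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Slowly drifting atmospheric phase (random walk, radians).
    atmosphere = np.cumsum(rng.normal(scale=0.01, size=n))

    # Uncorrected link: the received phase carries the full drift.
    uncorrected = atmosphere

    # Stabilised link: the round-trip measurement lags by one sample, so
    # the correction is slightly stale and cancellation is imperfect.
    measured_roundtrip = 2.0 * np.concatenate(([0.0], atmosphere[:-1]))
    stabilised = atmosphere - 0.5 * measured_roundtrip

    print(f"rms phase, uncorrected: {uncorrected.std():.4f} rad")
    print(f"rms phase, stabilised:  {stabilised.std():.4f} rad")
    ```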
    ICRAR-UWA senior researcher Dr Sascha Schediwy said the research has exciting applications.
    “If you have one of these optical terminals on the ground and another on a satellite in space, then you can start to explore fundamental physics,” he said.

    “Everything from testing Einstein’s theory of general relativity more precisely than ever before, to discovering if fundamental physical constants change over time.”
    The technology’s precise measurements also have practical uses in earth science and geophysics.
    “For instance, this technology could improve satellite-based studies of how the water table changes over time, or to look for ore deposits underground,” Dr Schediwy said.
    There are further potential benefits for optical communications, an emerging field that uses light to carry information.
    Optical communications can securely transmit data between satellites and Earth with much higher data rates than current radio communications.
    “Our technology could help us increase the data rate from satellites to ground by orders of magnitude,” Dr Schediwy said.
    “The next generation of big data-gathering satellites would be able to get critical information to the ground faster.”
    The phase stabilisation technology behind the record-breaking link was originally developed to synchronise incoming signals for the Square Kilometre Array telescope.
    The multi-billion-dollar telescope is set to be built in Western Australia and South Africa from 2021.

  • Bringing atoms to a standstill: Miniaturizing laser cooling

    It’s cool to be small. Scientists at the National Institute of Standards and Technology (NIST) have miniaturized the optical components required to cool atoms down to a few thousandths of a degree above absolute zero, the first step in employing them on microchips to drive a new generation of super-accurate atomic clocks, enable navigation without GPS, and simulate quantum systems.
    Cooling atoms is equivalent to slowing them down, which makes them a lot easier to study. At room temperature, atoms whiz through the air at nearly the speed of sound, some 343 meters per second. The rapid, randomly moving atoms have only fleeting interactions with other particles, and their motion can make it difficult to measure transitions between atomic energy levels. When atoms slow to a crawl — about 0.1 meters per second — researchers can measure the particles’ energy transitions and other quantum properties accurately enough to use as reference standards in a myriad of navigation and other devices.
    For more than two decades, scientists have cooled atoms by bombarding them with laser light, a feat for which NIST physicist Bill Phillips shared the 1997 Nobel Prize in physics. Although laser light would ordinarily energize atoms, causing them to move faster, if the frequency and other properties of the light are chosen carefully, the opposite happens. Upon striking the atoms, the laser photons reduce the atoms’ momentum until they are moving slowly enough to be trapped by a magnetic field.
    But to prepare the laser light so that it has the properties to cool atoms typically requires an optical assembly as big as a dining-room table. That’s a problem because it limits the use of these ultracold atoms outside the laboratory, where they could become a key element of highly accurate navigation sensors, magnetometers and quantum simulations.
    Now NIST researcher William McGehee and his colleagues have devised a compact optical platform, only about 15 centimeters (5.9 inches) long, that cools and traps gaseous atoms in a 1-centimeter-wide region. Although other miniature cooling systems have been built, this is the first one that relies solely on flat, or planar, optics, which are easy to mass produce.
    “This is important as it demonstrates a pathway for making real devices and not just small versions of laboratory experiments,” said McGehee. The new optical system, while still about 10 times too big to fit on a microchip, is a key step toward employing ultracold atoms in a host of compact, chip-based navigation and quantum devices outside a laboratory setting. Researchers from the Joint Quantum Institute, a collaboration between NIST and the University of Maryland in College Park, along with scientists from the University of Maryland’s Institute for Research in Electronics and Applied Physics, also contributed to the study.

    The apparatus, described online in the New Journal of Physics, consists of three optical elements. First, light is launched from an optical integrated circuit using a device called an extreme mode converter. The converter enlarges the narrow laser beam, initially about 500 nanometers (nm) in diameter (about five thousandths the thickness of a human hair), to 280 times that width. The enlarged beam then strikes a carefully engineered, ultrathin film known as a “metasurface” that’s studded with tiny pillars, about 600 nm in length and 100 nm wide.
    The nanopillars act to further widen the laser beam by another factor of 100. The dramatic widening is necessary for the beam to efficiently interact with and cool a large collection of atoms. Moreover, by accomplishing that feat within a small region of space, the metasurface miniaturizes the cooling process.
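    The two expansion stages compound, and a quick check with the article’s numbers shows why they are enough: a 500 nm beam enlarged 280-fold and then 100-fold ends up on the scale of the 1-centimeter trapping region.

    ```python
    # Back-of-the-envelope check of the total beam expansion, using the
    # figures quoted in the article.
    initial_width_m = 500e-9   # ~500 nm beam leaving the photonic circuit
    converter_gain = 280       # expansion from the extreme mode converter
    metasurface_gain = 100     # further expansion from the metasurface

    final_width_m = initial_width_m * converter_gain * metasurface_gain
    print(f"final beam width ≈ {final_width_m * 100:.1f} cm")  # ≈ 1.4 cm
    ```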
    The metasurface reshapes the light in two other important ways, simultaneously altering the intensity and polarization (direction of vibration) of the light waves. Ordinarily, the intensity follows a bell-shaped curve, in which the light is brightest at the center of the beam, with a gradual falloff on either side. The NIST researchers designed the nanopillars so that the tiny structures modify the intensity, creating a beam that has a uniform brightness across its entire width. The uniform brightness allows more efficient use of the available light. Polarization of the light is also critical for laser cooling.
    The expanding, reshaped beam then strikes a diffraction grating that splits it into three additional beams angled back toward the incoming light. Combined with an applied magnetic field, the four beams (the incident beam plus the three diffracted ones), pushing on the atoms in opposing directions, serve to trap the cooled atoms.
    Each component of the optical system — the converter, the metasurface and the grating — had been developed at NIST but was in operation at separate laboratories on the two NIST campuses, in Gaithersburg, Maryland and Boulder, Colorado. McGehee and his team brought the disparate components together to build the new system.
    “That’s the fun part of this story,” he said. “I knew all the NIST scientists who had independently worked on these different components, and I realized the elements could be put together to create a miniaturized laser cooling system.”
    Although the optical system will have to be 10 times smaller to laser-cool atoms on a chip, the experiment “is proof of principle that it can be done,” McGehee added.
    “Ultimately, making the light preparation smaller and less complicated will enable laser-cooling based technologies to exist outside of laboratories,” he said.

  • Designing customized 'brains' for robots

    Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.
    Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang-up is what’s going on in the robot’s head,” she adds.
    Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called robomorphic computing, uses a robot’s physical layout and intended applications to generate a customized computer chip that minimizes the robot’s response time.
    The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. “It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers,” says Neuman.
    Neuman will present the research at this April’s International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman’s PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard’s School of Engineering and Applied Sciences.
    There are three main steps in a robot’s operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: “Based on what they’ve seen, they have to construct a map of the world around them and then localize themselves within that map,” says Neuman. The third step is motion planning and control — in other words, plotting a course of action.

    These steps can take time and an awful lot of computing power. “For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly,” says Plancher. “Current algorithms cannot be run on current CPU hardware fast enough.”
    Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren’t the answer. “What’s relatively new is the idea that you might also explore better hardware.” That means moving beyond a standard-issue CPU processing chip that comprises a robot’s brain — with the help of hardware acceleration.
    Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. “A GPU is not the best at everything, but it’s the best at what it’s built for,” says Neuman. “You get higher performance for a particular application.” Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That’s why Neuman’s team developed robomorphic computing.
    The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)
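    To see why that sparsity matters, the sketch below compares a dense matrix-vector multiply against one that precomputes the non-zero pattern and touches only those entries — essentially what a specialized datapath can hard-wire. The matrix is a random stand-in, not a real robot’s kinematics.

    ```python
    # Illustration of exploiting fixed sparsity from a robot's morphology.
    # The matrix below is a toy stand-in for the kinematics/dynamics
    # matrices described above; a hardware accelerator would hard-wire the
    # non-zero pattern instead of checking it at run time.
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy "robot matrix": mostly zeros, because most joint-to-joint
    # couplings are impossible for a given limb layout.
    M = rng.normal(size=(12, 12))
    M[rng.random(M.shape) < 0.8] = 0.0

    # Precompute the sparsity pattern once, at "chip design time".
    rows, cols = np.nonzero(M)
    vals = M[rows, cols]

    def sparse_matvec(x):
        """Multiply M @ x while touching only the non-zero entries."""
        out = np.zeros(M.shape[0])
        np.add.at(out, rows, vals * x[cols])
        return out

    x = rng.normal(size=12)
    assert np.allclose(sparse_matvec(x), M @ x)
    print(f"{len(vals)} multiplies instead of {M.size}")
    ```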
    The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.

    Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman’s team didn’t fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system’s suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.
    “I was thrilled with those results,” says Neuman. “Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient.”
    Plancher sees widespread potential for robomorphic computing. “Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions,” he says. “I wouldn’t be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them.” Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for COVID-19 patients or manipulating heavy objects.
    Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot’s parameters, and “out the other end comes the hardware description. I think that’s the thing that’ll push it over the edge and make it really useful.”
    This research was funded by the National Science Foundation, the Computing Research Association, the CIFellows Project, and the Defense Advanced Research Projects Agency.

  • Why older adults must go to the front of the vaccine line

    Vaccinating older adults for COVID-19 first will save substantially more U.S. lives than prioritizing other age groups, and the slower the vaccine rollout and more widespread the virus, the more critical it is to bring them to the front of the line.
    That’s one key takeaway from a new University of Colorado Boulder paper, published today in the journal Science, which uses mathematical modeling to make projections about how different distribution strategies would play out in countries around the globe.
    The research has already informed policy recommendations by the Centers for Disease Control and Prevention and the World Health Organization to prioritize older adults after medical workers.
    Now, as policymakers decide how and whether to carry out that advice, the paper — which includes an interactive tool (https://vaxfirst.colorado.edu/) — presents the numbers behind the tough decision.
    “Common sense would suggest you want to protect the older, most vulnerable people in the population first. But common sense also suggests you want to first protect front-line essential workers (like grocery store clerks and teachers) who are at higher risk of exposure,” said senior author Daniel Larremore, a computational biologist in the Department of Computer Science and CU Boulder’s BioFrontiers Institute. “When common sense leads you in two different directions, math can help you decide.”
    For the study, Larremore and lead author Kate Bubar, a graduate student in the Department of Applied Mathematics, teamed up with colleagues at the Harvard T.H. Chan School of Public Health and the University of Chicago.

    They drew on demographic information from different countries, as well as up-to-date data on how many people have already tested positive for COVID-19, how quickly the virus is spreading, how fast vaccines are rolling out and their estimated efficacy.
    Then they modeled what would happen under five different scenarios, each vaccinating a different group first: children and teenagers; adults ages 20 to 49; all adults 20 or older; or adults 60 or older (considering that about 30% of those eligible might decline). In the fifth scenario, anyone who wanted a vaccine got one while supplies lasted.
    Results from the United States, Belgium, Brazil, China, India, Poland, South Africa, Spain and Zimbabwe are included in the paper, with more countries included in the online tool.
    Different strategies worked better or worse, depending on local circumstances, but a few key findings jumped out.
    In most scenarios, across countries, prioritizing adults 60+ saved the most lives.

    “Age is the strongest predictor of vulnerability,” said Larremore, noting that while pre-existing conditions like asthma boost risk of severe illness or death, age boosts vulnerability more. “You have an exponentially higher likelihood of dying from COVID-19 as you get older.”
    The authors also note that, while the vaccines being distributed now are believed to have about a 90 to 95% chance of protecting against severe disease, researchers don’t yet know how well they block infection and transmission. If they don’t block it well and asymptomatic spreaders abound, it again makes the most sense to vaccinate older adults. If nothing else, they’ll be personally protected against grave disease.
    Only in scenarios where the virus is under control and the vaccine is known to block infection and transmission well does it make sense to move younger adults to the front of the line. That is not the situation in the United States right now.
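    A stripped-down calculation shows the logic. The sketch below compares expected deaths when a fixed supply of doses goes to older adults first versus younger adults first, counting only direct protection. Every number in it (population split, attack rate, infection fatality rates, efficacy, supply) is an illustrative assumption rather than an input from the paper, and it deliberately ignores the transmission-blocking effects that the full models weigh.

    ```python
    # Toy comparison of single-group prioritisation strategies, counting
    # only direct protection against death. All numbers are illustrative
    # assumptions, not inputs from the Science paper.
    population = {"20-49": 1_000_000, "50-59": 300_000, "60+": 400_000}
    ifr        = {"20-49": 0.0003,    "50-59": 0.005,   "60+": 0.05}

    attack_rate = 0.30   # fraction of each group eventually infected
    efficacy = 0.9       # vaccine efficacy against death
    doses = 400_000      # limited supply

    def expected_deaths(prioritized_group):
        deaths = 0.0
        remaining = doses
        for group, size in population.items():
            # Only the prioritised group is vaccinated in this sketch;
            # any leftover doses go unused.
            vaccinated = min(size, remaining) if group == prioritized_group else 0
            remaining -= vaccinated
            infected = attack_rate * size
            protected_share = vaccinated / size
            deaths += infected * ifr[group] * (1 - efficacy * protected_share)
        return deaths

    for group in ("60+", "20-49"):
        print(f"prioritise {group:>5}: ~{expected_deaths(group):,.0f} deaths")
    ```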
    “For essential workers who might be frustrated that they are not first, we hope this study offers some clarity,” said Bubar. “We realize it is a big sacrifice for them to make but our study shows it will save lives.”
    So will a faster rollout, they found.
    For instance, all other things being equal, if the rollout speed were doubled from current rates under current transmission conditions, COVID-19 mortality could be reduced by about 23%, or 65,000 lives, over the next three months.
    The paper also suggests that in some situations where COVID has already infected large swaths of the population and vaccine is in short supply, it might make sense to ask younger adults who have already tested positive to step to the back of the line.
    “Our research suggests that prioritizing people who have not yet had COVID could allow hard-hit communities to stretch those first doses further and possibly get to some of the herd immunity effects sooner,” said Larremore.
    The authors stress that vaccines alone are not the only tactic for helping win the race against COVID.
    “To allow the vaccine to get to folks before the virus does, we need to not only roll out the vaccine quickly and get it to the most vulnerable people. We have to also keep our foot on the virus brake with masks, distancing and smart policies,” said Larremore.

  • Mathematical framework enables accurate characterization of shapes

    In nature, many things have evolved that differ in size, color and, above all, in shape. While the color or size of an object can be easily described, the description of a shape is more complicated. In a study now published in Nature Communications, Jacqueline Nowak of the Max Planck Institute of Molecular Plant Physiology and her colleagues have outlined a new and improved way to describe shapes based on a network representation that can also be used to reassemble and compare shapes.
    Jacqueline Nowak designed a novel approach that relies on a network-based shape representation, called a visibility graph, along with a tool for analyzing shapes, termed GraVis. The visibility graph represents the shape of an object defined by its surrounding contour; the mathematical structure behind GraVis is specified by a set of nodes placed equidistantly along that contour. The nodes are then connected by edges that neither cross nor align with the shape boundary. As a result, testing the connections between all pairs of nodes specifies the visibility graph for the analyzed shape.
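    This construction translates directly into code. Below is a minimal sketch of building a visibility graph for a simple contour, using shapely for the geometric tests and networkx for the graph; the unit-square example and the node count are assumptions, and this is not the GraVis implementation itself.

    ```python
    # Minimal sketch of a visibility graph for a shape contour, following
    # the construction described above. The square and node count are
    # illustrative; GraVis itself is a separate published tool.
    import networkx as nx
    from shapely.geometry import LineString, Polygon

    # Example contour: a unit square.
    shape = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
    boundary = shape.exterior

    # Place nodes equidistantly around the contour.
    n_nodes = 20
    nodes = [boundary.interpolate(i / n_nodes, normalized=True)
             for i in range(n_nodes)]

    G = nx.Graph()
    G.add_nodes_from(range(n_nodes))

    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            segment = LineString([nodes[i], nodes[j]])
            # Keep the edge only if the segment stays inside the shape
            # and does not run along the boundary itself.
            if segment.covered_by(shape) and not segment.covered_by(boundary):
                G.add_edge(i, j)

    print(G.number_of_edges(), "visibility edges among", n_nodes, "nodes")
    ```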
    In this study, Jacqueline Nowak used visibility graphs and the GraVis tool to compare different shapes. To test the power of the new approach, the team compared visibility graphs of simple triangular, rectangular, and circular shapes, as well as complex shapes such as sand grains, fish, and leaves.
    Using different machine learning approaches, the researchers demonstrated that visibility graphs can be used to distinguish shapes according to their complexity, as shown for epidermal pavement cells in plants, which are shaped like pieces of a jigsaw puzzle. For these cells, distinct shape parameters such as lobe length, neck width, and cell area can be accurately quantified with GraVis. “The quantification of the lobe number of epidermal cells with GraVis outperforms existing tools, showing that it is a powerful tool to address particular questions relevant to shape analysis,” says Zoran Nikoloski, GraVis project leader, head of the research group “Systems Biology and Mathematical Modelling” at the Max Planck Institute of Molecular Plant Physiology and Professor of Bioinformatics at the University of Potsdam.
    In the future, the scientists want to apply visibility graphs of epidermal cells and entire leaves to gain biological insights into key cellular processes that affect shape. In addition, shape features of different plant cells quantified by GraVis could facilitate genetic screens to determine the genetic basis of morphogenesis. Finally, the application of GraVis should help build a deeper understanding of the interrelation between cell and organ shapes in nature.

    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • Using VR training to boost our sense of agency and improve motor control

    With Japan’s society rapidly aging, there has been a sharp increase in patients who experience motor dysfunctions. Rehabilitation is key to overcoming such ailments.
    A researcher from Tohoku University has developed a new virtual reality (VR) based method that can benefit rehabilitation and sports training by increasing bodily awareness and improving motor control.
    His research was published in the journal Scientific Reports.
    Not only can we see and touch our body, but we can sense it too. Our body is constantly firing off information to our brains that tell us where our limbs are in real-time. This process makes us aware of our body and gives us ownership over it. Meanwhile, our ability to control the movement and actions of our body parts voluntarily affords us agency over our body.
    Ownership and agency are highly integrated and are related to our motor control. However, separating our sense of body ownership from our sense of agency has long evaded researchers, making it difficult to ascertain whether both ownership and agency truly affect motor control.
    Professor Kazumichi Matsumiya from the Graduate School of Information Sciences at Tohoku University was able to isolate these two senses using VR. Participants viewed a computer-generated hand, and Matsumiya independently measured their sense of ownership and agency over that hand.
    “I found that motor control is improved when participants experienced a sense of agency over the artificial body, regardless of their sense of body ownership,” said Matsumiya. “Our findings suggest that artificial manipulation of agency will enhance the effectiveness of rehabilitation and aid sports training techniques to improve overall motor control.”

    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.