More stories

  • Sounds of action: Using ears, not just eyes, improves robot perception

    People rarely use just one sense to understand the world, but robots usually only rely on vision and, increasingly, touch. Carnegie Mellon University researchers find that robot perception could improve markedly by adding another sense: hearing.
    In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.
    “A lot of preliminary work in other fields indicated that sound could be useful, but it wasn’t clear how useful it would be in robotics,” said Lerrel Pinto, who recently earned his Ph.D. in robotics at CMU and will join the faculty of New York University this fall. He and his colleagues found the performance rate was quite high, with robots that used sound successfully classifying objects 76 percent of the time.
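    As a rough illustration of this kind of audio-based object classification, the sketch below trains a generic classifier on summary MFCC features extracted from interaction sounds. It is not the CMU team’s model; the index file, its layout and the random-forest classifier are hypothetical stand-ins.

```python
# Minimal sketch, not the CMU pipeline: classify which object made a sound using
# averaged MFCC audio features and a generic classifier. "tiltbot_index.csv" and its
# layout (wav_path,object_label per row, no header) are hypothetical placeholders.
import csv
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(wav_path, sr=16000, n_mfcc=20):
    """Summarize one recorded interaction as a fixed-length feature vector."""
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)     # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])   # (2 * n_mfcc,)

with open("tiltbot_index.csv") as f:                 # hypothetical: path,label per row
    paths, labels = zip(*csv.reader(f))

X = np.stack([mfcc_features(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```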
    The results were so encouraging, he added, that it might prove useful to equip future robots with instrumented canes, enabling them to tap on objects they want to identify.
    The researchers presented their findings last month during the virtual Robotics: Science and Systems conference. Other team members included Abhinav Gupta, associate professor of robotics, and Dhiraj Gandhi, a former master’s student who is now a research scientist at Facebook Artificial Intelligence Research’s Pittsburgh lab.
    To perform their study, the researchers created a large dataset, simultaneously recording video and audio of 60 common objects — such as toy blocks, hand tools, shoes, apples and tennis balls — as they slid or rolled around a tray and crashed into its sides. They have since released this dataset, cataloging 15,000 interactions, for use by other researchers.
    The team captured these interactions using an experimental apparatus they called Tilt-Bot — a square tray attached to the arm of a Sawyer robot. It was an efficient way to build a large dataset; they could place an object in the tray and let Sawyer spend a few hours moving the tray in random directions with varying levels of tilt as cameras and microphones recorded each action.
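    A minimal sketch of that collection loop is shown below; `tray_arm` and `recorder` are hypothetical interfaces to the arm and the camera/microphone rig, so only the overall logic (random tilts with synchronized audio and video capture) reflects the procedure described above.

```python
# Minimal sketch of a Tilt-Bot style collection loop; not the authors' code.
# `tray_arm` and `recorder` are hypothetical interfaces to the robot arm and the
# camera/microphone rig.
import random
import time

def collect_interactions(tray_arm, recorder, object_name,
                         n_trials=100, max_tilt_deg=25.0, settle_s=2.0):
    """Tilt the tray in random directions and record each resulting interaction."""
    episodes = []
    for _ in range(n_trials):
        roll = random.uniform(-max_tilt_deg, max_tilt_deg)
        pitch = random.uniform(-max_tilt_deg, max_tilt_deg)
        recorder.start()                  # begin synchronized video + audio capture
        tray_arm.tilt_to(roll, pitch)     # object slides/rolls and strikes the tray wall
        time.sleep(settle_s)              # let the object come to rest
        episodes.append({"object": object_name, "roll_deg": roll,
                         "pitch_deg": pitch, "clip": recorder.stop()})
    return episodes
```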
    They also collected some data beyond the tray, using Sawyer to push objects on a surface.
    Though the size of this dataset is unprecedented, other researchers have also studied how intelligent agents can glean information from sound. For instance, Oliver Kroemer, assistant professor of robotics, led research into using sound to estimate the amount of granular materials, such as rice or pasta, by shaking a container, or estimating the flow of those materials from a scoop.
    Pinto said the usefulness of sound for robots was therefore not surprising, though he and the others were surprised at just how useful it proved to be. They found, for instance, that a robot could use what it learned about the sound of one set of objects to make predictions about the physical properties of previously unseen objects.
    “I think what was really exciting was that when it failed, it would fail on things you expect it to fail on,” he said. For instance, a robot couldn’t use sound to tell the difference between a red block and a green block. “But if it was a different object, such as a block versus a cup, it could figure that out.”
    The Defense Advanced Research Projects Agency and the Office of Naval Research supported this research.

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.

  • Many medical 'rainy day' accounts aren't getting opened or filled

    One-third of the people who could benefit from a special type of savings account to cushion the blow of their health plan deductible haven’t opened one, according to a new study.
    And even among people who do open a health savings account (HSA), half haven’t put any money into it in the past year. This means they may be missing a chance to avoid taxes on money that they can use to pay for their health insurance deductible and other health costs.
    The study also finds that those who buy their health insurance themselves, and select a high-deductible plan on an exchange such as www.healthcare.gov, are less likely to open an HSA than those who get their insurance from employers who offer only a high-deductible option.
    HSAs are different from the flexible spending accounts that some employers offer. HSAs can only be opened by people in health plans that require them to pay a deductible of at least $1,400 for an individual or $2,800 for a family before their insurance benefits kick in.
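    As a simplified illustration of those thresholds (ignoring the out-of-pocket limits and other coverage rules that also apply), a plan’s deductible could be checked against the 2020 minimums like this:

```python
# Simplified check based only on the 2020 minimum-deductible thresholds cited above;
# real HSA eligibility also depends on out-of-pocket maximums and other coverage rules.
HSA_MIN_DEDUCTIBLE_2020 = {"self-only": 1400, "family": 2800}

def meets_hsa_deductible(deductible: float, coverage: str) -> bool:
    """Return True if the plan's deductible meets the 2020 HDHP minimum for its tier."""
    return deductible >= HSA_MIN_DEDUCTIBLE_2020[coverage]

print(meets_hsa_deductible(1500, "self-only"))   # True
print(meets_hsa_deductible(2500, "family"))      # False
```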
    In a new paper in JAMA Network Open, a team led by researchers at the University of Michigan and VA Ann Arbor Healthcare System reports results from a national survey of more than 1,600 participants in high-deductible health plans.
    Key findings
    They note that half of those who had an HSA and put money into their account in the past year had socked away $2,000 or more. And among those who hadn’t put money in, 40% said it was because they already had enough savings to cover their costs.
    But those with lower levels of education were much less likely to have opened an HSA, and to have contributed to it even if they did open one. Those with lower levels of understanding of health insurance concepts, called health insurance literacy, were also less likely to put money in their HSA if they had one.
    And one-third of those who didn’t put money into their HSA said it was because they couldn’t afford to save up for health costs.
    “These findings are concerning, given that nearly half of Americans with private insurance now have high-deductible plans,” says Jeffrey Kullgren, M.D., M.S., M.P.H., who led the study and has done other research on high-deductible health plans and health care consumerism. “While policymakers have focused on expanding availability and permitted uses of HSAs, and increasing how much money they can hold, work is needed to help eligible enrollees open them and use them to get the care they need at a price they can afford.”
    Change needed
    In the new paper and in a report from the U-M Institute for Healthcare Policy and Innovation, Kullgren and his colleagues call for more efforts to increase uptake of HSAs, and contributions to HSAs, by employers, health insurers and the health systems that provide care and bill insurers for that care.
    Targeted interventions, especially those aimed at people with lower levels of education or health insurance literacy, should be developed.
    The researchers note that as federal and state exchanges prepare to open for 2021 enrollment, expanding the types of exchange plans that are eligible to be linked to an HSA will be important. Currently, just 7% of the plans bought on exchanges are eligible for an HSA, even though many exchange plans come with a high deductible.
    The study data come from an online survey of English-speaking adults under age 65; the study population was weighted to include a higher proportion of people with chronic health conditions than the national population.
    For the survey, the researchers asked respondents about HSAs using the National Health Interview Survey definition of an HSA as “a special account or fund that can be used to pay for medical expenses” that are “sometimes referred to as Health Savings Accounts (HSAs), Health Reimbursement Accounts (HRAs), Personal Care accounts, Personal Medical funds, or Choice funds, and are different from Flexible Spending Accounts.”

  • Academia from home

    As the uncertainty around reopening college and university campuses this fall continues, those who work, study, teach and conduct research are navigating the unfamiliar terrain of the “new normal.” They are balancing physical distancing and other COVID-19 prevention practices with productivity, creating home workspaces and mastering communications and teamwork across time and space.
    Turns out, there’s a group of people for whom these challenges are not new. Postdoctoral researchers — people in the critical phase between graduate school and permanent academic positions — are part of a small but growing cohort that has been turning to remote work to meet the challenges of their young careers. Often called upon to relocate multiple times for short-term, full-time appointments, postdocs and their families have to endure heightened financial costs, sacrificed career opportunities and separations from their support communities.
    But with the right practices and perspectives, remote work can level the playing field, especially for those in underrepresented groups, according to Kurt Ingeman, a postdoctoral researcher in UC Santa Barbara’s Department of Ecology, Evolution and Marine Biology. And, like it or not, with COVID-19 factoring into virtually every decision we now make, he noted, it’s an idea whose time has come.
    “We started this project in the pre-pandemic times but it seems more relevant than ever as academics are forced to embrace work-from-home,” said Ingeman, who makes the case for embracing remote postdoctoral work in the journal PLOS Computational Biology. Family and financial considerations drove his own decision to design a remote position; many early-career researchers face the same concerns, he said.
    It takes a shift in perspective to overcome resistance to having remote research teammates. Principal investigators often don’t perceive the remote postdoc as a fully functional member of the lab and worry about losing the spontaneous, informal interactions that can generate new ideas, Ingeman said.
    “These are totally valid concerns,” he said. “We suggest (in the paper) ways to use digital tools to fully integrate remote postdocs into lab activities, like mentoring graduate students or coding and writing together. These same spaces are valuable for virtual coffee chats and other informal interactions.”
    Communication enabled by technology is in fact foundational to a good remote postdoc experience, according to Ingeman and co-authors, who advocate for investment in and use of reliable videoconferencing tools that can help create rapport between team members, and the creation of digital spaces to share documents and files. Transparency and early expectation setting are keys to a good start. In situations where proximity would have naturally led to interaction, the researchers recommend having a robust communications plan. Additionally, postdocs would benefit from establishing academic connections within their local community to combat isolation.
    There are benefits to reap from such arrangements and practices, the researchers continued. For the postdoc, it could mean less stress and hardship, and more focus on work. For the team, it could mean a wider network overall.
    “For me, remote postdoc work was a real bridge to becoming an independent researcher,” said Ingeman, who “struggled with isolation early on,” but has since gained a local academic community, resulting in productive new research collaborations.
    Additionally, opening the postdoc pool to remote researchers can result in a more diverse set of applicants.
    “The burdens of relocating for a temporary postdoc position often fall hardest on members of underrepresented groups,” Ingeman added. “So the idea of supporting remote work really stands out to me as an equity issue.”
    Of course, not all postdoc positions can be remote; lab and field work still require a presence. But as social distancing protocols and pandemic safety measures are forcing research teams to minimize in-person contact or undergo quarantine at a moment’s notice, developing remote research skills may well become a valuable part of any early-career researcher’s toolkit.
    “Even labs and research groups that are returning to campus in a limited way may face periodic campus closures, so it makes sense to integrate remote tools now,” Ingeman said. “Our suggestions for remote postdocs are absolutely applicable to other lab members working from home during closures.”

  • Simple mod makes quantum states last 10,000 times longer

    If we can harness it, quantum technology promises fantastic new possibilities. But first, scientists need to coax quantum systems to stay coherent for longer than a few millionths of a second.
    A team of scientists at the University of Chicago’s Pritzker School of Molecular Engineering announced the discovery of a simple modification that allows quantum systems to stay operational — or “coherent” — 10,000 times longer than before. Though the scientists tested their technique on a particular class of quantum systems called solid-state qubits, they think it should be applicable to many other kinds of quantum systems and could thus revolutionize quantum communication, computing and sensing.
    The study was published Aug. 13 in Science.
    “This breakthrough lays the groundwork for exciting new avenues of research in quantum science,” said study lead author David Awschalom, the Liew Family Professor in Molecular Engineering, senior scientist at Argonne National Laboratory and director of the Chicago Quantum Exchange. “The broad applicability of this discovery, coupled with a remarkably simple implementation, allows this robust coherence to impact many aspects of quantum engineering. It enables new research opportunities previously thought impractical.”
    Down at the level of atoms, the world operates according to the rules of quantum mechanics — very different from what we see around us in our daily lives. These different rules could translate into technology like virtually unhackable networks or extremely powerful computers; the U.S. Department of Energy released a blueprint for the future quantum internet in an event at UChicago on July 23. But fundamental engineering challenges remain: Quantum states need an extremely quiet, stable space to operate, as they are easily disturbed by background noise coming from vibrations, temperature changes or stray electromagnetic fields.
    Thus, scientists try to find ways to keep the system coherent as long as possible. One common approach is physically isolating the system from the noisy surroundings, but this can be unwieldy and complex. Another technique involves making all of the materials as pure as possible, which can be costly. The scientists at UChicago took a different tack.
    “With this approach, we don’t try to eliminate noise in the surroundings; instead, we ‘trick’ the system into thinking it doesn’t experience the noise,” said postdoctoral researcher Kevin Miao, the first author of the paper.
    In tandem with the usual electromagnetic pulses used to control quantum systems, the team applied an additional continuous alternating magnetic field. By precisely tuning this field, the scientists could rapidly rotate the electron spins and allow the system to “tune out” the rest of the noise.
    “To get a sense of the principle, it’s like sitting on a merry-go-round with people yelling all around you,” Miao explained. “When the ride is still, you can hear them perfectly, but if you’re rapidly spinning, the noise blurs into the background.”
    This small change allowed the system to stay coherent up to 22 milliseconds, four orders of magnitude longer than without the modification — and far longer than any previously reported electron spin system. (For comparison, a blink of an eye takes about 350 milliseconds.) The system is able to almost completely tune out some forms of temperature fluctuations, physical vibrations, and electromagnetic noise, all of which usually destroy quantum coherence.
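    The protective effect of a continuous drive can be illustrated with a toy simulation: a two-level spin prepared along x dephases under slow magnetic noise, but stays coherent when a strong drive along the same axis is added (continuous driving of this kind is often called spin locking). This is a conceptual sketch with arbitrary parameters, not the experiment’s actual physics model.

```python
# Toy model, not the authors' code: a spin prepared along +x under quasi-static z-noise,
# with and without a continuous drive along x (hbar = 1; all parameters illustrative).
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def mean_coherence(omega_drive, t_max=200.0, dt=0.1, n_traj=200, noise_sigma=0.2, seed=0):
    """Average <sigma_x>(t) over random noise realizations."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # +x eigenstate
    signal = np.zeros(steps)
    for _ in range(n_traj):
        delta = rng.normal(0.0, noise_sigma)              # one slow noise value per run
        H = 0.5 * delta * sz + 0.5 * omega_drive * sx
        U = expm(-1j * H * dt)                            # single-step propagator
        psi = psi0.copy()
        for k in range(steps):
            signal[k] += np.real(np.conj(psi) @ (sx @ psi))
            psi = U @ psi
    return signal / n_traj

free = mean_coherence(omega_drive=0.0)    # no drive: <sigma_x> decays toward zero
locked = mean_coherence(omega_drive=5.0)  # strong continuous drive: coherence persists
print(f"final <sigma_x>: free={free[-1]:.2f}, driven={locked[-1]:.2f}")
```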
    The simple fix could unlock discoveries in virtually every area of quantum technology, the scientists said.
    “This approach creates a pathway to scalability,” said Awschalom. “It should make storing quantum information in electron spin practical. Extended storage times will enable more complex operations in quantum computers and allow quantum information transmitted from spin-based devices to travel longer distances in networks.”
    Though their tests were run in a solid-state quantum system using silicon carbide, the scientists believe the technique should have similar effects in other types of quantum systems, such as superconducting quantum bits and molecular quantum systems. This level of versatility is unusual for such an engineering breakthrough.
    “There are a lot of candidates for quantum technology that were pushed aside because they couldn’t maintain quantum coherence for long periods of time,” Miao said. “Those could be re-evaluated now that we have this way to massively improve coherence.
    “The best part is, it’s incredibly easy to do,” he added. “The science behind it is intricate, but the logistics of adding an alternating magnetic field are very straightforward.”

    Story Source:
    Materials provided by University of Chicago. Original written by Louise Lerner. Note: Content may be edited for style and length.

  • COVID-19 symptom tracker ensures privacy during isolation

    An online COVID-19 symptom tracking tool developed by researchers at Georgetown University Medical Center ensures a person’s confidentiality while being able to actively monitor their symptoms. The tool is not proprietary and can be used by entities that are not able to develop their own tracking systems.
    Identifying and monitoring people infected with COVID-19, or exposed to people with infection, is critical to preventing widespread transmission of the disease. Details of the COVID-19 Symptom Tracker and a pilot study were published August 13, 2020, in the Journal of Medical Internet Research (JMIR).
    “One of the major impediments to tracking people with, or at risk of, COVID-19 has been an assurance of privacy and confidentiality,” says infectious disease expert Seble G. Kassaye, MD, MS, lead author and associate professor of medicine at Georgetown University Medical Center. “Our online system provides a method for efficient, active monitoring of large numbers of individuals under quarantine or home isolation, while maintaining privacy.”
    The Georgetown internet tool assigns a unique identifier as people enter their symptoms and other relevant demographic data. One function in the system allows institutions to generate reports about items on which people can act, such as symptoms that might require medical attention. Additionally, people using the system are provided with information and links to Centers for Disease Control and Prevention COVID-19 recommendations and instructions for how people with symptoms should seek care.
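    A minimal sketch of that privacy-preserving pattern is shown below; it is not Georgetown’s implementation, and the field names are hypothetical. Reports are keyed to a random identifier rather than to any personal details.

```python
# Minimal sketch (not the Georgetown system): store symptom reports against a random,
# de-identified participant ID. Symptom field names below are hypothetical examples.
import uuid
from datetime import datetime, timezone

def new_participant_id() -> str:
    """Random identifier with no embedded personal information."""
    return uuid.uuid4().hex

def record_symptoms(store: dict, participant_id: str, symptoms: dict) -> None:
    """Append a timestamped, de-identified symptom report."""
    store.setdefault(participant_id, []).append(
        {"reported_at": datetime.now(timezone.utc).isoformat(), **symptoms}
    )

reports = {}
pid = new_participant_id()
record_symptoms(reports, pid, {"fever": True, "cough": False, "temperature_c": 38.2})
```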
    Development of the system was rapid — it took five days to design. The joint project included Georgetown University’s J.C. Smart, PhD, chief scientist of AvesTerra, a knowledge management environment that supports data integration and synthesis to identify actionable events and maintain privacy, and Georgetown’s vice president for research and chief technology officer, Spiros Dimolitsas, PhD.
    “We knew that time was of the essence and the challenges of traditional contact tracing became very clear to us based on one of our first patients who had over 500 exposures,” says Kassaye. “This was what motivated us to work on this, essentially day and night.”
    The tool launched on March 20, followed by initial testing of the system with the voluntary participation of 48 Georgetown University School of Medicine students or their social contacts. Participants were asked to enter data twice daily for three days between March 31 and April 5, 2020.
    “The lack of identifying data being collected in the system should reassure individual users and alleviate personal inhibitions that appear to be the Achilles’ heel of other digital contact tracing apps that require identifying information,” says Kassaye. She also noted that this system could be used by health-related organizations during the reopening of businesses to provide reassurance to their users that the enterprise is actively, rather than passively, monitoring its staff.
    Feedback from healthcare groups using the platform led to the release of a Spanish language version. As the data currently needs to be entered through the website, development of an app for cellphone use could greatly enhance the usability of the tool, said the investigators. For places where internet access is problematic, the researchers are also pursuing development of a voice activated version.
    The tracker can be viewed at: https://www.covidgu.org
    In addition to Kassaye, Amanda B. Spence of Georgetown University Medical Center contributed to this work. Other authors include Edwin Lau and John Cederholm, LEDR Technologies Inc.; and David M. Bridgeland, Hanging Steel Productions LLC.
    This work was partially supported by a National Institutes of Health grant UL1TR001409. The authors report no conflicts of interest.

  • Task force examines role of mobile health technology in COVID-19 pandemic

    An international task force, including two University of Massachusetts Amherst computer scientists, concludes in new research that mobile health (mHealth) technologies are a viable option to monitor COVID-19 patients at home and predict which ones will need medical intervention.
    The technologies — including wearable sensors, electronic patient-reported data and digital contact tracing — also could be used to monitor and predict coronavirus exposure in people presumed to be free of infection, providing information that could help prioritize diagnostic testing.
    The 60-member panel, with members from Australia, Germany, Ireland, Italy, Switzerland and across the U.S., was led by Harvard Medical School associate professor Paolo Bonato, director of the Motion Analysis Lab at Spaulding Rehabilitation Hospital in Boston. UMass Amherst task force members Sunghoon Ivan Lee and Tauhidur Rahman, both assistant professors in the College of Information and Computer Sciences, focused their review on mobile health sensors, their area of expertise.
    The team’s study, “Can mHealth Technology Help Mitigate the Effects of the COVID-19 Pandemic?” was published Wednesday in the IEEE Open Journal of Engineering in Medicine and Biology.
    “To be able to activate a diverse group of experts with such a singular focus speaks to the commitment the entire research and science community has in addressing this pandemic,” Bonato says. “Our goal is to quickly get important findings into the hands of the clinical community so we continue to build effective interventions.”
    The task force brought together researchers and experts from a range of fields, including computer science, biomedical engineering, medicine and health sciences. “A large number of researchers and experts around the world dedicated months of efforts to carefully reviewing technologies in eight different areas,” Lee says.
    “I hope that the paper will enable current and future researchers to understand the complex problems and the limitations and potential solutions of these state-of-the-art mobile health systems,” Rahman adds.
    The task force review found that smartphone applications enabling self-reports and wearable sensors enabling physiological data collection could be used to monitor clinical workers and detect early signs of an outbreak in hospital or healthcare settings.
    Similarly, in the community, early detection of COVID-19 cases could be achieved by building on research that showed it is possible to predict influenza-like illness rates, as well as COVID-19 epidemic trends, by using wearable sensors to capture heart rate and sleep duration, among other data.
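    The sketch below illustrates that idea on purely synthetic data (it does not reproduce the cited studies): a simple logistic-regression model that flags likely illness days from wearable-derived resting heart rate and sleep duration.

```python
# Illustrative sketch only: synthetic wearable data, not results from the cited studies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
resting_hr = rng.normal(62, 5, n)        # beats per minute
sleep_hours = rng.normal(7.2, 1.0, n)
# Synthetic ground truth: elevated heart rate and short sleep raise illness probability
p_ill = 1 / (1 + np.exp(-(0.3 * (resting_hr - 62) - 0.8 * (sleep_hours - 7.2) - 2.5)))
ill = rng.random(n) < p_ill

X = np.column_stack([resting_hr, sleep_hours])
X_tr, X_te, y_tr, y_te = train_test_split(X, ill, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```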
    Lee and Rahman, inventors of mobile health sensors themselves, reviewed 27 commercially available remote monitoring technologies that could be immediately used in clinical practices to help patients and frontline healthcare workers monitor symptoms of COVID-19.
    “We carefully investigated whether the technologies could ‘monitor’ a number of obvious indicators and symptoms of COVID-19 and whether any clearance or certification from health authorities was needed,” Lee says. “We considered ease of use and integration flexibility with existing hospital electronic systems. Then we identified 12 examples of technologies that could potentially be used to monitor patients and healthcare workers.”
    Bonato says additional research will help expand the understanding of how best to use and develop the technologies. “The better data and tracking we can collect using mHealth technologies can help public health experts understand the scope and spread of this virus and, most importantly, hopefully help more people get the care they need earlier,” he says.
    The paper concludes, “When combined with diagnostic and immune status testing, mHealth technology could be a valuable tool to help mitigate, if not prevent, the next surge of COVID-19 cases.”

  • Artificial intelligence recognizes deteriorating photoreceptors

    A software based on artificial intelligence (AI), which was developed by researchers at the Eye Clinic of the University Hospital Bonn, Stanford University and University of Utah, enables the precise assessment of the progression of geographic atrophy (GA), a disease of the light sensitive retina caused by age-related macular degeneration (AMD). This innovative approach permits the fully automated measurement of the main atrophic lesions using data from optical coherence tomography, which provides three-dimensional visualization of the structure of the retina. In addition, the research team can precisely determine the integrity of light sensitive cells of the entire central retina and also detect progressive degenerative changes of the so-called photoreceptors beyond the main lesions. The findings will be used to assess the effectiveness of new innovative therapeutic approaches. The study has now been published in the journal “JAMA Ophthalmology.”
    There is no effective treatment for geographic atrophy, one of the most common causes of blindness in industrialized nations. The disease damages cells of the retina and causes them to die. The main lesions, areas of degenerated retina, also known as “geographic atrophy,” expand as the disease progresses and result in blind spots in the affected person’s visual field. A major challenge for evaluating therapies is that these lesions progress slowly, which means that intervention studies require a long follow-up period. “When evaluating therapeutic approaches, we have so far concentrated primarily on the main lesions of the disease. However, in addition to central visual field loss, patients also suffer from symptoms such as a reduced light sensitivity in the surrounding retina,” explains Prof. Dr. Frank G. Holz, Director of the Eye Clinic at the University Hospital Bonn. “Preserving the microstructure of the retina outside the main lesions would therefore already be an important achievement, which could be used to verify the effectiveness of future therapeutic approaches.”
    Integrity of light sensitive cells predicts disease progression
    The researchers were furthermore able to show that the integrity of light sensitive cells outside areas of geographic atrophy is a predictor of the future progression of the disease. “It may therefore be possible to slow down the progression of the main atrophic lesions by using therapeutic approaches that protect the surrounding light sensitive cells,” says Prof. Monika Fleckenstein of the Moran Eye Center at the University of Utah in the USA, initiator of the Bonn-based natural history study on geographic atrophy, on which the current publication is based.
    “Research in ophthalmology is increasingly data-driven. The fully automated, precise analysis of the finest, microstructural changes in optical coherence tomography data using AI represents an important step towards personalized medicine for patients with age-related macular degeneration,” explains lead author Dr. Maximilian Pfau from the Eye Clinic at the University Hospital Bonn, who is currently working as a fellow of the German Research Foundation (DFG) and postdoctoral fellow at Stanford University in the Department of Biomedical Data Science. “It would also be useful to re-evaluate older treatment studies with the new methods in order to assess possible effects on photoreceptor integrity.”

    Story Source:
    Materials provided by University of Bonn. Note: Content may be edited for style and length.

  • AI system for high precision recognition of hand gestures

    Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed an Artificial Intelligence (AI) system that recognises hand gestures by combining skin-like electronics with computer vision.
    The recognition of human hand gestures by AI systems has been a valuable development over the last decade and has been adopted in high-precision surgical robots, health monitoring equipment and in gaming systems.
    AI gesture recognition systems that were initially visual-only have been improved upon by integrating inputs from wearable sensors, an approach known as ‘data fusion’. The wearable sensors recreate the skin’s sensing ability, known as the ‘somatosensory’ sense.
    However, gesture recognition precision is still hampered by the low quality of data arriving from wearable sensors, typically due to their bulkiness and poor contact with the user, and the effects of visually blocked objects and poor lighting. Further challenges arise from the integration of visual and sensory data as they represent mismatched datasets that must be processed separately and then merged at the end, which is inefficient and leads to slower response times.
    To tackle these challenges, the NTU team created a ‘bioinspired’ data fusion system that uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, and an AI approach that resembles the way that the skin senses and vision are handled together in the brain.
    The NTU scientists developed their bio-inspired AI system by combining three neural network approaches in one system: they used a ‘convolutional neural network’, which is a machine learning method for early visual processing, a multilayer neural network for early somatosensory information processing, and a ‘sparse neural network’ to ‘fuse’ the visual and somatosensory information together.
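    A minimal PyTorch sketch of that three-part layout is shown below. The layer sizes are illustrative, and the sparse fusion stage is only approximated here with a small dropout-regularized network, so this should be read as an outline under stated assumptions rather than the authors’ architecture.

```python
# Illustrative sketch, not the published model: a CNN for images, a multilayer network
# for strain-sensor signals, and a fusion network that combines the two streams.
import torch
import torch.nn as nn

class VisualSomatosensoryFusion(nn.Module):
    def __init__(self, n_sensor_channels=5, n_gestures=10):
        super().__init__()
        self.vision = nn.Sequential(                      # early visual processing
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (batch, 32)
        )
        self.somatosensory = nn.Sequential(               # early strain-signal processing
            nn.Linear(n_sensor_channels, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.fusion = nn.Sequential(                      # combine the two streams
            nn.Dropout(0.5),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_gestures),
        )

    def forward(self, image, strain):
        fused = torch.cat([self.vision(image), self.somatosensory(strain)], dim=1)
        return self.fusion(fused)

model = VisualSomatosensoryFusion()
logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 5))   # (batch=8, n_gestures)
```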
    The result is a system that can recognise human gestures more accurately and efficiently than existing methods.
    Lead author of the study, Professor Chen Xiaodong, from the School of Materials Science and Engineering at NTU, said, “Our data fusion architecture has its own unique bioinspired features which include a human-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture unique to existing approaches.”
    “Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attach onto the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks,” added Prof Chen, who is also Director of the Innovative Centre for Flexible Devices (iFLEX) at NTU.
    The team comprising scientists from NTU Singapore and the University of Technology Sydney (UTS) published their findings in the scientific journal Nature Electronics in June.
    High recognition accuracy even in poor environmental conditions
    To capture reliable sensory data from hand gestures, the research team fabricated a transparent, stretchable strain sensor that adheres to the skin but cannot be seen in camera images.
    As a proof of concept, the team tested their bio-inspired AI system using a robot controlled through hand gestures and guided it through a maze.
    Results showed that hand gesture recognition powered by the bio-inspired AI system was able to guide the robot through the maze with zero errors, compared to six recognition errors made by a vision-based recognition system.
    High accuracy was also maintained when the new AI system was tested under poor conditions including noise and unfavourable lighting. The AI system worked effectively in the dark, achieving a recognition accuracy of over 96.7 per cent.
    First author of the study, Dr Wang Ming from the School of Materials Science & Engineering at NTU Singapore, said, “The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation. As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy.”
    Providing an independent view, Professor Markus Antonietti, Director of Max Planck Institute of Colloids and Interfaces in Germany said, “The findings from this paper bring us another step forward to a smarter and more machine-supported world. Much like the invention of the smartphone which has revolutionised society, this work gives us hope that we could one day physically control all of our surrounding world with great reliability and precision through a gesture.”
    “There are simply endless applications for such technology in the marketplace to support this future. For example, from remote robot control in smart workplaces to exoskeletons for the elderly.”
    The NTU research team is now looking to build a VR and AR system based on the AI system developed, for use in areas where high-precision recognition and control are desired, such as entertainment technologies and rehabilitation in the home.