More stories

  •

    Task force examines role of mobile health technology in COVID-19 pandemic

    An international task force, including two University of Massachusetts Amherst computer scientists, concludes in new research that mobile health (mHealth) technologies are a viable option to monitor COVID-19 patients at home and predict which ones will need medical intervention.
    The technologies — including wearable sensors, electronic patient-reported data and digital contact tracing — also could be used to monitor and predict coronavirus exposure in people presumed to be free of infection, providing information that could help prioritize diagnostic testing.
    The 60-member panel, with members from Australia, Germany, Ireland, Italy, Switzerland and across the U.S., was led by Harvard Medical School associate professor Paolo Bonato, director of the Motion Analysis Lab at Spaulding Rehabilitation Hospital in Boston. UMass Amherst task force members Sunghoon Ivan Lee and Tauhidur Rahman, both assistant professors in the College of Information and Computer Sciences, focused their review on mobile health sensors, their area of expertise.
    The team’s study, “Can mHealth Technology Help Mitigate the Effects of the COVID-19 Pandemic?” was published Wednesday in the IEEE Open Journal of Engineering in Medicine and Biology.
    “To be able to activate a diverse group of experts with such a singular focus speaks to the commitment the entire research and science community has in addressing this pandemic,” Bonato says. “Our goal is to quickly get important findings into the hands of the clinical community so we continue to build effective interventions.”
    The task force brought together researchers and experts from a range of fields, including computer science, biomedical engineering, medicine and health sciences. “A large number of researchers and experts around the world dedicated months of efforts to carefully reviewing technologies in eight different areas,” Lee says.

    “I hope that the paper will enable current and future researchers to understand the complex problems and the limitations and potential solutions of these state-of-the-art mobile health systems,” Rahman adds.
    The task force review found that smartphone applications enabling self-reports and wearable sensors enabling physiological data collection could be used to monitor clinical workers and detect early signs of an outbreak in hospital or healthcare settings.
    Similarly, in the community, early detection of COVID-19 cases could be achieved by building on research that showed it is possible to predict influenza-like illness rates, as well as COVID-19 epidemic trends, by using wearable sensors to capture heart rate and sleep duration, among other data.
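    As a purely illustrative sketch of that idea (not a method from the task force’s review), a personal-baseline rule might flag days on which resting heart rate rises while sleep duration drops; the data and thresholds below are hypothetical:

        # Illustrative only: flag days whose resting heart rate and sleep
        # duration deviate from a person's own baseline. Data and the
        # z-score threshold are made up, not taken from the review.
        from statistics import mean, stdev

        def flag_anomalous_days(resting_hr, sleep_hours, z_threshold=2.0):
            """Return indices of days with unusually high heart rate AND
            unusually short sleep, relative to the person's own history."""
            hr_mu, hr_sd = mean(resting_hr), stdev(resting_hr)
            sl_mu, sl_sd = mean(sleep_hours), stdev(sleep_hours)
            flagged = []
            for day, (hr, sl) in enumerate(zip(resting_hr, sleep_hours)):
                if (hr - hr_mu) / hr_sd > z_threshold and (sl - sl_mu) / sl_sd < -z_threshold:
                    flagged.append(day)
            return flagged

        # Example with made-up daily readings (bpm, hours of sleep):
        print(flag_anomalous_days([62, 64, 61, 63, 65, 62, 78],
                                  [7.5, 7.2, 7.8, 7.0, 7.4, 7.6, 4.9]))  # -> [6]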
    Lee and Rahman, inventors of mobile health sensors themselves, reviewed 27 commercially available remote monitoring technologies that could be immediately used in clinical practices to help patients and frontline healthcare workers monitor symptoms of COVID-19.
    “We carefully investigated whether the technologies could ‘monitor’ a number of obvious indicators and symptoms of COVID-19 and whether any clearance or certification from health authorities was needed,” Lee says. “We considered ease of use and integration flexibility with existing hospital electronic systems. Then we identified 12 examples of technologies that could potentially be used to monitor patients and healthcare workers.”
    Bonato says additional research will help expand the understanding of how best to use and develop the technologies. “The better data and tracking we can collect using mHealth technologies can help public health experts understand the scope and spread of this virus and, most importantly, hopefully help more people get the care they need earlier,” he says.
    The paper concludes, “When combined with diagnostic and immune status testing, mHealth technology could be a valuable tool to help mitigate, if not prevent, the next surge of COVID-19 cases.”

  •

    Artificial intelligence recognizes deteriorating photoreceptors

    Software based on artificial intelligence (AI), developed by researchers at the Eye Clinic of the University Hospital Bonn, Stanford University and the University of Utah, enables precise assessment of the progression of geographic atrophy (GA), a disease of the light-sensitive retina caused by age-related macular degeneration (AMD). The approach permits fully automated measurement of the main atrophic lesions using data from optical coherence tomography, which provides three-dimensional visualization of the structure of the retina. In addition, the research team can precisely determine the integrity of the light-sensitive cells across the entire central retina and detect progressive degenerative changes of the so-called photoreceptors beyond the main lesions. The findings will be used to assess the effectiveness of new therapeutic approaches. The study has now been published in the journal “JAMA Ophthalmology.”
    There is no effective treatment for geographic atrophy, one of the most common causes of blindness in industrialized nations. The disease damages cells of the retina and causes them to die. The main lesions, areas of degenerated retina, also known as “geographic atrophy,” expand as the disease progresses and result in blind spots in the affected person’s visual field. A major challenge for evaluating therapies is that these lesions progress slowly, which means that intervention studies require a long follow-up period. “When evaluating therapeutic approaches, we have so far concentrated primarily on the main lesions of the disease. However, in addition to central visual field loss, patients also suffer from symptoms such as a reduced light sensitivity in the surrounding retina,” explains Prof. Dr. Frank G. Holz, Director of the Eye Clinic at the University Hospital Bonn. “Preserving the microstructure of the retina outside the main lesions would therefore already be an important achievement, which could be used to verify the effectiveness of future therapeutic approaches.”
    Integrity of light-sensitive cells predicts disease progression
    The researchers were furthermore able to show that the integrity of light-sensitive cells outside areas of geographic atrophy is a predictor of the future progression of the disease. “It may therefore be possible to slow down the progression of the main atrophic lesions by using therapeutic approaches that protect the surrounding light-sensitive cells,” says Prof. Monika Fleckenstein of the Moran Eye Center at the University of Utah in the USA, initiator of the Bonn-based natural history study on geographic atrophy, on which the current publication is based.
    “Research in ophthalmology is increasingly data-driven. The fully automated, precise analysis of the finest, microstructural changes in optical coherence tomography data using AI represents an important step towards personalized medicine for patients with age-related macular degeneration,” explains lead author Dr. Maximilian Pfau from the Eye Clinic at the University Hospital Bonn, who is currently working as a fellow of the German Research Foundation (DFG) and postdoctoral fellow at Stanford University in the Department of Biomedical Data Science. “It would also be useful to re-evaluate older treatment studies with the new methods in order to assess possible effects on photoreceptor integrity.”

    Story Source:
    Materials provided by University of Bonn. Note: Content may be edited for style and length.

  •

    AI system for high precision recognition of hand gestures

    Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed an Artificial Intelligence (AI) system that recognises hand gestures by combining skin-like electronics with computer vision.
    The recognition of human hand gestures by AI systems has been a valuable development over the last decade and has been adopted in high-precision surgical robots, health monitoring equipment and in gaming systems.
    AI gesture recognition systems that were initially visual-only have been improved upon by integrating inputs from wearable sensors, an approach known as ‘data fusion’. The wearable sensors recreate the skin’s sensing ability, known as the ‘somatosensory’ sense.
    However, gesture recognition precision is still hampered by the low quality of data arriving from wearable sensors, typically due to their bulkiness and poor contact with the user, and the effects of visually blocked objects and poor lighting. Further challenges arise from the integration of visual and sensory data as they represent mismatched datasets that must be processed separately and then merged at the end, which is inefficient and leads to slower response times.
    To tackle these challenges, the NTU team created a ‘bioinspired’ data fusion system that uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, and an AI approach that resembles the way that the skin senses and vision are handled together in the brain.
    The NTU scientists developed their bio-inspired AI system by combining three neural network approaches in one system: they used a ‘convolutional neural network’, which is a machine learning method for early visual processing, a multilayer neural network for early somatosensory information processing, and a ‘sparse neural network’ to ‘fuse’ the visual and somatosensory information together.
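    The announcement does not give implementation details, but the wiring of such a system can be sketched in broad strokes. The PyTorch outline below is a hypothetical stand-in: a small convolutional branch for camera frames, a multilayer branch for strain-sensor signals, and a fusion layer that combines the two before classifying the gesture. All layer sizes are invented, and the NTU team’s sparse fusion network is approximated here by an ordinary fully connected layer.

        # Hypothetical two-branch fusion classifier in the spirit of the
        # description above; not the NTU team's actual model.
        import torch
        import torch.nn as nn

        class GestureFusionNet(nn.Module):
            def __init__(self, n_strain_channels=5, n_gestures=10):
                super().__init__()
                # Visual branch: early processing of 64x64 grayscale frames.
                self.vision = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(),  # -> 16 * 16 * 16 features
                )
                # Somatosensory branch: multilayer network for strain signals.
                self.somato = nn.Sequential(
                    nn.Linear(n_strain_channels, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                )
                # Fusion stage: combine both feature vectors, then classify.
                self.fusion = nn.Sequential(
                    nn.Linear(16 * 16 * 16 + 32, 64), nn.ReLU(),
                    nn.Linear(64, n_gestures),
                )

            def forward(self, frame, strain):
                fused = torch.cat([self.vision(frame), self.somato(strain)], dim=1)
                return self.fusion(fused)

        model = GestureFusionNet()
        logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 5))
        print(logits.shape)  # torch.Size([4, 10])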

    The result is a system that can recognise human gestures more accurately and efficiently than existing methods.
    Lead author of the study, Professor Chen Xiaodong, from the School of Materials Science and Engineering at NTU, said, “Our data fusion architecture has its own unique bioinspired features which include a human-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture unique to existing approaches.”
    “Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attach to the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks,” added Prof Chen, who is also Director of the Innovative Centre for Flexible Devices (iFLEX) at NTU.
    The team comprising scientists from NTU Singapore and the University of Technology Sydney (UTS) published their findings in the scientific journal Nature Electronics in June.
    High recognition accuracy even in poor environmental conditions
    To capture reliable sensory data from hand gestures, the research team fabricated a transparent, stretchable strain sensor that adheres to the skin but cannot be seen in camera images.

    As a proof of concept, the team tested their bio-inspired AI system using a robot controlled through hand gestures and guided it through a maze.
    Results showed that hand gesture recognition powered by the bio-inspired AI system was able to guide the robot through the maze with zero errors, compared to six recognition errors made by a visual-based recognition system.
    High accuracy was also maintained when the new AI system was tested under poor conditions including noise and unfavourable lighting. The AI system worked effectively in the dark, achieving a recognition accuracy of over 96.7 per cent.
    First author of the study, Dr Wang Ming from the School of Materials Science & Engineering at NTU Singapore, said, “The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation. As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy.”
    Providing an independent view, Professor Markus Antonietti, Director of Max Planck Institute of Colloids and Interfaces in Germany said, “The findings from this paper bring us another step forward to a smarter and more machine-supported world. Much like the invention of the smartphone which has revolutionised society, this work gives us hope that we could one day physically control all of our surrounding world with great reliability and precision through a gesture.”
    “There are simply endless applications for such technology in the marketplace to support this future. For example, from a remote robot control over smart workplaces to exoskeletons for the elderly.”
    The NTU research team is now looking to build a VR and AR system based on the AI system developed, for use in areas where high-precision recognition and control are desired, such as entertainment technologies and rehabilitation in the home.

  •

    A novel strategy for quickly identifying Twitter trolls

    Two algorithms that account for distinctive use of repeated words and word pairs require as few as 50 tweets to accurately distinguish deceptive “troll” messages from those posted by public figures. Sergei Monakhov of Friedrich Schiller University in Jena, Germany, presents these findings in the open-access journal PLOS ONE on August 12, 2020.
    Troll internet messages aim to achieve a specific purpose, while also masking that purpose. For instance, in 2018, 13 Russian nationals were accused of using false personas to interfere with the 2016 U.S. presidential election via social media posts. While previous research has investigated distinguishing characteristics of troll tweets — such as timing, hashtags, and geographical location — few studies have examined linguistic features of the tweets themselves.
    Monakhov took a sociolinguistic approach, focusing on the idea that trolls have a limited number of messages to convey, but must do so multiple times and with enough diversity of wording and topics to fool readers. Using a library of Russian troll tweets and genuine tweets from U.S. congresspeople, Monakhov showed that these troll-specific restrictions result in distinctive patterns of repeated words and word pairs that are different from patterns seen in genuine, non-troll tweets.
    Then, Monakhov tested an algorithm that uses these distinctive patterns to distinguish between genuine tweets and troll tweets. He found that the algorithm required as few as 50 tweets for accurate identification of trolls versus congresspeople. He also found that the algorithm correctly distinguished troll tweets from tweets by Donald Trump, which, although provocative and “potentially misleading” according to Twitter, are not crafted to hide his purpose.
    This new strategy for quickly identifying troll tweets could help inform efforts to combat hybrid warfare while preserving freedom of speech. Further research will be needed to determine whether it can accurately distinguish troll tweets from other types of messages that are not posted by public figures.
    Monakhov adds: “Though troll writing is usually thought of as being permeated with recurrent messages, its most characteristic trait is an anomalous distribution of repeated words and word pairs. Using the ratio of their proportions as a quantitative measure, one needs as few as 50 tweets for identifying internet troll accounts.”
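    A rough sketch of that measure, assuming a simple whitespace tokeniser and treating word pairs as bigrams, is shown below; the example tweets are invented, and the actual algorithms in the paper are more involved:

        # Rough sketch only: compare how often individual words repeat
        # across a set of tweets with how often word pairs (bigrams) do,
        # and report the ratio of the two proportions. Example data are
        # invented; the paper's actual algorithms are more involved.
        from collections import Counter

        def repetition_ratio(tweets):
            words, pairs = Counter(), Counter()
            for text in tweets:
                tokens = text.lower().split()
                words.update(tokens)
                pairs.update(zip(tokens, tokens[1:]))
            # Proportion of distinct words / word pairs seen more than once.
            p_words = sum(1 for c in words.values() if c > 1) / max(len(words), 1)
            p_pairs = sum(1 for c in pairs.values() if c > 1) / max(len(pairs), 1)
            return p_words / p_pairs if p_pairs else float("inf")

        sample = [
            "vote for real change today",
            "real change starts with your vote today",
            "your vote is your voice",
        ]
        print(round(repetition_ratio(sample), 2))  # -> 3.0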

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  •

    Coffee stains inspire optimal printing technique for electronics

    Have you ever spilled your coffee on your desk? You may then have observed one of the most puzzling phenomena of fluid mechanics — the coffee ring effect. This effect has hindered the industrial deployment of functional inks with graphene, 2D materials, and nanoparticles because it makes printed electronic devices behave irregularly.
    Now, after studying this process for years, a team of researchers has created a new family of inks that overcomes this problem, enabling the fabrication of new electronics such as sensors, light detectors, batteries and solar cells.
    Coffee rings form because the liquid evaporates quicker at the edges, causing an accumulation of solid particles that results in the characteristic dark ring. Inks behave like coffee — particles in the ink accumulate around the edges creating irregular shapes and uneven surfaces, especially when printing on hard surfaces like silicon wafers or plastics.
    Researchers, led by Tawfique Hasan from the Cambridge Graphene Centre of the University of Cambridge, with Colin Bain from the Department of Chemistry of Durham University, and Meng Zhang from the School of Electronic and Information Engineering of Beihang University, studied the physics of ink droplets by combining particle tracking in high-speed micro-photography, fluid mechanics, and different combinations of solvents.
    Their solution: alcohol, specifically a mixture of isopropyl alcohol and 2-butanol. Using these, ink particles tend to distribute evenly across the droplet, generating shapes with uniform thickness and properties. Their results are reported in the journal Science Advances.
    “The natural form of ink droplets is spherical — however, because of their composition, our ink droplets adopt pancake shapes,” said Hasan.
    While drying, the new ink droplets deform smoothly across the surface, spreading particles consistently. Using this universal formulation, manufacturers could adopt inkjet printing as a cheap, easy-to-access strategy for the fabrication of electronic devices and sensors. The new inks also avoid polymers and surfactants, commercial additives used to tackle the coffee ring effect that at the same time degrade the electronic properties of graphene and other 2D materials.
    Most importantly, the new methodology enables reproducibility and scalability: researchers managed to print 4,500 nearly identical devices on a silicon wafer and plastic substrate. In particular, they printed gas sensors and photodetectors, both displaying very little variation in performance. Previously, printing a few hundred such devices was considered a success, even if they showed uneven behaviour.
    “Understanding this fundamental behaviour of ink droplets has allowed us to find this ideal solution for inkjet printing all kinds of two-dimensional crystals,” said first author Guohua Hu. “Our formulation can be easily scaled up to print new electronic devices on silicon wafers, or plastics, and even in spray painting and wearables, already matching or exceeding the manufacturability requirements for printed devices.”
    Beyond graphene, the team has optimised over a dozen ink formulations containing different materials. Some of them are graphene’s two-dimensional ‘cousins’ such as black phosphorus and boron nitride, others are more complex structures like heterostructures (‘sandwiches’ of different 2D materials) and nanostructured materials. Researchers say their ink formulations can also print pure nanoparticles and organic molecules. This variety of materials could boost the manufacturing of electronic and photonic devices, as well as more efficient catalysts, solar cells, batteries and functional coatings.
    The team expects to see industrial applications of this technology very soon. Their first proofs of concept — printed sensors and photodetectors — have shown promising results in terms of sensitivity and consistency, exceeding the usual industry requirements. This should attract investors interested in printed and flexible electronics.
    “Our technology could speed up the adoption of inexpensive, low-power, ultra-connected sensors for the internet of things,” said Hasan. “The dream of smart cities will come true.”

  •

    Quantum researchers create an error-correcting cat

    Yale physicists have developed an error-correcting cat — a new device that combines the Schrödinger’s cat concept of superposition (a physical system existing in two states at once) with the ability to fix some of the trickiest errors in a quantum computation.
    It is Yale’s latest breakthrough in the effort to master and manipulate the physics necessary for a useful quantum computer: correcting the stream of errors that crop up among fragile bits of quantum information, called qubits, while performing a task.
    A new study reporting on the discovery appears in the journal Nature. The senior author is Michel Devoret, Yale’s F.W. Beinecke Professor of Applied Physics and Physics. The study’s co-first authors are Alexander Grimm, a former postdoctoral associate in Devoret’s lab who is now a tenure-track scientist at the Paul Scherrer Institute in Switzerland, and Nicholas Frattini, a graduate student in Devoret’s lab.
    Quantum computers have the potential to transform an array of industries, from pharmaceuticals to financial services, by enabling calculations that are orders of magnitude faster than today’s supercomputers.
    Yale — led by Devoret, Robert Schoelkopf, and Steven Girvin — continues to build upon two decades of groundbreaking quantum research. Yale’s approach to building a quantum computer is called “circuit QED” and employs particles of microwave light (photons) in a superconducting microwave resonator.
    In a traditional computer, information is encoded as either 0 or 1. The only errors that crop up during calculations are “bit-flips,” when a bit of information accidentally flips from 0 to 1 or vice versa. The way to correct it is by building in redundancy: using three “physical” bits of information to ensure one “effective” — or accurate — bit.
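    In code, that classical redundancy is just a repetition code with majority voting, as in this minimal sketch:

        # Minimal sketch of classical redundancy: store each logical bit as
        # three physical copies and correct a single bit-flip by majority vote.
        def encode(bit):
            return [bit, bit, bit]

        def decode(copies):
            return 1 if sum(copies) >= 2 else 0

        codeword = encode(1)     # [1, 1, 1]
        codeword[0] ^= 1         # a stray bit-flip corrupts one copy -> [0, 1, 1]
        print(decode(codeword))  # the logical bit, 1, is still recovered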

    In contrast, quantum information bits — qubits — are subject to both bit-flips and “phase-flips,” in which a qubit randomly flips between quantum superpositions (when two opposite states exist simultaneously).
    Until now, quantum researchers have tried to fix errors by adding greater redundancy, requiring an abundance of physical qubits for each effective qubit.
    Enter the cat qubit — named for Schrödinger’s cat, the famous paradox used to illustrate the concept of superposition.
    The idea is that a cat is placed in a sealed box with a radioactive source and a poison that will be triggered if an atom of the radioactive substance decays. The superposition theory of quantum physics suggests that until someone opens the box, the cat is both alive and dead, a superposition of states. Opening the box to observe the cat causes it to abruptly change its quantum state randomly, forcing it to be either alive or dead.
    “Our work flows from a new idea. Why not use a clever way to encode information in a single physical system so that one type of error is directly suppressed?” Devoret asked.

    Unlike the multiple physical qubits needed to maintain one effective qubit, a single cat qubit can prevent phase flips all by itself. The cat qubit encodes an effective qubit into superpositions of two states within a single electronic circuit — in this case a superconducting microwave resonator whose oscillations correspond to the two states of the cat qubit.
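    Numerically, such a cat state is simply a normalised superposition of two coherent states of opposite phase. The NumPy sketch below builds one in a truncated Fock basis purely for illustration; the amplitude and truncation are arbitrary, and this is not a model of the Yale device:

        # Illustrative only: an even "cat" state, |alpha> + |-alpha>, of a
        # resonator mode in a truncated Fock basis. Parameters are arbitrary.
        import numpy as np
        from math import factorial

        def coherent(alpha, n_max=20):
            """Fock-basis amplitudes of a coherent state |alpha>."""
            n = np.arange(n_max)
            norms = np.sqrt([factorial(int(k)) for k in n])
            return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / norms

        alpha = 2.0
        cat = coherent(alpha) + coherent(-alpha)  # superposition of the two states
        cat /= np.linalg.norm(cat)                # normalise

        # For this amplitude the two components barely overlap, which is what
        # lets the encoding suppress one type of error.
        print(round(abs(np.vdot(coherent(alpha), coherent(-alpha))), 4))  # ~0.0003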
    “We achieve all of this by applying microwave frequency signals to a device that is not significantly more complicated than a traditional superconducting qubit,” Grimm said.
    The researchers said they are able to change their cat qubit from any one of its superposition states to any other superposition state, on command. In addition, the researchers developed a new way of reading out — or identifying — the information encoded into the qubit.
    “This makes the system we have developed a versatile new element that will hopefully find its use in many aspects of quantum computation with superconducting circuits,” Devoret said.
    Co-authors of the study are Girvin, Shruti Puri, Shantanu Mundhada, and Steven Touzard, all of Yale; Mazyar Mirrahimi of Inria Paris; and Shyam Shankar of the University of Texas at Austin.
    The United States Department of Defense, the United States Army Research Office, and the National Science Foundation funded the research.

    Story Source:
    Materials provided by Yale University. Original written by Jim Shelton. Note: Content may be edited for style and length.

  •

    Engaging undergrads remotely with an escape room game

    To prevent the spread of COVID-19, many universities canceled classes or held them online this spring — a change likely to continue for many this fall. As a result, hands-on chemistry labs are no longer accessible to undergraduate students. In a new study in the Journal of Chemical Education, researchers describe an alternative way to engage students: a virtual game, modeled on an escape room, in which teams solve chemistry problems to progress and “escape.”
    While some lab-related activities, such as calculations and data analysis, can be done remotely, these can feel like extra work. Faced with the cancellation of their own in-person laboratory classes during the COVID-19 pandemic, Matthew J. Vergne and colleagues looked outside the box. They sought to develop an online game for their students that would mimic the cooperative learning that normally accompanies a lab experience.
    To do so, they designed a virtual escape game with an abandoned chocolate factory theme. Using a survey-creation app, they set up a series of “rooms,” each containing a problem that required students to, for example, calculate the weight of theobromine, a component of chocolate. They tested the escape room game on a class of eight third- and fourth-year undergraduate chemistry and biochemistry students. The researchers randomly paired the students, who worked together over a video conferencing app. In a video call afterward, the students reported collaborating effectively and gave the game good reviews, say the researchers, who also note that it was not possible to ensure students didn’t use outside resources to solve the problems.
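    To give a flavour of the chemistry involved (this snippet is a generic illustration, not taken from the published game), the molar mass of theobromine, C7H8N4O2, can be computed from standard atomic weights:

        # Generic illustration of the kind of calculation in the game:
        # molar mass of theobromine (C7H8N4O2) from standard atomic weights.
        ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
        THEOBROMINE = {"C": 7, "H": 8, "N": 4, "O": 2}

        molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in THEOBROMINE.items())
        print(round(molar_mass, 1), "g/mol")  # -> 180.2 g/mol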
    Future versions of the game could potentially incorporate online simulations or remote access to computer-controlled lab instrumentation on campus, they say.

    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  •

    Soldiers could teach future robots how to outperform humans

    In the future, a Soldier and a game controller may be all that’s needed to teach robots how to outdrive humans.
    At the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive. The team tested its approach — called adaptive planner parameter learning from demonstration, or APPLD — on one of the Army’s experimental autonomous ground vehicles.
    “Using approaches like APPLD, current Soldiers in existing training facilities will be able to contribute to improvements in autonomous systems simply by operating their vehicles as normal,” said Army researcher Dr. Garrett Warnell. “Techniques like these will be an important contribution to the Army’s plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments.”
    The researchers fused machine learning from demonstration algorithms and more classical autonomous navigation systems. Rather than replacing a classical system altogether, APPLD learns how to tune the existing system to behave more like the human demonstration. This paradigm allows for the deployed system to retain all the benefits of classical navigation systems — such as optimality, explainability and safety — while also allowing the system to be flexible and adaptable to new environments, Warnell said.
    “A single demonstration of human driving, provided using an everyday Xbox wireless controller, allowed APPLD to learn how to tune the vehicle’s existing autonomous navigation system differently depending on the particular local environment,” Warnell said. “For example, when in a tight corridor, the human driver slowed down and drove carefully. After observing this behavior, the autonomous system learned to also reduce its maximum speed and increase its computation budget in similar environments. This ultimately allowed the vehicle to successfully navigate autonomously in other tight corridors where it had previously failed.”
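    The announcement does not describe the algorithm’s internals, but the general idea, searching for planner parameters whose commands best reproduce the human demonstration in a given context, can be sketched roughly as below. The planner interface, parameter names, and random search here are hypothetical stand-ins, not the actual APPLD method:

        # Hypothetical sketch of tuning-from-demonstration: for one context
        # (say, a tight corridor), search for planner parameters whose output
        # best matches the human's commands. Interface and search strategy
        # are illustrative stand-ins, not the actual APPLD algorithm.
        import random

        def tune_for_context(planner, demo, param_ranges, n_trials=200, seed=0):
            """demo: list of (observation, human_command) pairs for one context;
            planner(observation, params) -> command."""
            rng = random.Random(seed)
            best_params, best_err = None, float("inf")
            for _ in range(n_trials):
                params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
                err = sum((planner(obs, params) - cmd) ** 2 for obs, cmd in demo)
                if err < best_err:
                    best_params, best_err = params, err
            return best_params

        # Toy planner whose speed command is capped by a tunable max_speed.
        def toy_planner(clearance, params):
            return min(params["max_speed"], 2.0 * clearance)

        demo = [(0.4, 0.5), (0.3, 0.5), (0.5, 0.5)]  # human drove ~0.5 m/s in a corridor
        print(tune_for_context(toy_planner, demo, {"max_speed": (0.1, 2.0)}))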
    This research is part of the Army’s Open Campus initiative, through which Army scientists in Texas collaborate with academic partners at UT Austin.

    “APPLD is yet another example of a growing stream of research results that has been facilitated by the unique collaboration arrangement between UT Austin and the Army Research Lab,” said Dr. Peter Stone, professor and chair of the Robotics Consortium at UT Austin. “By having Dr. Warnell embedded at UT Austin full-time, we are able to quickly identify and tackle research problems that are both cutting-edge scientific advances and also immediately relevant to the Army.”
    The team’s experiments showed that, after training, the APPLD system was able to navigate the test environments more quickly and with fewer failures than with the classical system. Additionally, the trained APPLD system often navigated the environment faster than the human who trained it. The team’s work, “APPLD: Adaptive Planner Parameter Learning From Demonstration,” was published in the peer-reviewed journal IEEE Robotics and Automation Letters.
    “From a machine learning perspective, APPLD contrasts with so-called end-to-end learning systems that attempt to learn the entire navigation system from scratch,” Stone said. “These approaches tend to require a lot of data and may lead to behaviors that are neither safe nor robust. APPLD leverages the parts of the control system that have been carefully engineered, while focusing its machine learning effort on the parameter tuning process, which is often done based on a single person’s intuition.”
    APPLD represents a new paradigm in which people without expert-level knowledge in robotics can help train and improve autonomous vehicle navigation in a variety of environments. Rather than small teams of engineers trying to manually tune navigation systems in a small number of test environments, a virtually unlimited number of users would be able to provide the system the data it needs to tune itself to an unlimited number of environments.
    “Current autonomous navigation systems typically must be re-tuned by hand for each new deployment environment,” said Army researcher Dr. Jonathan Fink. “This process is extremely difficult — it must be done by someone with extensive training in robotics, and it requires a lot of trial and error until the right systems settings can be found. In contrast, APPLD tunes the system automatically by watching a human drive the system — something that anyone can do if they have experience with a video game controller. During deployment, APPLD also allows the system to re-tune itself in real-time as the environment changes.”
    The Army’s focus on modernizing the Next Generation Combat Vehicle includes designing both optionally manned fighting vehicles and robotic combat vehicles that can navigate autonomously in off-road deployment environments. While Soldiers can navigate these environments driving current combat vehicles, the environments remain too challenging for state-of-the-art autonomous navigation systems. APPLD and similar approaches provide a new potential way for the Army to improve existing autonomous navigation capabilities.
    “In addition to the immediate relevance to the Army, APPLD also creates the opportunity to bridge the gap between traditional engineering approaches and emerging machine learning techniques, to create robust, adaptive, and versatile mobile robots in the real world,” said Dr. Xuesu Xiao, a postdoctoral researcher at UT Austin and lead author of the paper.
    To continue this research, the team will test the APPLD system in a variety of outdoor environments, employ Soldier drivers, and experiment with a wider variety of existing autonomous navigation approaches. Additionally, the researchers will investigate whether including additional sensor information, such as camera images, can lead to learning more complex behaviors, such as tuning the navigation system to operate under varying conditions like different terrain or the presence of other objects.