More stories

  • AI can predict early death risk

    Researchers at Geisinger have found that a computer algorithm developed using echocardiogram videos of the heart can predict mortality within a year.
    The algorithm — an example of what is known as machine learning, or artificial intelligence (AI) — outperformed other clinically used predictors, including pooled cohort equations and the Seattle Heart Failure score. The results of the study were published in Nature Biomedical Engineering.
    “We were excited to find that machine learning can leverage unstructured datasets such as medical images and videos to improve on a wide range of clinical prediction models,” said Chris Haggerty, Ph.D., co-senior author and assistant professor in the Department of Translational Data Science and Informatics at Geisinger.
    Imaging is critical to treatment decisions in most medical specialties and has become one of the most data-rich components of the electronic health record (EHR). For example, a single ultrasound of the heart yields approximately 3,000 images, and cardiologists have limited time to interpret these images within the context of numerous other diagnostic data. This creates a substantial opportunity to leverage technology, such as machine learning, to manage and analyze this data and ultimately provide intelligent computer assistance to physicians.
    For their study, the research team used specialized computational hardware to train the machine learning model on 812,278 echocardiogram videos collected from 34,362 Geisinger patients over the past ten years. The study compared the model’s results to predictions made by cardiologists in multiple surveys; a subsequent survey showed that, when assisted by the model, cardiologists’ prediction accuracy improved by 13 percent. With nearly 50 million images, this is one of the largest medical image datasets ever used in a published study.
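    The paper itself does not include code, but a minimal sketch of the kind of video-based model described here might look like the following. The architecture, input shape, and training step are illustrative assumptions, not the authors’ implementation.

```python
# Illustrative sketch only: a small 3D CNN that maps an echocardiogram clip to a
# 1-year mortality probability. Architecture, input size, and labels are assumptions
# for demonstration, not the Geisinger model.
import torch
import torch.nn as nn

class EchoMortalityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # input: (batch, 1, frames, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                       # pool over time and space
        )
        self.classifier = nn.Linear(32, 1)                 # single logit: 1-year mortality

    def forward(self, clip):
        return self.classifier(self.features(clip).flatten(1))  # raw logit; sigmoid gives probability

model = EchoMortalityNet()
clips = torch.randn(2, 1, 16, 112, 112)                    # 2 clips of 16 grayscale 112x112 frames
labels = torch.tensor([[0.0], [1.0]])                      # 0 = survived one year, 1 = died
loss = nn.BCEWithLogitsLoss()(model(clips), labels)
loss.backward()                                            # one optimizer step would follow
```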
    “Our goal is to develop computer algorithms to improve patient care,” said Alvaro Ulloa Cerna, Ph.D., author and senior data scientist in the Department of Translational Data Science and Informatics at Geisinger. “In this case, we’re excited that our algorithm was able to help cardiologists improve their predictions about patients, since decisions about treatment and interventions are based on these types of clinical predictions.”

    Story Source:
    Materials provided by Geisinger Health System. Note: Content may be edited for style and length.

  • School closures may not reduce coronavirus deaths as much as expected

    School closures, the loss of public spaces, and having to work remotely due to the coronavirus pandemic have caused major disruptions in people’s social lives all over the world.
    Researchers from City University of Hong Kong, the Chinese Academy of Sciences, and Rensselaer Polytechnic Institute suggest a reduction in fatal coronavirus cases can be achieved without the need for so much social disruption. They discuss the impacts of the closures of various types of facilities in the journal Chaos, from AIP Publishing.
    The researchers ran thousands of simulations of the pandemic response in New York City, varying social distancing behavior at home, in schools, at public facilities, and in the workplace, while accounting for differences in interactions between age groups. The results were striking: school closures contribute relatively little to preventing serious cases of COVID-19. Less surprisingly, social distancing in public places, particularly among elderly populations, is the most important measure.
    “School only represents a small proportion of social contact. … It is more likely that people get exposure to viruses in public facilities, like restaurants and shopping malls,” said Qingpeng Zhang, one of the authors. “Since we focus here on the severe infections and deceased cases, closing schools contributes little if the elderly citizens are not protected in public facilities and other places.”
    Because New York City is so densely populated, the effect of school closures is much smaller than that of general day-to-day interactions in public, since students are generally the least vulnerable to severe infection. Keeping public spaces open, however, allows the virus to spread from less-vulnerable young people to the more-vulnerable older population.
    “Students may bridge the connection between vulnerable people, but these people are already highly exposed in public facilities,” Zhang said. “In other cities where people are much more distanced, the results may change.”
    Though the present findings are specific to New York, replacing the age and location parameters in the model can extend its results to any city. This will help determine the ideal local control measures to contain the pandemic with minimal social disruptions.
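    The paper’s actual model is far more detailed, but the idea of swapping age and location parameters can be illustrated with a toy age-structured simulation. Everything below (population sizes, contact matrices, transmission parameters) is a hypothetical stand-in, not the authors’ New York City model.

```python
# Toy age-structured SIR sketch (not the authors' model): contact matrices for home,
# school, work, and public places can be scaled or replaced to represent another city.
import numpy as np

population = np.array([2.0e6, 5.0e6, 1.5e6])        # hypothetical young / adult / elderly counts

# Hypothetical average daily contacts between age groups at each location type.
contacts = {
    "home":   np.array([[1.0, 1.5, 0.5], [1.5, 2.0, 0.8], [0.5, 0.8, 1.0]]),
    "school": np.array([[4.0, 0.5, 0.1], [0.5, 0.2, 0.0], [0.1, 0.0, 0.0]]),
    "work":   np.array([[0.0, 0.2, 0.0], [0.2, 3.0, 0.3], [0.0, 0.3, 0.1]]),
    "public": np.array([[1.0, 1.0, 0.8], [1.0, 2.0, 1.0], [0.8, 1.0, 1.5]]),
}

def elderly_infections(closure, days=120, beta=0.03, gamma=0.1):
    """closure maps location -> fraction of contacts removed (0 = open, 1 = fully closed)."""
    S, I, R = population.astype(float), np.array([100.0, 100.0, 10.0]), np.zeros(3)
    total_elderly = I[2]
    for _ in range(days):
        C = sum((1.0 - closure.get(loc, 0.0)) * M for loc, M in contacts.items())
        new_inf = beta * (C @ (I / population)) * S      # new infections per age group
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        total_elderly += new_inf[2]
    return total_elderly

print("close schools only      :", round(elderly_infections({"school": 1.0})))
print("distance in public (70%):", round(elderly_infections({"public": 0.7})))
```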
    “These patterns are unique for different cities, and good practice in one city may not translate to another city,” said Zhang.
    The authors emphasized that while these findings have promising implications, the model is still just a model and cannot perfectly capture the intricacies of real-life interactions. Incorporating mobile phone, census, transportation, or other large-scale data in the future could help inform more realistic decisions.
    “Given the age and location mixing patterns, there are so many variables to be considered, so the optimization is challenging,” said Zhang. “Our model is an attempt.”

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Wearable devices can detect COVID-19 symptoms and predict diagnosis, study finds

    Wearable devices can identify COVID-19 cases earlier than traditional diagnostic methods and can help track and improve management of the disease, Mount Sinai researchers report in one of the first studies on the topic. The findings were published in the Journal of Medical Internet Research on January 29.
    The Warrior Watch Study found that subtle changes in a participant’s heart rate variability (HRV), measured by an Apple Watch, could signal the onset of COVID-19 up to seven days before the individual was diagnosed via nasal swab, and could also identify individuals who had symptoms.
    “This study highlights the future of digital health,” says the study’s corresponding author Robert P. Hirten, MD, Assistant Professor of Medicine (Gastroenterology) at the Icahn School of Medicine at Mount Sinai, and member of the Hasso Plattner Institute for Digital Health at Mount Sinai and the Mount Sinai Clinical Intelligence Center (MSCIC). “It shows that we can use these technologies to better address evolving health needs, which will hopefully help us improve the management of disease. Our goal is to operationalize these platforms to improve the health of our patients and this study is a significant step in that direction. Developing a way to identify people who might be sick even before they know they are infected would be a breakthrough in the management of COVID-19.”
    The researchers enrolled several hundred health care workers throughout the Mount Sinai Health System in an ongoing digital study between April and September 2020. The participants wore Apple Watches and answered daily questions through a customized app. Changes in their HRV — a measure of nervous system function detected by the wearable device — were used to identify and predict whether the workers were infected with COVID-19 or had symptoms. Other daily symptoms that were collected included fever or chills, tiredness or weakness, body aches, dry cough, sneezing, runny nose, diarrhea, sore throat, headache, shortness of breath, loss of smell or taste, and itchy eyes.
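    The study’s statistical model is not reproduced here, but the basic signal it relies on can be sketched simply: compute a daily HRV summary from interbeat intervals and flag days that fall well below a person’s own baseline. The metric (SDNN), threshold, and sample data below are assumptions for illustration only.

```python
# Minimal illustration (not the study's model): summarize daily HRV with SDNN, the
# standard deviation of interbeat intervals, and flag unusually low-HRV days.
import statistics

def sdnn(ibi_ms):
    """Standard deviation of interbeat intervals in milliseconds, a simple HRV measure."""
    return statistics.pstdev(ibi_ms)

def flag_low_hrv_days(daily_ibis, z_threshold=-1.0):
    """daily_ibis: {date: [interbeat intervals in ms]}. Returns dates with unusually low HRV."""
    daily_hrv = {day: sdnn(ibis) for day, ibis in daily_ibis.items()}
    values = list(daily_hrv.values())
    mean, sd = statistics.mean(values), statistics.pstdev(values) or 1.0
    return [day for day, v in sorted(daily_hrv.items()) if (v - mean) / sd < z_threshold]

# Hypothetical wearer: variability drops sharply on the last day relative to baseline.
history = {
    "2020-05-01": [820, 860, 790, 845, 810],
    "2020-05-02": [800, 870, 780, 855, 815],
    "2020-05-03": [805, 806, 804, 805, 806],   # much lower variability
}
print(flag_low_hrv_days(history))               # ['2020-05-03']
```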
    Additionally, the researchers found that 7 to 14 days after diagnosis with COVID-19, the HRV pattern began to normalize and was no longer statistically different from the patterns of those who were not infected.
    “This technology allows us not only to track and predict health outcomes, but also to intervene in a timely and remote manner, which is essential during a pandemic that requires people to stay apart,” says the study’s co-author Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute, Co-Founder of the MSCIC, and the Lucy G. Moses Professor of Medical Imaging and Bioengineering at the Icahn School of Medicine at Mount Sinai.
    The Warrior Watch Study draws on the collaborative effort of the Hasso Plattner Institute for Digital Health and the MSCIC, which represents a diverse group of data scientists, engineers, clinical physicians, and researchers across the Mount Sinai Health System who joined together in the spring of 2020 to combat COVID-19. The study will next take a closer look at biometrics including HRV, sleep disruption, and physical activity to better understand which health care workers are at risk of the psychological effects of the pandemic.

  • Robots sense human touch using camera and shadows

    Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch.
    Cornell University researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.
    The group’s paper, “ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification,” was published in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper’s lead author is doctoral student Yuhan Hu.
    The new ShadowSense technology is the latest project from the Human-Robot Collaboration and Companionship Lab, led by the paper’s senior author, Guy Hoffman, associate professor in the Sibley School of Mechanical and Aerospace Engineering.
    The technology originated as part of an effort to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to be able to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.
    Rather than installing a large number of contact sensors — which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin — the team took a counterintuitive approach. In order to gauge touch, they looked to sight.
    “By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” Hu said. “We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”
    The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton, roughly four feet in height, that is mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures — touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all — with an accuracy of 87.5 to 96%, depending on the lighting.
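    The ShadowSense code itself is not included here, but the classification step the paragraph describes can be sketched with a small image classifier over six gesture classes. The network below, its input size, and the class names are illustrative assumptions, not the published model.

```python
# Illustrative sketch only: a tiny CNN over grayscale shadow images with six
# gesture classes. Sizes and labels are assumptions, not the ShadowSense model.
import torch
import torch.nn as nn

GESTURES = ["palm_touch", "punch", "two_hand_touch", "hug", "point", "no_touch"]

classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),   # one-channel shadow image
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, len(GESTURES)),                            # one logit per gesture
)

frame = torch.randn(1, 1, 120, 160)                          # stand-in camera frame from inside the robot
print("predicted gesture:", GESTURES[classifier(frame).argmax(dim=1).item()])
```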
    The robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen.
    By collecting enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot’s task, Hu said.
    The robot doesn’t even have to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices.
    In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy.
    “If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high fidelity images of your appearance,” Hu said. “That gives you a physical filter and protection, and provides psychological comfort.”
    The research was supported by the National Science Foundation’s National Robotic Initiative.

    Story Source:
    Materials provided by Cornell University. Original written by David Nutt. Note: Content may be edited for style and length.

  • Deepfake detectors can be defeated, computer scientists show for the first time

    Systems designed to detect deepfakes — videos that manipulate real-life footage via artificial intelligence — can be deceived, computer scientists showed for the first time at the WACV 2021 conference, which took place online Jan. 5 to 9, 2021.
    Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. Adversarial examples are slightly manipulated inputs that cause artificial intelligence systems, such as machine learning models, to make mistakes. In addition, the team showed that the attack still works after videos are compressed.
    “Our work shows that attacks on deepfake detectors could be a real-world threat,” said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper. “More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary is not aware of the inner workings of the machine learning model used by the detector.”
    In deepfakes, a subject’s face is modified in order to create convincingly realistic footage of events that never actually happened. As a result, typical deepfake detectors focus on the face in videos: first tracking it and then passing on the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors focus on eye movements as one way to make that determination. State-of-the-art deepfake detectors rely on machine learning models for identifying fake videos.
    The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly hampering the credibility of digital media, the researchers point out. “If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, the paper’s other first coauthor and a UC San Diego computer science student.
    Researchers created an adversarial example for every face in a video frame. But while standard operations such as compressing and resizing video usually remove adversarial examples from an image, these examples are built to withstand these processes. The attack algorithm does this by estimating over a set of input transformations how the model ranks images as real or fake. From there, it uses this estimation to transform images in such a way that the adversarial image remains effective even after compression and decompression.
    The modified version of the face is then inserted in all the video frames. The process is then repeated for all frames in the video to create a deepfake video. The attack can also be applied on detectors that operate on entire video frames as opposed to just face crops.
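    The authors’ attack code is not public, but the general “estimate over transformations” idea described above can be sketched as follows: optimize a small perturbation whose effect on a detector survives random resizing, a cheap proxy for compression. The stand-in detector, the transformation, and all parameters below are assumptions, not the method’s actual components.

```python
# Hedged sketch of an expectation-over-transformations style attack on a stand-in
# detector (logit > 0 means "fake"); not the authors' implementation.
import torch
import torch.nn.functional as F

detector = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 1),
)
for p in detector.parameters():
    p.requires_grad_(False)                 # the attack only optimizes the perturbation

def random_transform(x):
    """Cheap proxy for compression/resizing: downscale by a random factor, then upscale."""
    scale = float(torch.empty(1).uniform_(0.5, 0.9))
    small = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=x.shape[-2:], mode="bilinear", align_corners=False)

face = torch.rand(1, 3, 64, 64)             # a fake face crop the attacker wants scored as "real"
delta = torch.zeros_like(face, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.01)

for _ in range(50):
    optimizer.zero_grad()
    # Average the detector's "fake" score over several random transformations and push it down.
    score = sum(detector(random_transform((face + delta).clamp(0, 1))) for _ in range(4)) / 4
    score.mean().backward()
    optimizer.step()
    delta.data.clamp_(-8 / 255, 8 / 255)    # keep the perturbation visually negligible
```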
    The team declined to release their code so it wouldn’t be used by hostile parties.
    High success rate
    Researchers tested their attacks in two scenarios: one where the attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model; and one where attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake. In the first scenario, the attack’s success rate is above 99 percent for uncompressed videos. For compressed videos, it was 84.96 percent. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos. This is the first work which demonstrates successful attacks on state-of-the-art deepfake detectors.
    “To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses,” the researchers write. “We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.”
    To improve detectors, researchers recommend an approach similar to what is known as adversarial training: during training, an adaptive adversary continues to generate new deepfakes that can bypass the current state of the art detector; and the detector continues improving in order to detect the new deepfakes.
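    A minimal sketch of that adversarial-training loop, with toy data and a toy detector standing in for real deepfake detection models, could look like this; nothing below is taken from the paper’s training setup.

```python
# Toy adversarial-training sketch: alternately craft perturbations that fool the
# current detector, then train the detector on them. All components are stand-ins.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))   # toy detector: logit > 0 = fake
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def craft_adversarial(fake_batch, steps=5, eps=8 / 255):
    """Perturb fake frames so the current detector scores them closer to 'real'."""
    delta = torch.zeros_like(fake_batch, requires_grad=True)
    for _ in range(steps):
        score = detector((fake_batch + delta).clamp(0, 1)).sum()
        grad, = torch.autograd.grad(score, delta)
        delta = (delta - (eps / steps) * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (fake_batch + delta).clamp(0, 1).detach()

for _ in range(100):                         # training loop over toy batches
    real = torch.rand(8, 3, 64, 64)
    fake = craft_adversarial(torch.rand(8, 3, 64, 64))
    inputs = torch.cat([real, fake])
    labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])        # 0 = real, 1 = fake
    optimizer.zero_grad()
    loss_fn(detector(inputs), labels).backward()
    optimizer.step()
```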
    Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
    Shehzeen Hussain, Malhar Jere and Farinaz Koushanfar, Department of Electrical and Computer Engineering, UC San Diego; Paarth Neekhara and Julian McAuley, Department of Computer Science and Engineering, UC San Diego

  • MARLIT, artificial intelligence against marine litter

    Floating sea macro-litter is a threat to the conservation of marine ecosystems worldwide. The highest densities of floating litter are found in the great ocean gyres — systems of circular currents that spin and catch litter — but polluting waste is also abundant in coastal waters and semi-enclosed seas such as the Mediterranean.
    MARLIT, an open access web app based on an algorithm designed with deep learning techniques, will enable the detection and quantification of floating plastics in the sea with a reliability of over 80%, according to a study published in the journal Environmental Pollution and carried out by experts from the Faculty of Biology and the Biodiversity Research Institute (IRBio) of the University of Barcelona.
    The methodology is based on the analysis, using artificial intelligence techniques, of more than 3,800 aerial images of the Mediterranean coast of Catalonia, and it will allow researchers to make progress in assessing the presence, density and distribution of plastic pollutants in seas and oceans worldwide. Participants in the study include experts from the Consolidated Research Group on Large Marine Vertebrates of the UB and IRBio, and from the Research Group on Biostatistics and Bioinformatics (GRBIO) of the UB, integrated in the Bioinformatics Barcelona platform (BIB).
    Litter that floats and pollutes the ocean
    Historically, direct observations from boats, planes and other platforms have been the basis of the standard methodology for assessing the impact of floating marine macro-litter (FMML). However, the vast area of the ocean and the volume of data involved make it hard for researchers to advance monitoring studies.
    “Automatic aerial photography techniques combined with analytical algorithms are more efficient protocols for the control and study of this kind of pollutants,” notes Odei Garcia-Garin, first author of the article and member of the CRG on Large Marine Mammals, led by Professor Àlex Aguilar.
    “However,” he continues, “automated remote sensing of these materials is still at an early stage. Several factors in the ocean (waves, wind, clouds, etc.) hinder the automatic detection of floating litter in aerial images of the sea surface. This is why only a few studies have worked on algorithms for this new research context.”
    The experts designed a new algorithm to automate the quantification of floating plastics in the sea from aerial photographs by applying deep learning techniques, a machine learning methodology that uses artificial neural networks capable of learning progressively more abstract representations of the data.
    “The large number of images of the sea surface obtained by drones and planes in marine litter monitoring campaigns (as well as in experimental studies with known floating objects) enabled us to develop and test a new algorithm that reaches 80% precision in the remote sensing of floating marine macro-litter,” notes Garcia-Garin, member of the Department of Evolutionary Biology, Ecology and Environmental Sciences of the UB and IRBio.
    Preservation of the oceans with deep learning techniques
    The new algorithm has been implemented in MARLIT, an open access web app described in the article and available to managers and professionals studying the detection and quantification of floating marine macro-litter with aerial images. In particular, it is a proof of concept based on an R Shiny package, a methodological innovation of great value for speeding up the monitoring of floating marine macro-litter.
    MARLIT enables users to analyze images individually or to divide them into segments according to the user’s settings, to identify the presence of floating litter in each area, and to estimate litter density using the image metadata (height, resolution). In the future, the researchers expect to adapt the app to a remote sensor (for instance, a drone) to automate the remote sensing process.
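    MARLIT itself is built as an R Shiny application, but the segment-classify-and-estimate workflow described above can be sketched schematically as follows. The tiling scheme, the placeholder classifier, and the ground-footprint formula are assumptions for illustration, not MARLIT’s internals.

```python
# Schematic sketch of the workflow only: split an aerial image into tiles, classify
# each tile for floating litter, and convert counts to a density using metadata.
# The classifier and the optics formula below are placeholders, not MARLIT's.
import numpy as np

def ground_area_km2(width_px, height_px, altitude_m, metres_per_px_per_m=1.2e-3):
    """Rough sea-surface footprint of the image, assuming pixel size scales with altitude."""
    gsd = metres_per_px_per_m * altitude_m            # metres per pixel (illustrative)
    return (width_px * gsd) * (height_px * gsd) / 1e6

def litter_density(image, altitude_m, tile=64, classify=lambda t: float(t.mean() > 0.8)):
    """Estimated litter items per square kilometre for one aerial image."""
    h, w = image.shape[:2]
    detections = 0.0
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            detections += classify(image[y:y + tile, x:x + tile])
    return detections / ground_area_km2(w, h, altitude_m)

aerial = np.random.rand(512, 768)                     # stand-in for a grayscale aerial image
print(litter_density(aerial, altitude_m=80))
```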
    At the European level, the EU Marine Strategy Framework Directive calls for the application of FMML monitoring techniques as part of the continuous assessment of the environmental state of the marine environment. “Therefore, the automation of monitoring processes and the use of apps such as MARLIT would ease the member states’ fulfilment of the directive,” conclude the authors of the study.

  • Severe undercounting of COVID-19 cases in U.S., other countries estimated via model

    A new machine-learning framework uses reported test results and death rates to calculate estimates of the actual number of current COVID-19 infections within all 50 U.S. states and 50 countries. Jungsik Noh and Gaudenz Danuser of the University of Texas Southwestern Medical Center present these findings in the open-access journal PLOS ONE on February 8, 2021.
    During the ongoing pandemic, U.S. states and many countries have reported daily counts of COVID-19 infections and deaths confirmed by testing. However, many infections have gone undetected, resulting in under-counting of the total number of people currently infected at any given point in time — an important metric to guide public health efforts.
    Now, Noh and Danuser have developed a computational model that uses machine-learning strategies to estimate the actual daily number of current infections for all 50 U.S. states and the 50 most-infected countries. To make the calculations, the model draws on previously published pandemic parameters and publicly available daily data on confirmed cases and deaths. Visualizations of these daily estimates are freely available online.
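    The published framework is considerably more sophisticated, but the core intuition, that reported deaths together with an assumed infection fatality rate and reporting delay imply far more infections than confirmed tests capture, can be shown with a back-of-the-envelope calculation. All numbers below are hypothetical.

```python
# Back-of-the-envelope illustration, not the authors' model: deaths divided by an
# assumed infection fatality rate imply how many infections occurred roughly one
# death-delay earlier, which can be compared with confirmed case counts.
IFR = 0.005                  # assumed infection fatality rate (0.5%)
DEATH_DELAY_DAYS = 21        # assumed lag from infection to death

daily_deaths = [40, 55, 60, 80, 95, 110, 120]                       # hypothetical
daily_confirmed = [3000, 3500, 3600, 4200, 4800, 5200, 5600]        # hypothetical

implied_infections = sum(d / IFR for d in daily_deaths)             # infections ~21 days earlier
confirmed = sum(daily_confirmed)
print(f"implied infections ~{implied_infections:,.0f} vs confirmed {confirmed:,} "
      f"({implied_infections / confirmed:.1f}x, lagged by {DEATH_DELAY_DAYS} days)")
```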
    The model’s estimates indicate severe undercounting of cases across the U.S. and worldwide. The cumulative number of actual cases in 9 out of 50 countries is estimated to be at least five times higher than confirmed cases. Within the U.S., estimates of the cumulative number of actual cases within states were in line with the results of an antibody testing study conducted in 46 states.
    For some countries, such as the U.S., Belgium, and the U.K., estimates indicate that more than 20 percent of the total population has experienced infection. As of January 31, 2021, some U.S. states — including Pennsylvania, Arizona, and Florida — have currently active cases totaling more than 5 percent of the state’s entire population. In Washington, the active cases were estimated to be one percent of the population that day.
    The model continues to estimate current COVID-19 case counts within communities, which could help inform contact tracing and other public health efforts going forward.
    The authors add: “Given that the confirmed cases only capture the tip of the iceberg in the middle of the pandemic, the estimated sizes of current infections in this study provide crucial information to determine the regional severity of COVID-19 that can be misguided by the confirmed cases.”

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • AI researchers ask: What's going on inside the black box?

    Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo and collaborator Matt Ploenzke reported a way to train machines to predict the function of DNA sequences. They used “neural nets,” a type of artificial intelligence (AI) typically used to classify images. Teaching the neural net to predict the function of short stretches of DNA allowed it to work up to deciphering larger patterns. The researchers hope to analyze more complex DNA sequences that regulate gene activity critical to development and disease.
    Machine-learning researchers can train a brain-like “neural net” computer to recognize objects, such as cats or airplanes, by showing it many images of each. Testing the success of training requires showing the machine a new picture of a cat or an airplane and seeing if it classifies it correctly. But, when researchers apply this technology to analyzing DNA patterns, they have a problem. Humans can’t recognize the patterns, so they may not be able to tell if the computer identifies the right thing. Neural nets learn and make decisions independently of their human programmers. Researchers refer to this hidden process as a “black box.” It is hard to trust the machine’s outputs if we don’t know what is happening in the box.
    Koo and his team fed DNA (genomic) sequences into a specific kind of neural network called a convolutional neural network (CNN), which resembles how animal brains process images. Koo says:
    “It can be quite easy to interpret these neural networks because they’ll just point to, let’s say, whiskers of a cat. And so that’s why it’s a cat versus an airplane. In genomics, it’s not so straightforward because genomic sequences aren’t in a form where humans really understand any of the patterns that these neural networks point to.”
    Koo’s research, reported in the journal Nature Machine Intelligence, introduced a new method to teach important DNA patterns to one layer of his CNN. This allowed his neural network to build on the data to identify more complex patterns. Koo’s discovery makes it possible to peek inside the black box and identify some key features that lead to the computer’s decision-making process.
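    The published models are more elaborate, but the basic setup, one-hot encoded DNA fed to a convolutional layer whose filters act as motif detectors, can be sketched briefly. The encoding, filter sizes, and output head below are illustrative assumptions, not Koo’s architecture.

```python
# Illustrative sketch only: one-hot encode a DNA sequence and pass it through a
# first convolutional layer whose learned filters can be inspected as motif detectors.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (4, length) tensor, one channel per base."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).T.float()

first_layer = nn.Conv1d(in_channels=4, out_channels=32, kernel_size=19, padding=9)
head = nn.Sequential(nn.ReLU(), nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(32, 1))

x = one_hot("ACGTGCATTGACCGTAGCATGCATGACGTTAGC").unsqueeze(0)   # batch of one sequence
print(head(first_layer(x)).shape)                               # torch.Size([1, 1]) activity score

# After training, first_layer.weight (32 filters of shape 4 x 19) can be visualized
# as position weight matrices to see which sequence motifs each filter has learned.
```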
    But Koo has a larger purpose in mind for the field of artificial intelligence. There are two ways to improve a neural net: interpretability and robustness. Interpretability refers to the ability of humans to decipher why machines give a certain prediction. The ability to produce an answer even with mistakes in the data is called robustness. Usually, researchers focus on one or the other. Koo says:
    “What my research is trying to do is bridge these two together because I don’t think they’re separate entities. I think that we get better interpretability if our models are more robust.”
    Koo hopes that if a machine can find robust and interpretable DNA patterns related to gene regulation, it will help geneticists understand how mutations affect cancer and other diseases.

    Story Source:
    Materials provided by Cold Spring Harbor Laboratory. Original written by Jasmine Lee. Note: Content may be edited for style and length.