More stories

  • Contagion model predicts flooding in urban areas

    Inspired by the same modeling and mathematical laws used to predict the spread of pandemics, researchers at Texas A&M University have created a model to accurately forecast the spread and recession of floodwaters in urban road networks. The new approach reduces a complex problem to a simple yet powerful mathematical model.
    “We were inspired by the fact that the spread of epidemics and pandemics in communities has been studied by people in health sciences and epidemiology and other fields, and they have identified some principles and rules that govern the spread process in complex social networks,” said Dr. Ali Mostafavi, associate professor in the Zachry Department of Civil and Environmental Engineering. “So we ask ourselves, are these spreading processes the same for the spread of flooding in cities? We tested that, and surprisingly, we found that the answer is yes.”
    The findings of this study were recently published in Scientific Reports, a Nature Research journal.
    The contagion model, Susceptible-Exposed-Infected-Recovered (SEIR), is used to mathematically model the spread of infectious diseases. In relation to flooding, Mostafavi and his team integrated the SEIR model with the network spread process in which the probability of flooding of a road segment depends on the degree to which the nearby road segments are flooded.
    In the context of flooding, susceptible is a road that can be flooded because it is in a flood plain; exposed is a road that has flooding due to rainwater or overflow from a nearby channel; infected is a road that is flooded and cannot be used; and recovered is a road where the floodwater has receded.
    The research team verified the model’s use with high-resolution historical data of road flooding in Harris County during Hurricane Harvey in 2017. The results show that the model can monitor and predict the evolution of flooded roads over time.
    “The power of this approach is it offers a simple and powerful mathematical approach and provides great potential to support emergency managers, public officials, residents, first responders and other decision makers for flood forecast in road networks,” Mostafavi said.
    The proposed model can achieve decent precision and recall for the spatial spread of the flooded roads.
    “If you look at the flood monitoring system of Harris County, it can show you if a channel is overflowing now, but they’re not able to predict anything about the next four hours or next eight hours. Also, the existing flood monitoring systems provide limited information about the propagation of flooding in road networks and the impacts on urban mobility. But our models, and this specific model for the road networks, is robust at predicting the future spread of flooding,” he said. “In addition to flood prediction in urban networks, the findings of this study provide very important insights about the universality of the network spread processes across various social, natural, physical and engineered systems; this is significant for better modeling and managing cities, as complex systems.”
    The only limitation to this flood prediction model is that it cannot identify where the initial flooding will begin, but Mostafavi said there are other mechanisms in place such as sensors on flood gauges that can address this.
    “As soon as flooding is reported in these areas, we can use our model, which is very simple compared to hydraulic and hydrologic models, to predict the flood propagation in future hours. The forecast of road inundations and mobility disruptions is critical to inform residents to avoid high-risk roadways and to enable emergency managers and responders to optimize relief and rescue in impacted areas based on predicted information about road access and mobility. This forecast could be the difference between life and death during crisis response,” he said.
    Civil engineering doctoral student and graduate research assistant Chao Fan led the analysis and modeling of the Hurricane Harvey data, along with Xiangqi (Alex) Jiang, a graduate student in computer science, who works in Mostafavi’s UrbanResilience.AI Lab.
    “By doing this research, I realize the power of mathematical models in addressing engineering problems and real-world challenges. This research expands my research capabilities and will have a long-term impact on my career,” Fan said. “In addition, I am also very excited that my research can contribute to reducing the negative impacts of natural disasters on infrastructure services.”

    Story Source:
    Materials provided by Texas A&M University. Original written by Alyson Chapman. Note: Content may be edited for style and length.

  • Beam me up: Researchers use 'behavioral teleporting' to study social interactions

    Teleporting is a science fiction trope often associated with Star Trek. But a different kind of teleporting is being explored at the NYU Tandon School of Engineering, one that could let researchers investigate the very basis of social behavior, study interactions between invasive and native species to preserve natural ecosystems, explore predator/prey relationships without posing a risk to the welfare of the animals, and even fine-tune human/robot interfaces.
    The team, led by Maurizio Porfiri, Institute Professor at NYU Tandon, devised a novel approach to getting physically separated fish to interact with each other, leading to insights about what kinds of cues influence social behavior.
    The innovative system, called “behavioral teleporting” — the transfer of the complete inventory of behaviors and actions (ethogram) of a live zebrafish onto a remotely located robotic replica — allowed the investigators to independently manipulate multiple factors underpinning social interactions in real-time. The research, “Behavioral teleporting of individual ethograms onto inanimate robots: experiments on social interactions in live zebrafish,” appears in the Cell Press journal iScience.
    The team, including Mert Karakaya, a Ph.D. candidate in the Department of Mechanical and Aerospace Engineering at NYU Tandon, and Simone Macrì of the Centre for Behavioral Sciences and Mental Health, Istituto Superiore di Sanità, Rome, devised a setup consisting of two separate tanks, each containing one fish and one robotic replica. Within each tank, the live fish of the pair swam with the zebrafish replica matching the morphology and locomotory pattern of the live fish located in the other tank.
    An automated tracking system scored each of the live subjects’ locomotory patterns, which were, in turn, used to control the robotic replica swimming in the other tank via an external manipulator. The system thereby transferred the complete ethogram of each fish across tanks within a fraction of a second, establishing a complex robotics-mediated interaction between two remotely located live animals. By independently controlling the morphology of these robots, the team explored the link between appearance and movements in social behavior.
    The investigators found that the replica teleported the fish motion in almost all trials (85% of the total experimental time), with a 95% accuracy at a maximum time lag of less than two-tenths of a second. The high accuracy in the replication of fish trajectory was confirmed by equivalent analysis on speed, turn rate, and acceleration.
    Porfiri explained that the behavioral teleporting system avoids the limits of typical modeling using robots.
    “Since existing approaches involve the use of a mathematical representation of social behavior for controlling the movements of the replica, they often lead to unnatural behavioral responses of live animals,” he said. “But because behavioral teleporting ‘copy/pastes’ the behavior of a live fish onto robotic proxies, it confers a high degree of precision with respect to such factors as position, speed, turn rate, and acceleration.”
    Porfiri’s previous research proving robots are viable as behavior models for zebrafish showed that schools of zebrafish could be made to follow the lead of their robotic counterparts.
    “In humans, social behavior unfolds in actions, habits, and practices that ultimately define our individual life and our society,” added Macrì. “These depend on complex processes, mediated by individual traits — baldness, height, voice pitch, and outfit, for example — and behavioral feedback, vectors that are often difficult to isolate. This new approach demonstrates that we can isolate influences on the quality of social interaction and determine which visual features really matter.”
    The research included experiments to understand the asymmetric relationship between large and small fish and identify leader/follower roles, in which a large fish swam with a small replica that mirrored the behavior of the small fish positioned in the other tank and vice-versa.
    Karakaya said the team was surprised to find that the smaller — not larger — fish “led” the interactions.
    “There are no strongly conclusive results on why that could be, but one reason might be due to the ‘curious’ nature of the smaller individuals to explore a novel space,” he said. “In known environments, large fish tend to lead; however, in new environments larger and older animals can be cautious in their approach, whereas the smaller and younger ones could be ‘bolder.'”
    The method also led to the discovery that interaction between fish was not determined by locomotor patterns alone, but also by appearance.
    “It is interesting to see that, as is the case with our own species, there is a relationship between appearance and social interaction,” he added.
    Karakaya added that this could serve as an important tool for human interactions in the near future, whereby, through the closed-loop teleporting, people could use robots as proxies of themselves.
    “One example would be the colonies on Mars, where experts from Earth could use humanoid robots as an extension of themselves to interact with the environment and people there. This would provide easier and more accurate medical examination, improve human contact, and reduce isolation. Detailed studies on the behavioral and psychological effects of these proxies must be completed to better understand how these techniques can be implemented into daily life.”
    This work was supported by the National Science Foundation, the National Institute on Drug Abuse, and the Office of Behavioral and Social Sciences Research.

  • Robo-teammate can detect, share 3D changes in real-time

    Something is different, and you can’t quite put your finger on it. But your robot can.
    Even small changes in your surroundings could indicate danger. Imagine a robot could detect those changes, and a warning could immediately alert you through a display in your eyeglasses. That is what U.S. Army scientists are developing with sensors, robots, real-time change detection and augmented reality wearables.
    Army researchers demonstrated, in a real-world environment, the first human-robot team in which the robot detects physical changes in 3D and shares that information with a human in real time through augmented reality; the human can then evaluate the information received and decide on follow-on action.
    “This could let robots inform their Soldier teammates of changes in the environment that might be overlooked by or not perceptible to the Soldier, giving them increased situational awareness and offset from potential adversaries,” said Dr. Christopher Reardon, a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This could detect anything from camouflaged enemy soldiers to IEDs.”
    Part of the lab’s effort in contextual understanding through the Artificial Intelligence for Mobility and Maneuver Essential Research Program, this research explores how to provide contextual awareness to autonomous robotic ground platforms in maneuver and mobility scenarios. Researchers also participate with international coalition partners in the Technical Cooperation Program’s Contested Urban Environment Strategic Challenge, or TTCP CUESC, events to test and evaluate human-robot teaming technologies.
    Most academic research in the use of mixed reality interfaces for human-robot teaming does not enter real-world environments, but rather uses external instrumentation in a lab to manage the calculations necessary to share information between a human and robot. Likewise, most engineering efforts to provide humans with mixed-reality interfaces do not examine teaming with autonomous mobile robots, Reardon said.
    Reardon and his colleagues from the Army and the University of California, San Diego, published their research, “Enabling Situational Awareness via Augmented Reality of Autonomous Robot-Based Environmental Change Detection,” at the 12th International Conference on Virtual, Augmented, and Mixed Reality, part of the International Conference on Human-Computer Interaction.
    The research paired a small autonomous mobile ground robot, equipped with laser ranging sensors, known as LIDAR, to build a representation of the environment, with a human teammate wearing augmented reality glasses. As the robot patrolled the environment, it compared its current and previous readings to detect changes in the environment. Those changes were then instantly displayed in the human’s eyewear to determine whether the human could interpret the changes in the environment.
    In studying communication between the robot and human team, the researchers tested LIDAR sensors of different resolutions on the robot to collect measurements of the environment and detect changes. When those changes were shared with the human through augmented reality, the researchers found that human teammates could interpret changes that even the lower-resolution LIDARs detected. This indicates that — depending on the size of the changes expected in the field — lighter, smaller and less expensive sensors could perform just as well, and run faster in the process.
    This capability has the potential to be incorporated into future Soldier mixed-reality interfaces such as the Army’s Integrated Visual Augmentation System goggles, or IVAS.
    “Incorporating mixed reality into Soldiers’ eye protection is inevitable,” Reardon said. “This research aims to fill gaps by incorporating useful information from robot teammates into the Soldier-worn visual augmentation ecosystem, while simultaneously making the robots better teammates to the Soldier.”
    Future studies will continue to explore how to strengthen the teaming between humans and autonomous agents by allowing the human to interact with the detected changes, which will provide more information to the robot about the context of the change — for example, changes made by adversaries versus natural environmental changes or false positives, Reardon said. This will improve the autonomous context understanding and reasoning capabilities of the robotic platform, such as by enabling the robot to learn and predict what types of changes constitute a threat. In turn, providing this understanding to autonomy will help researchers learn how to improve teaming of Soldiers with autonomous platforms.

  • The mathematical magic of bending grids

    How can you turn something flat into something three-dimensional? In architecture and design this question often plays an important role. A team of mathematicians from TU Wien (Vienna) has now presented a technique that solves this problem in an amazingly simple way: You choose any curved surface and from its shape you can calculate a flat grid of straight bars that can be folded out to the desired curved structure with a single movement. The result is a stable form that can even carry loads due to its mechanical tension.
    The step into the third dimension
    Suppose you screw ordinary straight bars together at right angles to form a grid, so that a completely regular pattern of small squares is created. Such a grid can be distorted: all angles of the grid change simultaneously, parallel bars remain parallel, and the squares become parallelograms. But this does not change the fact that all bars are in the same plane. The structure is still flat.
    The crucial question now is: What happens if the bars are not parallel at the beginning, but are joined together at different angles? “Such a grid can no longer be distorted within the plane,” explains Przemyslaw Musialski. “When you open it up, the bars have to bend. They move out of the plane into the third dimension and form a curved shape.”
    At the Center for Geometry and Computational Design (GCD) (Institute for Discrete Mathematics and Geometry) at TU Wien, Musialski and his team developed a method that can be used to calculate what the flat, two-dimensional grid must look like in order to produce exactly the desired three-dimensional shape when it is unfolded. “Our method is based on findings in differential geometry; it is relatively simple and does not require computationally intensive simulations,” says Stefan Pillwein, first author of the current publication, which was presented at the SIGGRAPH conference and published in the journal ACM Transactions on Graphics.
    Experiments with the laser scanner
    The team then put the mathematical methods into practice: the calculated grids were made of wood, screwed together and unfolded. The resulting 3D shapes were then measured with a laser scanner, which confirmed that they corresponded closely to the calculated forms.
    The team even produced a mini pavilion roof measuring 3.1 x 2.1 x 0.9 metres. “We wanted to know whether this technology would also work on a large scale — and it worked out perfectly,” says Stefan Pillwein.
    “Transforming a simple 2D grid into a 3D form with a single opening movement not only looks amazing, it has many technical advantages,” says Przemyslaw Musialski. “Such grids are simple and inexpensive to manufacture, they are easy to transport and set up. Our method makes it possible to create even sophisticated shapes, not just simple domes.”
    The structures also have very good static properties: “The curved elements are under tension and have a natural structural stability — in architecture this is called active bending,” explains Musialski. Very large distances can be spanned with very thin rods. This is ideal for architectural applications.

    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

  • Predicting computational power of early quantum computers

    Quantum physicists at the University of Sussex have created an algorithm that speeds up the rate of calculations in the early quantum computers which are currently being developed. They have created a new way to route the ions — or charged atoms — around the quantum computer to boost the efficiency of the calculations.
    The Sussex team have shown how calculations in such a quantum computer can be done most efficiently, by using their new ‘routing algorithm’. Their paper “Efficient Qubit Routing for a Globally Connected Trapped Ion Quantum Computer” is published in the journal Advanced Quantum Technologies.
    The team working on this project was led by Professor Winfried Hensinger and included Mark Webber, Dr Steven Herbert and Dr Sebastian Weidt. The scientists have created a new algorithm which regulates traffic within the quantum computer just like managing traffic in a busy city. In the trapped ion design the qubits can be physically transported over long distances, so they can easily interact with other qubits. Their new algorithm means that data can flow through the quantum computer without any ‘traffic jams’. This in turn gives rise to a more powerful quantum computer.
    Quantum computers are expected to be able to solve problems that are too complex for classical computers. Quantum computers use quantum bits (qubits) to process information in a new and powerful way. The particular quantum computer architecture the team analysed first is a ‘trapped ion’ quantum computer, consisting of silicon microchips with individual charged atoms, or ions, levitating above the surface of the chip. These ions are used to store data, where each ion holds one quantum bit of information. Executing calculations on such a quantum computer involves moving around ions, similar to playing a game of Pacman, and the faster and more efficiently the data (the ions) can be moved around, the more powerful the quantum computer will be.
    In the global race to build a large scale quantum computer there are two leading methods, ‘superconducting’ devices which groups such as IBM and Google focus on, and ‘trapped ion’ devices which are used by the University of Sussex’s Ion Quantum Technology group, and the newly emerged company Universal Quantum, among others.
    Superconducting quantum computers have stationary qubits which are typically only able to interact with qubits that are immediately next to each other. Calculations involving distant qubits are done by communicating through a chain of adjacent qubits, a process similar to the telephone game (also referred to as ‘Chinese Whispers’), where information is whispered from one person to another along a line of people. In the same way as in the telephone game, the information tends to get more corrupted the longer the chain is. Indeed, the researchers found that this process will limit the computational power of superconducting quantum computers.
    In contrast, by deploying their new routing algorithm for their trapped ion architecture, the Sussex scientists have discovered that their quantum computing approach can achieve an impressive level of computational power. ‘Quantum Volume’ is a new benchmark which is being used to compare the computational power of near term quantum computers. They were able to use Quantum Volume to compare their architecture against a model for superconducting qubits, where they assumed similar levels of errors for both approaches. They found that the trapped-ion approach performed consistently better than the superconducting qubit approach, because their routing algorithm essentially allows qubits to directly interact with many more qubits, which in turn gives rise to a higher expected computational power.
    Mark Webber, a doctoral researcher in the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “We can now predict the computational power of the quantum computers we are constructing. Our study indicates a fundamental advantage for trapped ion devices, and the new routing algorithm will allow us to maximize the performance of early quantum computers.”
    Professor Hensinger, director of the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “Indeed, this work is yet another stepping stone towards building practical quantum computers that can solve real world problems.”
    Professor Winfried Hensinger and Dr Sebastian Weidt have recently launched their spin-out company Universal Quantum which aims to build the world’s first large scale quantum computer. It has attracted backing from some of the world’s most powerful tech investors. The team was the first to publish a blue-print for how to build a large scale trapped ion quantum computer in 2017.

    Story Source:
    Materials provided by University of Sussex. Original written by Anna Ford. Note: Content may be edited for style and length.

  • Machine learning peeks into nano-aquariums

    In the nanoworld, tiny particles such as proteins appear to dance as they transform and assemble to perform various tasks while suspended in a liquid. Recently developed methods have made it possible to watch and record these otherwise-elusive tiny motions, and researchers now take a step forward by developing a machine learning workflow to streamline the process.
    The new study, led by Qian Chen, a professor of materials science and engineering at the University of Illinois, Urbana-Champaign, builds upon her past work with liquid-phase electron microscopy and is published in the journal ACS Central Science.
    Being able to see — and record — the motions of nanoparticles is essential for understanding a variety of engineering challenges. Liquid-phase electron microscopy, which allows researchers to watch nanoparticles interact inside tiny aquariumlike sample containers, is useful for research in medicine, energy and environmental sustainability and in fabrication of metamaterials, to name a few. However, it is difficult to interpret the dataset, the researchers said. The video files produced are large, filled with temporal and spatial information, and are noisy due to background signals — in other words, they require a lot of tedious image processing and analysis.
    “Developing a method even to see these particles was a huge challenge,” Chen said. “Figuring out how to efficiently get the useful data pieces from a sea of outliers and noise has become the new challenge.”
    To confront this problem, the team developed a machine learning workflow based on an artificial neural network that mimics, in part, the learning capacity of the human brain. The program builds on an existing neural network, known as U-Net, that does not require handcrafted features or predetermined input and has yielded significant breakthroughs in identifying irregular cellular features using other types of microscopy, the study reports.
    “Our new program processed information for three types of nanoscale dynamics including motion, chemical reaction and self-assembly of nanoparticles,” said lead author and graduate student Lehan Yao. “These represent the scenarios and challenges we have encountered in the analysis of liquid-phase electron microscopy videos.”
    The researchers collected measurements from approximately 300,000 pairs of interacting nanoparticles, the study reports.
    As found in past studies by Chen’s group, contrast continues to be a problem while imaging certain types of nanoparticles. In their experimental work, the team used particles made out of gold, which is easy to see with an electron microscope. However, particles with lower elemental or molecular weights like proteins, plastic polymers and other organic nanoparticles show very low contrast when viewed under an electron beam, Chen said.
    “Biological applications, like the search for vaccines and drugs, underscore the urgency in our push to have our technique available for imaging biomolecules,” she said. “There are critical nanoscale interactions between viruses and our immune systems, between the drugs and the immune system, and between the drug and the virus itself that must be understood. The fact that our new processing method allows us to extract information from samples as demonstrated here gets us ready for the next step of application and model systems.”
    The team has made the source code for the machine learning program used in this study publicly available through the supplemental information section of the new paper. “We feel that making the code available to other researchers can benefit the whole nanomaterials research community,” Chen said.
    See liquid-phase electron microscopy combined with machine learning in action: https://www.youtube.com/watch?v=0NESPF8Rwsc

  • Electronic alert reduces excessive prescribing of short-acting asthma relievers

    An automatic, electronic alert on general practitioners’ (GPs) computer screens can help to prevent excessive prescribing of short-acting asthma reliever medication, according to research presented at the ‘virtual’ European Respiratory Society International Congress.
    The alert pops up when GPs open the medical records for a patient who has been issued with three prescriptions for short-acting reliever inhalers, such as salbutamol, within a three-month period. It suggests the patient should have an asthma review to assess symptoms and improve asthma control. Short-acting beta2-agonists (SABAs), usually described as blue inhalers, afford short-term relief of asthma symptoms by expanding the airways, but do not deal with the underlying inflammatory cause.
    “Excessive use of reliever inhalers such as salbutamol is an indicator of poorly controlled asthma and a risk factor for asthma attacks. It has also been implicated in asthma-related deaths. Yet, despite national and international asthma guidelines, excessive prescribing of short-acting beta2-agonists persists,” said Dr Shauna McKibben, an honorary research fellow at the Institute of Population Health Sciences, Queen Mary University of London (QMUL), UK, and clinical nurse specialist in asthma and allergy at Imperial College Healthcare NHS Trust, London, who led the research. “This research aimed to identify and target excessive SABA prescribing using an electronic alert in GPs’ computer systems to identify at-risk patients, change prescribing behaviour and improve asthma management.”
    The study of 18,244 asthma patients in 132 general practices in north-east London found a 6% reduction in the excessive prescribing of reliever inhalers in the 12 months after the alert first appeared on patients’ records. In addition, in the three months after the alert, asthma reviews increased by 12%; within six months, repeat prescribing of SABAs fell by 5% and asthma exacerbations requiring treatment with oral steroids fell by 8%.
    The alert to identify excessive SABA prescribing was introduced in 2015 on GPs’ computer systems that used EMIS clinical software. At the time of the research EMIS was used by almost all general practices in north-east London, and 56% of English practices used it by 2017.
    Dr McKibben analysed data on SABA prescribing for patients in all practices in the north-east London boroughs of City and Hackney, Tower Hamlets and Newham between 2015 and 2016. She compared these with excessive SABA prescribing between 2013 and 2014, before the alert was introduced.
    She said: “The most important finding is the small but potentially clinically significant reduction in SABA prescribing in the 12 months after the alert. This, combined with the other results, suggests that the alert prompts a review of patients who may have poor asthma control. An asthma review facilitates the assessment of SABA use and is an important opportunity to improve asthma management.”
    Dr McKibben also asked a sample of GPs, receptionists and nurses in general practice about their thoughts on the alert.
    “The alert was viewed as a catalyst for asthma review; however, the provision of timely review was challenging and response to the alert was dependent on local practice resources and clinical priorities,” she said.
    A limitation of the research was that the alert assumed that only one SABA inhaler was issued per prescription, when often two at a time may be issued. “Therefore, excessive SABA prescribing and the subsequent reduction in prescribing following the alert may be underestimated,” said Dr McKibben.
    She continued: “Excessive SABA use is only one indicator for poor asthma control but the risks are not well understood by patients and are often overlooked by healthcare professionals. Further research into the development and robust evaluation of tools to support primary care staff in the management of people with asthma is essential to improve asthma control and reduce hospital admissions.”
    The study’s findings are now being used to support and inform the REAL-HEALTH Respiratory initiative, a Barts Charity funded three-year programme with the clinical effectiveness group at QMUL. The initiative provides general practices with EMIS IT tools to support the identification of patients with high-risk asthma. This includes an electronic alert for excessive SABA prescribing and an asthma prescribing tool to identify patients with poor asthma control who may be at risk of hospital admission.
    Daiana Stolz, who was not involved in the research, is the European Respiratory Society Education Council Chair and Professor of Respiratory Medicine and a leading physician at the University Hospital Basel, Switzerland. She said: “This study shows how a relatively simple intervention, an electronic alert popping up on GPs’ computers when they open a patient’s records, can prompt a review of asthma medication and can lead to a reduction in excessive prescribing of short-acting asthma relievers and better asthma control. However, the fact that general practices often struggled to provide a timely asthma review in a period before the COVID-19 pandemic suggests that far more resources need to be made available to primary care, particularly in this pandemic period.”

  • 'Selfies' could be used to detect heart disease

    Sending a “selfie” to the doctor could be a cheap and simple way of detecting heart disease, according to the authors of a new study published today (Friday) in the European Heart Journal.
    The study is the first to show that it’s possible to use a deep learning computer algorithm to detect coronary artery disease (CAD) by analysing four photographs of a person’s face.
    Although the algorithm needs to be developed further and tested in larger groups of people from different ethnic backgrounds, the researchers say it has the potential to be used as a screening tool that could identify possible heart disease in people in the general population or in high-risk groups, who could be referred for further clinical investigations.
    “To our knowledge, this is the first work demonstrating that artificial intelligence can be used to analyse faces to detect heart disease. It is a step towards the development of a deep learning-based tool that could be used to assess the risk of heart disease, either in outpatient clinics or by means of patients taking ‘selfies’ to perform their own screening. This could guide further diagnostic testing or a clinical visit,” said Professor Zhe Zheng, who led the research and is vice director of the National Center for Cardiovascular Diseases and vice president of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China.
    He continued: “Our ultimate goal is to develop a self-reported application for high risk communities to assess heart disease risk in advance of visiting a clinic. This could be a cheap, simple and effective way of identifying patients who need further investigation. However, the algorithm requires further refinement and external validation in other populations and ethnicities.”
    It is known already that certain facial features are associated with an increased risk of heart disease. These include thinning or grey hair, wrinkles, ear lobe crease, xanthelasmata (small, yellow deposits of cholesterol underneath the skin, usually around the eyelids) and arcus corneae (fat and cholesterol deposits that appear as a hazy white, grey or blue opaque ring in the outer edges of the cornea). However, they are difficult for humans to use successfully to predict and quantify heart disease risk.
    Prof. Zheng, Professor Xiang-Yang Ji, who is director of the Brain and Cognition Institute in the Department of Automation at Tsinghua University, Beijing, and other colleagues enrolled 5,796 patients from eight hospitals in China to the study between July 2017 and March 2019. The patients were undergoing imaging procedures to investigate their blood vessels, such as coronary angiography or coronary computed tomography angiography (CCTA). They were divided randomly into training (5,216 patients, 90%) or validation (580, 10%) groups.
    Trained research nurses took four facial photos with digital cameras: one frontal, two profiles and one view of the top of the head. They also interviewed the patients to collect data on socioeconomic status, lifestyle and medical history. Radiologists reviewed the patients’ angiograms and assessed the degree of heart disease depending on how many blood vessels were narrowed by 50% or more (≥ 50% stenosis), and their location. This information was used to create, train and validate the deep learning algorithm.
    The researchers then tested the algorithm on a further 1,013 patients from nine hospitals in China, enrolled between April 2019 and July 2019. The majority of patients in all the groups were of Han Chinese ethnicity.
    They found that the algorithm out-performed existing methods of predicting heart disease risk (the Diamond-Forrester model and the CAD consortium clinical score). In the validation group of patients, the algorithm correctly detected heart disease in 80% of cases (the true positive rate or ‘sensitivity’) and correctly detected that heart disease was not present in 61% of cases (the true negative rate or ‘specificity’). In the test group, the sensitivity was 80% and specificity was 54%.
    Prof. Ji said: “The algorithm had a moderate performance, and additional clinical information did not improve its performance, which means it could be used easily to predict potential heart disease based on facial photos alone. The cheek, forehead and nose contributed more information to the algorithm than other facial areas. However, we need to improve the specificity as a false positive rate of as much as 46% may cause anxiety and inconvenience to patients, as well as potentially overloading clinics with patients requiring unnecessary tests.”
    As well as requiring testing in other ethnic groups, the study's limitations include the fact that only one centre in the test group differed from the centres that provided patients for developing the algorithm, which may further limit its generalisability to other populations.
    In an accompanying editorial, Charalambos Antoniades, Professor of Cardiovascular Medicine at the University of Oxford, UK, and Dr Christos Kotanidis, a DPhil student working under Prof. Antoniades at Oxford, write: “Overall, the study by Lin et al. highlights a new potential in medical diagnostics … The robustness of the approach of Lin et al. lies in the fact that their deep learning algorithm requires simply a facial image as the sole data input, rendering it highly and easily applicable at large scale.”
    They continue: “Using selfies as a screening method can enable a simple yet efficient way to filter the general population towards more comprehensive clinical evaluation. Such an approach can also be highly relevant to regions of the globe that are underfunded and have weak screening programmes for cardiovascular disease. A selection process that can be done as easily as taking a selfie will allow for a stratified flow of people that are fed into healthcare systems for first-line diagnostic testing with CCTA. Indeed, the ‘high risk’ individuals could have a CCTA, which would allow reliable risk stratification with the use of the new, AI-powered methodologies for CCTA image analysis.”
    They highlight some of the limitations that Prof. Zheng and Prof. Ji also include in their paper. These include the low specificity of the test, that the test needs to be improved and validated in larger populations, and that it raises ethical questions about “misuse of information for discriminatory purposes. Unwanted dissemination of sensitive health record data, that can easily be extracted from a facial photo, renders technologies such as that discussed here a significant threat to personal data protection, potentially affecting insurance options. Such fears have already been expressed over misuse of genetic data, and should be extensively revisited regarding the use of AI in medicine.”
    The authors of the research paper agree on this point. Prof. Zheng said: “Ethical issues in developing and applying these novel technologies are of key importance. We believe that future research on clinical tools should pay attention to the privacy, insurance and other social implications to ensure that the tool is used only for medical purposes.”
    Prof. Antoniades and Dr. Kotanidis also write in their editorial that defining CAD as ≥ 50% stenosis in one major coronary artery “may be a simplistic and rather crude classification as it pools in the non-CAD group individuals that are truly healthy, but also people who have already developed the disease but are still at early stages (which might explain the low specificity observed).”