More stories

  •

    'Selfies' could be used to detect heart disease

    Sending a “selfie” to the doctor could be a cheap and simple way of detecting heart disease, according to the authors of a new study published today (Friday) in the European Heart Journal.
    The study is the first to show that it’s possible to use a deep learning computer algorithm to detect coronary artery disease (CAD) by analysing four photographs of a person’s face.
    Although the algorithm needs to be developed further and tested in larger groups of people from different ethnic backgrounds, the researchers say it has the potential to be used as a screening tool that could identify possible heart disease in people in the general population or in high-risk groups, who could be referred for further clinical investigations.
    “To our knowledge, this is the first work demonstrating that artificial intelligence can be used to analyse faces to detect heart disease. It is a step towards the development of a deep learning-based tool that could be used to assess the risk of heart disease, either in outpatient clinics or by means of patients taking ‘selfies’ to perform their own screening. This could guide further diagnostic testing or a clinical visit,” said Professor Zhe Zheng, who led the research and is vice director of the National Center for Cardiovascular Diseases and vice president of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China.
    He continued: “Our ultimate goal is to develop a self-reported application for high risk communities to assess heart disease risk in advance of visiting a clinic. This could be a cheap, simple and effective way of identifying patients who need further investigation. However, the algorithm requires further refinement and external validation in other populations and ethnicities.”
    It is known already that certain facial features are associated with an increased risk of heart disease. These include thinning or grey hair, wrinkles, ear lobe crease, xanthelasmata (small, yellow deposits of cholesterol underneath the skin, usually around the eyelids) and arcus corneae (fat and cholesterol deposits that appear as a hazy white, grey or blue opaque ring in the outer edges of the cornea). However, they are difficult for humans to use successfully to predict and quantify heart disease risk.


    Prof. Zheng, Professor Xiang-Yang Ji, who is director of the Brain and Cognition Institute in the Department of Automation at Tsinghua University, Beijing, and other colleagues enrolled 5,796 patients from eight hospitals in China to the study between July 2017 and March 2019. The patients were undergoing imaging procedures to investigate their blood vessels, such as coronary angiography or coronary computed tomography angiography (CCTA). They were divided randomly into training (5,216 patients, 90%) or validation (580, 10%) groups.
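    The randomised 90/10 split described above can be sketched as a seeded shuffle. This is an illustration only: the integer patient IDs are placeholders, and the study's actual randomisation procedure is not specified in the article.

```python
# A seeded-shuffle sketch of the 90/10 training/validation split described
# above. Integer patient IDs are placeholders; the study's actual
# randomisation procedure is not given in the article.
import random

def train_val_split(items, train_frac=0.9, seed=42):
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle for the sketch
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

patients = range(5796)                   # one placeholder ID per enrolled patient
train, val = train_val_split(patients)
print(len(train), len(val))              # 5216 580
```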
    Trained research nurses took four facial photos with digital cameras: one frontal, two profiles and one view of the top of the head. They also interviewed the patients to collect data on socioeconomic status, lifestyle and medical history. Radiologists reviewed the patients’ angiograms and assessed the degree of heart disease depending on how many blood vessels were narrowed by 50% or more (≥ 50% stenosis), and their location. This information was used to create, train and validate the deep learning algorithm.
    The researchers then tested the algorithm on a further 1,013 patients from nine hospitals in China, enrolled between April 2019 and July 2019. The majority of patients in all the groups were of Han Chinese ethnicity.
    They found that the algorithm outperformed existing methods of predicting heart disease risk (the Diamond-Forrester model and the CAD consortium clinical score). In the validation group of patients, the algorithm correctly detected heart disease in 80% of cases (the true positive rate or ‘sensitivity’) and correctly detected that heart disease was not present in 61% of cases (the true negative rate or ‘specificity’). In the test group, the sensitivity was 80% and the specificity was 54%.
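    For readers unfamiliar with these metrics, the arithmetic behind the quoted rates can be sketched as follows. The counts are hypothetical round numbers chosen to reproduce the reported percentages; the paper reports only the resulting rates.

```python
# Illustrative confusion-matrix arithmetic for the metrics quoted above.
# The counts below are hypothetical; only the rates come from the study.
def sensitivity(tp, fn):
    """True positive rate: fraction of diseased patients correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of healthy patients correctly cleared."""
    return tn / (tn + fp)

# Example: 100 diseased and 100 healthy patients in the test group
tp, fn = 80, 20   # 80% sensitivity
tn, fp = 54, 46   # 54% specificity
print(sensitivity(tp, fn))        # 0.8
print(specificity(tn, fp))        # 0.54
print(1 - specificity(tn, fp))    # false positive rate, ≈ 0.46
```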
    Prof. Ji said: “The algorithm had a moderate performance, and additional clinical information did not improve its performance, which means it could be used easily to predict potential heart disease based on facial photos alone. The cheek, forehead and nose contributed more information to the algorithm than other facial areas. However, we need to improve the specificity as a false positive rate of as much as 46% may cause anxiety and inconvenience to patients, as well as potentially overloading clinics with patients requiring unnecessary tests.”
    As well as requiring testing in other ethnic groups, limitations of the study include the fact that only one of the centres in the test group differed from the centres that provided patients for developing the algorithm, which may further limit the algorithm’s generalisability to other populations.


    In an accompanying editorial, Charalambos Antoniades, Professor of Cardiovascular Medicine at the University of Oxford, UK, and Dr Christos Kotanidis, a DPhil student working under Prof. Antoniades at Oxford, write: “Overall, the study by Lin et al. highlights a new potential in medical diagnostics… The robustness of the approach of Lin et al. lies in the fact that their deep learning algorithm requires simply a facial image as the sole data input, rendering it highly and easily applicable at large scale.”
    They continue: “Using selfies as a screening method can enable a simple yet efficient way to filter the general population towards more comprehensive clinical evaluation. Such an approach can also be highly relevant to regions of the globe that are underfunded and have weak screening programmes for cardiovascular disease. A selection process that can be done as easily as taking a selfie will allow for a stratified flow of people that are fed into healthcare systems for first-line diagnostic testing with CCTA. Indeed, the ‘high risk’ individuals could have a CCTA, which would allow reliable risk stratification with the use of the new, AI-powered methodologies for CCTA image analysis.”
    They highlight some of the limitations that Prof. Zheng and Prof. Ji also include in their paper. These include the low specificity of the test, that the test needs to be improved and validated in larger populations, and that it raises ethical questions about “misuse of information for discriminatory purposes. Unwanted dissemination of sensitive health record data, that can easily be extracted from a facial photo, renders technologies such as that discussed here a significant threat to personal data protection, potentially affecting insurance options. Such fears have already been expressed over misuse of genetic data, and should be extensively revisited regarding the use of AI in medicine.”
    The authors of the research paper agree on this point. Prof. Zheng said: “Ethical issues in developing and applying these novel technologies is of key importance. We believe that future research on clinical tools should pay attention to the privacy, insurance and other social implications to ensure that the tool is used only for medical purposes.”
    Prof. Antoniades and Dr. Kotanidis also write in their editorial that defining CAD as ≥ 50% stenosis in one major coronary artery “may be a simplistic and rather crude classification as it pools in the non-CAD group individuals that are truly healthy, but also people who have already developed the disease but are still at early stages (which might explain the low specificity observed).”

  •

    Skat and poker: More luck than skill?

    Chess requires playing ability and strategic thinking; in roulette, chance determines victory or defeat, gain or loss. But what about skat and poker? Are they games of chance or games of skill in game theory? This classification also determines whether play may involve money. Prof. Dr Jörg Oechssler and his team of economists at Heidelberg University studied this question, developing a rating system similar to the Elo system used for chess. According to their study, both skat and poker involve more than 50 per cent luck, yet over the long term, skill prevails.
    “Whether a game is one of skill or luck also determines whether it can be played for money. But assigning a game to these categories is difficult owing to the many shades of gradation between extremes like roulette and chess,” states Prof. Oechssler. Courts in Germany legally classify poker as a game of chance that can be played only in government-sanctioned casinos, whereas skat is considered a game of skill. This classification stems from a court decision taken in 1906. One frequently used assessment criterion is whether the outcome for one player depends more than 50 per cent on luck. But how can this be measured objectively?
    It is this question the Heidelberg researchers investigated in their game-theoretic study. Using data from more than four million online games of chess, poker, and skat, they developed a rating system for poker and skat based on the Elo method for chess, which calculates the relative skill levels of individual players. “Because chess is purely a game of skill, the rating distribution is very wide, ranging from 1,000 for a novice to over 2,800 for the current world champion. So the wider the distribution, the more important skill is,” explains Dr Peter Dürsch. In a game involving more luck and chance, the numbers are therefore not likely to be so far apart.
    The Heidelberg research confirms exactly that: the distribution is much narrower in poker and skat. Whereas the standard deviation — the average deviation from the mean — for chess is over 170, the other two games did not exceed 30. To create a standard of comparison for a game involving more than 50 per cent luck, the researchers replaced every other game in their chess data set with a coin toss. This produced a deviation of 45, which is still much higher than poker and skat. “Both games fall below the 50 per cent skill level, and therefore depend mainly on luck,” states Marco Lambrecht. “Skill, however, does prevail in the long run. Our analyses show that after about one hundred games, a poker player who is one standard deviation better than his opponent is 75 per cent more likely to have won more games than his opponent.”
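    The Elo machinery behind these rating distributions can be sketched as follows. The K-factor and the 400-point logistic scale are the conventional chess choices; the exact parameters the Heidelberg team used for poker and skat are an assumption here.

```python
# Minimal Elo update rule, the method the researchers adapted for skat and
# poker. K=20 and the 400-point scale are conventional chess defaults, not
# parameters taken from the study.
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=20):
    """Return updated ratings after one game; score_a is 1 (win), 0.5, or 0."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - e_a))
    return r_a_new, r_b_new

# A 1,200-rated player upsets a 1,400-rated player:
print(elo_update(1200, 1400, 1.0))  # ratings move toward each other
```

    Because ratings only move when results deviate from expectation, repeated luck-driven outcomes cancel out, which is why a narrow rating spread signals a luck-heavy game.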
    In principle, the method can be applied to all games where winners are determined, report the researchers. The percentage of skill in the popular card game Mau-Mau, for example, is far less than poker, whereas the Chinese board game Go involves even more skill than chess.

    Story Source:
    Materials provided by University of Heidelberg. Note: Content may be edited for style and length.

  •

    Scientists slow and steer light with resonant nanoantennas

    Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting and exciting atoms and molecules can become very small. If scientists can put the brakes on light particles, or photons, it would open the door to a host of new technology applications.
    Now, in a paper published Aug. 17 in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.
    “We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent — as is the case with many silicon-based applications.”
    Make and manufacture
    Before they can manipulate light, the resonators need to be fabricated, and that poses a number of challenges.
    A central component of the device is an extremely thin layer of silicon, which traps light very efficiently and has low absorption in the near-infrared, the spectrum of light the scientists want to control. The silicon rests atop a wafer of transparent material (sapphire, in this case) into which the researchers direct an electron microscope “pen” to etch their nanoantenna pattern. The pattern must be drawn as smoothly as possible, as these antennas serve as the walls in the echo-chamber analogy, and imperfections inhibit the light-trapping ability.


    “High-Q resonances require the creation of extremely smooth sidewalls that don’t allow the light to leak out,” said Dionne, who is also Senior Associate Vice Provost of Research Platforms/Shared Facilities. “That can be achieved fairly routinely with larger micron-scale structures, but is very challenging with nanostructures which scatter light more.”
    Pattern design plays a key role in creating the high-Q nanostructures. “On a computer, I can draw ultra-smooth lines and blocks of any given geometry, but the fabrication is limited,” said Lawrence. “Ultimately, we had to find a design that gave good light-trapping performance but was within the realm of existing fabrication methods.”
    High quality (factor) applications
    Tinkering with the design has resulted in what Dionne and Lawrence describe as an important platform technology with numerous practical applications.
    The devices demonstrated so-called quality factors up to 2,500, which is two orders of magnitude (or 100 times) higher than any similar devices have previously achieved. Quality factors are a measure describing resonance behavior, which in this case is proportional to the lifetime of the light. “By achieving quality factors in the thousands, we’re already in a nice sweet spot for some very exciting technological applications,” said Dionne.
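    As a rough illustration of what a quality factor of 2,500 means physically: using the standard relation between Q and resonator lifetime (tau = Q/omega), and assuming a near-infrared wavelength of 1,550 nm (a typical value, not one taken from the paper), the trapped light persists for roughly a couple of picoseconds, i.e. hundreds of optical cycles.

```python
# Back-of-envelope photon lifetime implied by a quality factor, via the
# standard relation tau = Q / omega, with omega = 2*pi*c / lambda.
# The 1,550 nm wavelength is an assumed near-infrared value, not a figure
# from the paper.
import math

C = 299_792_458.0  # speed of light, m/s

def photon_lifetime(q, wavelength_m):
    omega = 2 * math.pi * C / wavelength_m   # angular frequency, rad/s
    return q / omega                          # resonator lifetime, s

tau = photon_lifetime(2500, 1550e-9)
print(f"{tau * 1e12:.2f} ps")  # on the order of 2 ps
```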


    For example, biosensing. A single biomolecule is so small that it is essentially invisible. But passing light over a molecule hundreds or thousands of times can greatly increase the chance of creating a detectable scattering effect.
    Dionne’s lab is working on applying this technique to detecting COVID-19 antigens — molecules that trigger an immune response — and antibodies — proteins produced by the immune system in response. “Our technology would give an optical readout like the doctors and clinicians are used to seeing,” said Dionne. “But we have the opportunity to detect a single virus or very low concentrations of a multitude of antibodies owing to the strong light-molecule interactions.” The design of the high-Q nanoresonators also allows each antenna to operate independently to detect different types of antibodies simultaneously.
    Though the pandemic spurred her interest in viral detection, Dionne is also excited about other applications, such as LIDAR — or Light Detection and Ranging, which is laser-based distance measuring technology often used in self-driving vehicles — that this new technology could contribute to. “A few years ago I couldn’t have imagined the immense application spaces that this work would touch upon,” said Dionne. “For me, this project has reinforced the importance of fundamental research — you can’t always predict where fundamental science is going to go or what it’s going to lead to, but it can provide critical solutions for future challenges.”
    This innovation could also be useful in quantum science. For example, splitting photons to create entangled photons that remain connected on a quantum level even when far apart would typically require large tabletop optical experiments with big expensive precisely polished crystals. “If we can do that, but use our nanostructures to control and shape that entangled light, maybe one day we will have an entanglement generator that you can hold in your hand,” Lawrence said. “With our results, we are excited to look at the new science that’s achievable now, but also trying to push the limits of what’s possible.”
    Additional Stanford co-authors include graduate students David Russell Barton III and Jefferson Dixon, research associate Jung-Hwan Song, former research scientist Jorik van de Groep, and Mark Brongersma, professor of materials science and engineering. This work was funded by the DOE-EFRC, “Photonics at Thermodynamic Limits,” as well as by the AFOSR. Dionne is also an associate professor, by courtesy, of radiology and a member of the Wu Tsai Neurosciences Institute and Bio-X.

  •

    First daily surveillance of emerging COVID-19 hotspots

    Over the course of the coronavirus epidemic, COVID-19 outbreaks have hit communities across the United States. As clusters of infection shift over time, local officials are forced into a whack-a-mole approach to allocating resources and enacting public health policies. In a new study led by the University of Utah, geographers published the first effort to conduct daily surveillance of emerging COVID-19 hotspots for every county in the contiguous U.S. The researchers hope that timely, localized data will help inform future decisions.
    Using innovative space-time statistics, the researchers detected geographic areas where the population had an elevated risk of contracting the virus. They ran the analysis every day using daily COVID-19 case counts from Jan. 22 to June 5, 2020, to establish regional clusters, defined as a collection of disease cases closely grouped in time and space. For the first month, the clusters were very large, especially in the Midwest. Starting on April 25, the clusters became smaller and more numerous, a trend that persisted until the end of the study period.
    The article was published online on June 27, 2020, in the journal Spatial and Spatio-temporal Epidemiology. The study builds on the team’s previous work by evaluating the characteristics of each cluster and how the characteristics change as the pandemic unfolds.
    “We applied a clustering method that identifies areas of concern, and also tracks characteristics of the clusters — are they growing or shrinking, what is the population density like, is relative risk increasing or not?” said Alexander Hohl, lead author and assistant professor at the Department of Geography at the U. “We hope this can offer insights into the best strategies for controlling the spread of COVID-19, and to potentially predict future hotspots.”
    The research team, including Michael Desjardins of Johns Hopkins Bloomberg School of Public Health’s Spatial Science for Public Health Center and Eric Delmelle and Yu Lan of the University of North Carolina at Charlotte, has created a web application of the clusters that the public can check daily at COVID19scan.net. The app is just a start, Hohl warned. State officials would need to do smaller-scale analysis to identify specific locations for intervention.
    “The app is meant to show where officials should prioritize efforts — it’s not telling you where you will or will not contract the virus,” Hohl said. “I see this more as an inspiration, rather than a concrete tool, to guide authorities to prevent or respond to outbreaks. It also gives the public a way to see what we’re doing.”
    The researchers used daily case counts reported in the COVID-19 Data Repository from the Center for Systems Science and Engineering at Johns Hopkins University, which lists cases at the county level in the contiguous U.S. They used the U.S. Census website’s 2018 five-year population estimates within each county.
    To establish the clusters, they ran a space-time scan statistic that takes into account the observed number of cases and the underlying population within a given geographic area and timespan. Using SaTScan, a widely used software package, they identified areas of significantly elevated risk of COVID-19. Due to the large variation between counties, evaluating risk is tricky: rural areas and small, single counties may not have large populations, so just a handful of cases can make risk go up significantly.
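    The core of such a scan statistic can be sketched as a Poisson likelihood ratio comparing observed and expected cases inside a candidate space-time window against the rest of the study region. This is a simplified version of the method SaTScan implements; the enumeration of candidate windows and the Monte Carlo significance testing are omitted.

```python
# Sketch of the Poisson likelihood-ratio core of a space-time scan statistic
# (the quantity maximized over candidate windows). Simplified for
# illustration: real scan software also enumerates windows and assesses
# significance by Monte Carlo simulation.
import math

def log_likelihood_ratio(c, e, c_total, e_total):
    """c, e: observed and expected cases inside a candidate window;
    c_total, e_total: totals over the whole study region and period."""
    if c <= e:                       # only elevated-risk windows are of interest
        return 0.0
    c_out = c_total - c              # observed cases outside the window
    e_out = e_total - e              # expected cases outside the window
    return (c * math.log(c / e)
            + c_out * math.log(c_out / e_out))

# A window with 50 observed cases where 20 were expected, out of 500 total:
print(log_likelihood_ratio(50, 20, 500, 500))  # positive => elevated risk
```

    The window maximizing this ratio is the "most likely cluster"; this is also why small counties are tricky, since a small expected count e makes the ratio jump on just a handful of cases.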
    This study is the research group’s third iteration using the statistical method for detecting and monitoring COVID-19 clusters in the U.S. Back in May, the group published their first geographical study to use the tracking method, which was also one of the first papers published by geographers analyzing COVID-19. In June, they published an update.
    “May seems like an eternity ago because the pandemic is changing so rapidly,” Hohl said. “We continue to get feedback from the research community and are always trying to make the method better. This is just one method to zero in on communities that are at risk.”
    A big limitation of the analysis is the data itself. COVID-19 reporting is different for each state. There’s no uniform way that information flows from the labs that confirm the diagnoses, to the state health agencies to the COVID-19 Data Repository from the Center for Systems Science and Engineering at Johns Hopkins University, where the study gets its data. Also, the testing efforts are quite different between states, and the team is working to adjust the number of observed cases to reflect a state’s efforts. Hohl is also working with other U researchers to look at the relationship between social media and COVID-19 to predict the future trajectory of outbreaks.
    “We’ve been working on this since COVID-19 first started and the field is moving incredibly fast,” said Hohl. “It’s so important to get the word out and react to what else is being published so we can take the next step in the project.”

    Story Source:
    Materials provided by University of Utah. Original written by Lisa Potter. Note: Content may be edited for style and length.

  •

    New 'molecular computers' find the right cells

    Scientists have demonstrated a new way to precisely target cells by distinguishing them from neighboring cells that look quite similar.
    Even cells that become cancerous may differ from their healthy neighbors in only a few subtle ways. A central challenge in the treatment of cancer and many other diseases is being able to spot the right cells while sparing all others.
    In a paper published 20 August in Science, a team of researchers at the University of Washington School of Medicine and the Fred Hutchinson Cancer Research Center in Seattle describe the design of new nanoscale devices made of synthetic proteins. These target a therapeutic agent only to cells with specific, predetermined combinations of cell surface markers.
    Remarkably, these ‘molecular computers’ operate all on their own and can search out the cells that they were programmed to find.
    “We were trying to solve a key problem in medicine, which is how to target specific cells in a complex environment,” said Marc Lajoie, a lead author of the study and recent postdoctoral scholar at the UW Medicine Institute for Protein Design. “Unfortunately, most cells lack a single surface marker that is unique to just them. So, to improve cell targeting, we created a way to direct almost any biological function to any cell by going after combinations of cell surface markers.”
    The tool they created is called Co-LOCKR, or Colocalization-dependent Latching Orthogonal Cage/Key pRoteins. It consists of multiple synthetic proteins that, when separated, do nothing. But when the pieces come together on the surface of a targeted cell, they change shape, thereby activating a sort of molecular beacon.
    The presence of these beacons on a cell surface can guide a predetermined biological activity — like cell killing — to a specific, targeted cell.
    The researchers demonstrated that Co-LOCKR can focus the cell-killing activity of CAR T cells. In the lab, they mixed Co-LOCKR proteins, CAR T cells, and a soup of potential target cells. Some of these had just one marker, others had two or three. Only the cells with the predetermined marker combination were killed by the T cells. If a cell also had a predetermined “healthy marker,” then that cell was spared.
    “T cells are extremely efficient killers, so the fact that we can limit their activity on cells with the wrong combination of antigens yet still rapidly eliminate cells with the correct combination is game-changing,” said Alexander Salter, another lead author of the study and an M.D./Ph.D. student in the medical scientist program at the UW School of Medicine. He is training in Stanley Riddell’s lab at the Fred Hutchinson Cancer Research Center.
    This cell-targeting strategy relies entirely on proteins, which sets it apart from most other methods that rely on engineered cells and operate on slower timescales.
    “We believe Co-LOCKR will be useful in many areas where precise cell targeting is needed, including immunotherapy and gene therapy,” said David Baker, professor of biochemistry at the UW School of Medicine and director of the Institute for Protein Design.

    Story Source:
    Materials provided by University of Washington Health Sciences/UW Medicine. Original written by Ian Haydon. Note: Content may be edited for style and length.

  •

    Routing apps can deliver real-time insights into traffic emissions

    Routing apps such as Google Maps or Nokia’s Here platform could offer a cost-effective way of calculating emissions hotspots in real time, say researchers at the University of Birmingham.
    These apps routinely capture detailed information as motorists use the GPS technology to plan and navigate routes. This data could be invaluable for researchers and planners who need to better understand traffic flows on busy roads, according to new research published in Weather, the journal of the Royal Meteorological Society.
    Current emissions data from road transport is collated from a number of different sources by the National Atmospheric Emissions Inventory and this is fed into annual reports to demonstrate compliance with emissions targets. Many of these traditional air quality models rely on the assumption that traffic is freely flowing at the legal speed limit — whereas in many areas, traffic flow will vary through the day. These models also overlook finer-grained detail from individual roads or junctions that might be emissions hotspots at particular times of the day.
    Although more detailed information might be available to city planners when designing new road layouts or traffic improvement schemes, it requires costly modelling by consultancies.
    Making use of the crowd-sourced data from routing apps could, the researchers argue, provide a low-cost and highly effective alternative to both high level and localised modelling.
    Helen Pearce, a PhD researcher at the University of Birmingham’s School of Geography, Earth and Environmental Sciences, led the study. She says: “A lot of guidelines and policy on air quality management are based on hourly time snapshots and on the average amount of traffic on a typical day of the year. The difficulty is that traffic can vary an enormous amount within that time window and along individual roads, so in order to make decisions that really work ‘on the ground’, we need to be able to access and make use of this finer-grained detail.”
    The approach suggested by the team was tested on roads in Birmingham’s busy city centre. Information on the time taken to travel a series of road links was obtained via a map provider’s API (application programming interface). This is conceptually similar to the approach that an individual would take to calculate the time of a journey, but using the API the researchers were able to obtain information for multiple roads and at multiple times of the day.
    Following a successful preliminary study, the team scaled up their trial to include 920 major road links across Birmingham city centre, extracting information about these roads at hourly intervals. The researchers found they were able to clearly demonstrate the changes in traffic flow between typical weekdays, weekends, and also the effects of specific social events.
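    The data-collection step described above can be sketched as follows. The stub `query_fn` stands in for a map provider's routing API call, which varies by provider and is not specified in the article; the link names and travel times below are invented for illustration.

```python
# Hedged sketch of the hourly data-collection step: for each road link, ask a
# routing API for the current travel time, from which average link speed (the
# input to speed-related emission factors) can be derived. `query_fn` is a
# placeholder for a real map provider's API call; link names and times are
# invented for illustration.
def snapshot(links, query_fn):
    """One time-slice: map each road link to its current travel time (s)."""
    return {link: query_fn(link) for link in links}

def link_speed_kmh(length_m, travel_time_s):
    """Average speed over a link, the input to speed-related emission curves."""
    return (length_m / 1000.0) / (travel_time_s / 3600.0)

# Example with a stubbed API returning fixed travel times:
fake_api = {"A38_north": 120.0, "A4540_ring": 95.0}.get
times = snapshot(["A38_north", "A4540_ring"], fake_api)
print(link_speed_kmh(1500, times["A38_north"]))  # ~45 km/h over a 1.5 km link
```

    Repeating the snapshot at hourly intervals over the 920 links yields the weekday/weekend and event-driven traffic profiles the researchers describe.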
    Speed-related emissions could then be calculated using a combination of sources, including Defra’s speed-related emission function database and traffic count data from the Department for Transport. This information also helped the researchers take into account the relative split between petrol and diesel engines.
    “Our approach could provide significant insights into real-world vehicle behaviours,” says Dr Zhaoya Gong, corresponding author on the study. “As we start to see more electric and hybrid vehicles on the road, the emissions picture starts to get more complicated because there will be fewer exhaust emissions, but we will still see pollution from brakes, tyres and road surface wear — all these will vary significantly according to the speed of the vehicle, so this sort of data will be vital for developing accurate emissions models.”

    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  •

    Contact tracing apps unlikely to contain COVID-19 spread: UK researchers

    Contact tracing apps used to reduce the spread of COVID-19 are unlikely to be effective without proper uptake and support from concurrent control measures, finds a new study by UCL researchers.
    The systematic review, published in Lancet Digital Health, shows that evidence around the effectiveness of automated contact tracing systems is currently very limited, and large-scale manual contact tracing alongside other public health control measures — such as physical distancing and closure of indoor spaces such as pubs — is likely to be required in conjunction with automated approaches.
    The team found 15 relevant studies by reviewing more than 4,000 papers on automated and partially-automated contact tracing, and analysed these to understand the potential impact these tools could have in controlling the COVID-19 pandemic.
    Lead author Dr Isobel Braithwaite (UCL Institute of Health Informatics) said: “Across a number of modelling studies, we found a consistent picture that although automated contact tracing could support manual contact tracing, the systems will require large-scale uptake by the population and strict adherence to quarantine advice by notified contacts in order to have a significant impact on reducing transmission.”
    The authors suggest that even under optimistic assumptions — where 75-80% of UK smartphone owners are using a contact tracing app, and 90-100% of identified potential close contacts initially adhere to quarantine advice — automated contact tracing methods would still need to be used within an integrated public health response to prevent exponential growth of the epidemic.
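    The emphasis on uptake follows from a simple observation: a digital exposure notification requires both the index case and the contact to be running the app, so the fraction of contact events the system can see scales roughly with the square of uptake. This quadratic rule of thumb is an illustration of the modelling logic, not a figure from the review.

```python
# Why high uptake matters: both parties in a contact must have the app for a
# notification to fire, so coverage of contact events scales roughly with the
# square of uptake. An illustrative rule of thumb, not a result from the review.
def contact_coverage(uptake):
    """Fraction of contact pairs in which both people have the app."""
    return uptake ** 2

for uptake in (0.75, 0.80):
    print(f"{uptake:.0%} uptake -> {contact_coverage(uptake):.0%} of contact pairs covered")
# even 75-80% uptake covers only ~56-64% of contact pairs
```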
    In total, 4,033 papers published between 1 Jan 2000 and 14 April 2020 were reviewed, which allowed researchers to identify 15 papers with useful data. The seven studies that addressed automated contact tracing directly were modelling studies that all focused on COVID-19. Five studies of partially-automated contact tracing were descriptive observational studies or case studies, and three studies of automated contact detection looked at a similar disease context to COVID-19, but did not include subsequent tracing or contact notification.


    Partially-automated systems may have some automated processes, for instance in determining the duration of follow-up of contacts required, but do not use proximity of smartphones as a proxy for contact with an infected person.
    Analysis of automated contact tracing apps generally suggested that high population uptake of relevant apps is required alongside other control measures, while partially-automated systems often had better follow-up and slightly more timely intervention.
    Dr Braithwaite said: “Although automated contact tracing shows some promise in helping reduce transmission of COVID-19 within communities, our research highlighted the urgent need for further evaluation of these apps within public health practice, since none of the studies we found provided real-world evidence of their effectiveness, and for a better understanding of how they could support manual contact tracing systems.”
    The review shows that, at present, there is insufficient evidence to justify reliance on automated contact tracing approaches without additional extensive public health control measures.
    Dr Robert Aldridge (UCL Institute of Health Informatics) added: “We currently do not have good evidence about whether a notification from a smartphone app, advising isolation because of contact with a case of COVID-19, is as effective in breaking chains of transmission as the same advice provided by a public health contact tracer. We urgently need to study this evidence gap and examine how automated approaches can be integrated with existing contact tracing and disease control strategies, and generate evidence on whether these new digital approaches are cost-effective and equitable.”
    If implemented effectively, and if quarantine advice is adhered to, automated contact tracing may offer benefits such as reduced reliance on human recall of close contacts (which could enable identification of additional at-risk individuals), real-time notification of potentially affected people, and savings in resources.

    Dr Braithwaite added: “We should be mindful that automated approaches raise potential privacy and ethics concerns, and also rely on high smartphone ownership, so they may be of very limited value in some countries. Too much reliance on automated contact tracing apps may also increase the risk of COVID-19 for vulnerable and digitally-excluded groups such as older people and people experiencing homelessness.”
    If implementing automated contact tracing technology, the authors say that decision-makers should thoroughly assess available evidence around its effectiveness, privacy and equality considerations, monitoring this as the evidence base evolves.
    They add that plans to properly integrate contact tracing apps within comprehensive outbreak response strategies are important, and their impacts should be evaluated rigorously. A combination of different approaches is needed to control COVID-19, and the review concludes that contact tracing apps have the potential to support that but they are not a panacea.
    This study is co-authored by researchers from the UCL Public Health Data Science Research Group, Institute of Health Informatics, Department of Applied Health Research, and Collaborative Centre for Inclusion Health.
    *A systematic review carefully identifies all the relevant published and unpublished studies, rates them for quality and synthesises their findings.
    Study limitations
    As part of this systematic review, researchers did not find any epidemiological studies comparing automated to manual contact tracing systems and their effectiveness in identifying contacts. Other limitations include the lack of eligible empirical studies of fully-automated contact tracing and a paucity of evidence related to ethical concerns or cost-effectiveness.

    A how-to guide for teaching GIS courses online with hardware or software in the cloud

    In a new paper this week, geographer Forrest Bowlick at the University of Massachusetts Amherst and colleagues at Texas A&M offer first-hand accounts of what is required for GIS instructors and IT administrators to set up virtual computing specifically for providing state-of-the-art geographic information systems (GIS) instruction.
    Bowlick says, “Our research is very applicable in the current remote learning era that we’re working through, because it provides expertly driven insight into how to set up a virtual computing environment in different modes: with hardware and with software in the cloud. While tailored to those needing GIS support, it is also very applicable for other high-performance software needs.”
    “By capturing the experiences of both setting up the system and of students using the system, we provide an important resource for others needing to make this investment of time, equipment and energy,” he adds. Such technical practice is becoming required for GIS and other instruction, he points out.
    Writing in the Journal of Geography in Higher Education, the authors compare an onsite server set-up and a virtualized cloud set-up scenario and report some student feedback on using a course taught this way. The growing need for fast computers, they point out, has made it harder for everyone to build the machines they need. “Our work talks about how to build fast computers in different ways and shares what we know about making fast computers for digital geography,” Bowlick notes.
    He says, “UMass is just one of several programs nationally, but regionally it’s very attractive, especially at the graduate level, because there are not that many in New England. Ours certainly started at the right time, too. With the turn toward using more computational skills and GIS practices, knowing how to use different computing environments and programming languages is becoming a more fundamental need in education.”
    Bowlick has directed a one-year M.S. geography degree program with an emphasis in GIS at UMass Amherst since 2017. He says there may be 10 or 15 students from every college on campus, with different majors, in the introductory course in a given semester. They need to gain the fundamentals of spatial thinking, software operation and problem solving, applicable to the diverse interests that students bring to the course.
    Generally, these applications involve how to think through spatial problems on such topics as political geography, for example, which might ask who is voting and where, or on gerrymandering and how to discover it. Others are creating COVID-19 virus maps and spatial data to show its prevalence for spatial epidemiology and health geography, while others are modeling ecosystems for fish and wildlife.
    Bowlick explains that geographic information science is “a bridging science” — a suite of technologies, a way of thinking and a way to store spatial data including satellite systems for navigation. GIS handles imagery, computer mapping, spatial planning, modeling land cover over time, even helping businesses decide where to open their next location.
    GIS was first developed in the late 60s when the Canada Land Inventory needed ways to store, manage and analyze land resource maps over huge areas using new computer technology, Bowlick says. His two co-authors at Texas A&M, both experienced GIS instructors, are Dan Goldberg, an associate professor in geography, and Paul Stine, an IT system administrator for geography.
    The authors describe the setup, organization and execution of teaching an introductory WebGIS course while considering student experiences in such a course.
    The paper also defines an operational set of resource metrics needed to support the computing needs of students using virtual machines for server-based CyberGIS classes, and compares the costs and components needed to build and support an on-premise private cloud teaching environment for a WebGIS course versus a comparable cloud-based service provider.
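    The kind of resource-metric planning the authors describe can be sketched as a simple sizing exercise. The per-student figures below are illustrative assumptions of ours, not values reported in the paper:

```python
# Hypothetical per-student VM allocation for a WebGIS course.
PER_STUDENT = {"vcpus": 2, "ram_gb": 8, "disk_gb": 40}

def class_requirements(n_students: int, headroom: float = 1.25) -> dict:
    """Total resources for a class of VMs, with a capacity headroom factor."""
    return {k: round(v * n_students * headroom) for k, v in PER_STUDENT.items()}

# Sized for the 15-student introductory section mentioned in the article.
print(class_requirements(15))
```

    Totals like these can then be priced against both an on-premise server purchase and a cloud provider's per-hour VM rates, which is the comparison the paper reports.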