More stories

  • Routing apps can deliver real-time insights into traffic emissions

    Routing apps such as Google Maps or Nokia’s Here platform could offer a cost-effective way of calculating emissions hotspots in real time, say researchers at the University of Birmingham.
    These apps routinely capture detailed information as motorists use the GPS technology to plan and navigate routes. This data could be invaluable for researchers and planners who need to better understand traffic flows on busy roads, according to new research published in Weather, the journal of the Royal Meteorological Society.
    Current emissions data from road transport is collated from a number of different sources by the National Atmospheric Emissions Inventory and this is fed into annual reports to demonstrate compliance with emissions targets. Many of these traditional air quality models rely on the assumption that traffic is freely flowing at the legal speed limit — whereas in many areas, traffic flow will vary through the day. These models also overlook finer-grained detail from individual roads or junctions that might be emissions hotspots at particular times of the day.
    Although more detailed information might be available to city planners when designing new road layouts or traffic improvement schemes, it requires costly modelling by consultancies.
    Making use of the crowd-sourced data from routing apps could, the researchers argue, provide a low-cost and highly effective alternative to both high level and localised modelling.
    Helen Pearce, a PhD researcher at the University of Birmingham’s School of Geography, Earth and Environmental Sciences, led the study. She says: “A lot of guidelines and policy on air quality management are based on hourly time snapshots and on the average amount of traffic on a typical day of the year. The difficulty is that traffic can vary an enormous amount within that time window and along individual roads, so in order to make decisions that really work ‘on the ground’, we need to be able to access and make use of this finer-grained detail.”
    The approach suggested by the team was tested on roads in Birmingham’s busy city centre. Information on the time taken to travel a series of road links was obtained via a map provider’s API (application programming interface). This is conceptually similar to the approach that an individual would take to calculate the time of a journey, but using the API the researchers were able to obtain information for multiple roads and at multiple times of the day.
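    The paper does not name a specific provider or endpoint, so the Python sketch below only illustrates the general pattern: a placeholder routing API is queried for the travel time of each road link at a series of departure times. The URL, key and response field are assumptions, not the study's actual setup.

        import datetime as dt
        import requests

        # Placeholder endpoint, key and response field; the study does not
        # publish the provider's actual API details.
        API_URL = "https://routing.example.com/v1/route"
        API_KEY = "YOUR_KEY"

        ROAD_LINKS = [
            {"id": "A38_link_01", "origin": "52.4862,-1.8904", "dest": "52.4950,-1.8850"},
            {"id": "A4540_link_07", "origin": "52.4731,-1.9121", "dest": "52.4790,-1.9050"},
        ]

        def link_travel_time(link, departure):
            """Request the travel time (seconds) for one road link at one departure time."""
            params = {
                "key": API_KEY,
                "origin": link["origin"],
                "destination": link["dest"],
                "departure_time": departure.isoformat(),
            }
            response = requests.get(API_URL, params=params, timeout=30)
            response.raise_for_status()
            return response.json()["duration_seconds"]  # field name is assumed

        # Query every link at hourly intervals across one day, mirroring the scaled-up trial.
        day = dt.datetime(2020, 3, 2)
        travel_times = {
            (link["id"], hour): link_travel_time(link, day + dt.timedelta(hours=hour))
            for link in ROAD_LINKS
            for hour in range(24)
        }
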
    Following a successful preliminary study, the team scaled up their trial to include 920 major road links across Birmingham city centre, extracting information about these roads at hourly intervals. The researchers found they were able to clearly demonstrate the changes in traffic flow between typical weekdays, weekends, and also the effects of specific social events.
    Speed-related emissions could then be calculated using a combination of sources, including Defra’s speed-related emission function database and traffic count data from the Department for Transport. This information also helped the researchers take into account the relative split between petrol and diesel engines.
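    As a rough illustration of that calculation step, the sketch below converts a link's travel time into an average speed and applies a made-up, U-shaped speed-emission curve weighted by an assumed petrol/diesel split; real applications would use Defra's published speed-related emission functions and Department for Transport counts.

        # Illustrative speed-to-emissions step; the curve and splits below are
        # invented, NOT Defra's actual emission functions or DfT counts.
        LINK_LENGTH_KM = {"A38_link_01": 1.1, "A4540_link_07": 0.9}   # assumed lengths
        FLEET_SPLIT = {"petrol_car": 0.55, "diesel_car": 0.45}        # assumed split

        def emission_factor_g_per_km(speed_kmh, vehicle):
            """Hypothetical U-shaped speed-emission curve (grams NOx per vehicle-km)."""
            base = {"petrol_car": 0.3, "diesel_car": 0.6}[vehicle]
            return base * (1.0 + ((speed_kmh - 60.0) / 60.0) ** 2)

        def link_emissions_g_per_hour(link_id, travel_time_s, vehicles_per_hour):
            """Combine link speed, traffic volume and fleet split into an hourly total."""
            speed_kmh = LINK_LENGTH_KM[link_id] / (travel_time_s / 3600.0)
            return sum(
                share * vehicles_per_hour * LINK_LENGTH_KM[link_id]
                * emission_factor_g_per_km(speed_kmh, vehicle)
                for vehicle, share in FLEET_SPLIT.items()
            )

        # A congested hour: 180 s to cover 1.1 km is roughly 22 km/h.
        print(round(link_emissions_g_per_hour("A38_link_01", 180, vehicles_per_hour=1200), 1))
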
    “Our approach could provide significant insights into real-world vehicle behaviours,” says Dr Zhaoya Gong, corresponding author on the study. “As we start to see more electric and hybrid vehicles on the road, the emissions picture starts to get more complicated because there will be less exhaust emissions, but we will still see pollution from brakes, tyres and road surface wear — all these will vary significantly according to the speed of the vehicle so this sort of data will be vital for developing accurate emissions models.”

    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Contact tracing apps unlikely to contain COVID-19 spread: UK researchers

    Contact tracing apps used to reduce the spread of COVID-19 are unlikely to be effective without proper uptake and support from concurrent control measures, finds a new study by UCL researchers.
    The systematic review*, published in Lancet Digital Health, shows that evidence around the effectiveness of automated contact tracing systems is currently very limited, and large-scale manual contact tracing alongside other public health control measures — such as physical distancing and closure of indoor spaces such as pubs — is likely to be required in conjunction with automated approaches.
    The team found 15 relevant studies by reviewing more than 4,000 papers on automated and partially-automated contact tracing, and analysed these to understand the potential impact these tools could have in controlling the COVID-19 pandemic.
    Lead author Dr Isobel Braithwaite (UCL Institute of Health Informatics) said: “Across a number of modelling studies, we found a consistent picture that although automated contact tracing could support manual contact tracing, the systems will require large-scale uptake by the population and strict adherence to quarantine advice by contacts notified to have a significant impact on reducing transmission.”
    The authors suggest that even under optimistic assumptions — where 75-80% of UK smartphone owners are using a contact tracing app, and 90-100% of identified potential close contacts initially adhere to quarantine advice — automated contact tracing methods would still need to be used within an integrated public health response to prevent exponential growth of the epidemic.
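    As a back-of-the-envelope illustration of why uptake matters so much (the figures below are assumptions, not results from the review): an app only detects a contact event if both people in the pair have it installed, and the notification only helps if the contact then quarantines.

        # Back-of-the-envelope figures (assumptions, not study results).
        app_uptake = 0.78         # share of smartphone owners with the app
        smartphone_owners = 0.80  # share of the population owning a smartphone
        adherence = 0.95          # share of notified contacts who quarantine

        population_coverage = app_uptake * smartphone_owners
        pair_detection = population_coverage ** 2         # both sides of a contact pair need the app
        effective_fraction = pair_detection * adherence   # detected AND acted upon

        print(f"Population running the app:         {population_coverage:.0%}")
        print(f"Contact pairs with both covered:    {pair_detection:.0%}")
        print(f"Contacts detected and quarantining: {effective_fraction:.0%}")
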
    In total, 4,033 papers published between 1 Jan 2000 and 14 April 2020 were reviewed, which allowed researchers to identify 15 papers with useful data. The seven studies that addressed automated contact tracing directly were modelling studies that all focused on COVID-19. Five studies of partially-automated contact tracing were descriptive observational studies or case studies, and three studies of automated contact detection looked at a similar disease context to COVID-19, but did not include subsequent tracing or contact notification.

    Partially-automated systems may have some automated processes, for instance in determining the duration of follow-up of contacts required, but do not use proximity of smartphones as a proxy for contact with an infected person.
    Analysis of automated contact tracing apps generally suggested that high population uptake of relevant apps is required alongside other control measures, while partially-automated systems often had better follow-up and slightly more timely intervention.
    Dr Braithwaite said: “Although automated contact tracing shows some promise in helping reduce transmission of COVID-19 within communities, our research highlighted the urgent need for further evaluation of these apps within public health practice, as none of the studies we found provided real-world evidence of their effectiveness, and to improve our understanding of how they could support manual contact tracing systems.”
    The review shows that, at present, there is insufficient evidence to justify reliance on automated contact tracing approaches without additional extensive public health control measures.
    Dr Robert Aldridge (UCL Institute of Health Informatics) added: “We currently do not have good evidence about whether a notification from a smartphone app is as effective in breaking chains of transmission by giving advice to isolate due to contact with a case of COVID-19 when compared to advice provided by a public health contact tracer. We urgently need to study this evidence gap and examine how automated approaches can be integrated with existing contact tracing and disease control strategies, and generate evidence on whether these new digital approaches are cost-effective and equitable.”
    If implemented effectively, and if quarantine advice is adhered to, automated contact tracing may offer benefits such as reduced reliance on human recall of close contacts (which could help identify additional at-risk individuals), real-time notification of potentially affected people, and savings on resources.

    Dr Braithwaite added: “We should be mindful that automated approaches raise potential privacy and ethics concerns, and also rely on high smartphone ownership, so they may be of very limited value in some countries. Too much reliance on automated contact tracing apps may also increase the risk of COVID-19 for vulnerable and digitally-excluded groups such as older people and people experiencing homelessness.”
    The authors say that decision-makers implementing automated contact tracing technology should thoroughly assess the available evidence on its effectiveness, privacy and equality considerations, and monitor this as the evidence base evolves.
    They add that plans to properly integrate contact tracing apps within comprehensive outbreak response strategies are important, and their impacts should be evaluated rigorously. A combination of different approaches is needed to control COVID-19, and the review concludes that contact tracing apps have the potential to support that but they are not a panacea.
    This study is co-authored by researchers from the UCL Public Health Data Science Research Group, the Institute of Health Informatics, the Department of Applied Health Research, and the Collaborative Centre for Inclusion Health.
    *A systematic review carefully identifies all the relevant published and unpublished studies, rates them for quality and synthesises the findings across the studies identified.
    Study limitations
    As part of this systematic review, researchers did not find any epidemiological studies comparing automated to manual contact tracing systems and their effectiveness in identifying contacts. Other limitations include the lack of eligible empirical studies of fully-automated contact tracing and a paucity of evidence related to ethical concerns or cost-effectiveness.

  • A how-to guide for teaching GIS courses online with hardware or software in the cloud

    In a new paper this week, geographer Forrest Bowlick at the University of Massachusetts Amherst and colleagues at Texas A&M offer first-hand accounts of what is required for GIS instructors and IT administrators to set up virtual computing specifically for providing state-of-the-art geographic information systems (GIS) instruction.
    Bowlick says, “Our research is very applicable in the current remote learning era that we’re working through, because it provides expertly driven insight into how to set up a virtual computing environment in different modes: with hardware and with software in the cloud. While tailored to those needing GIS support, it is also very applicable for other high-performance software needs.”
    “By capturing the experiences of both setting up the system and of students using the system, we provide an important resource for others needing to make this investment of time, equipment and energy,” he adds. Such technical practice is becoming required for GIS and other instruction, he points out.
    Writing in the Journal of Geography in Higher Education, the authors compare an onsite server set-up and a virtualized cloud set-up scenario and report some student feedback on using a course taught this way. The growing need for fast computers, they point out, has made it harder for everyone to build the machines they need. “Our work talks about how to build fast computers in different ways and shares what we know about making fast computers for digital geography,” Bowlick notes.
    He says, “UMass is just one of several programs nationally, but regionally it’s very attractive, especially at the graduate level, because there are not that many in New England. Ours certainly started at the right time, too. With the turn toward using more computational skills and GIS practices, knowing how to use different computer constructs and programming languages is becoming a more fundamental need in education.”
    Bowlick has directed a one-year M.S. geography degree program with an emphasis in GIS at UMass Amherst since 2017. He says there may be 10 or 15 students from every college on campus with different majors in the introductory course in a given semester. They need to gain fundamentals of spatial thinking, operating software and problem solving applicable to the diverse interests that students bring to the course.
    Generally, these applications involve how to think through spatial problems on such topics as political geography, for example, which might ask who is voting and where, or on gerrymandering and how to discover it. Others are creating COVID-19 virus maps and spatial data to show its prevalence for spatial epidemiology and health geography, while others are modeling ecosystems for fish and wildlife.
    Bowlick explains that geographic information science is “a bridging science” — a suite of technologies, a way of thinking and a way to store spatial data including satellite systems for navigation. GIS handles imagery, computer mapping, spatial planning, modeling land cover over time, even helping businesses decide where to open their next location.
    GIS was first developed in the late 60s when the Canada Land Inventory needed ways to store, manage and analyze land resource maps over huge areas using new computer technology, Bowlick says. His two co-authors at Texas A&M, both experienced GIS instructors, are Dan Goldberg, an associate professor in geography, and Paul Stine, an IT system administrator for geography.
    The authors describe the setup, organization and execution of teaching an introductory WebGIS course while considering student experiences in such a course.
    The paper also defines an operational set of resource metrics needed to support the computing needs of students using virtual machines for server-based CyberGIS classes, as well as comparing the costs and components needed to build and support an on-premise private cloud teaching environment for a WebGIS course versus a comparable cloud-based service provider.
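    For readers planning a similar course, a minimal capacity-planning sketch follows; the per-student figures, concurrency and overhead factors are assumptions for illustration, not the resource metrics reported in the paper.

        # A minimal capacity-planning sketch under assumed per-student figures.
        from dataclasses import dataclass

        @dataclass
        class StudentVM:
            vcpus: int = 2        # assumed vCPUs per student VM
            ram_gb: int = 8       # assumed RAM for a desktop GIS + WebGIS stack
            disk_gb: int = 60     # assumed storage for OS, software and course data

        def class_requirements(n_students: int, vm: StudentVM = StudentVM(),
                               concurrency: float = 0.6, overhead: float = 1.25):
            """Estimate host resources for a server-based GIS course.

            concurrency: fraction of students logged in at peak (labs rarely hit 100%).
            overhead: headroom for the hypervisor, shared services and growth.
            """
            peak = max(1, round(n_students * concurrency))
            return {
                "peak_vms": peak,
                "vcpus": round(peak * vm.vcpus * overhead),
                "ram_gb": round(peak * vm.ram_gb * overhead),
                "disk_gb": n_students * vm.disk_gb,  # disk is allocated per enrolled student
            }

        print(class_requirements(n_students=30))
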

  • Creating meaningful change in cities takes decades, not years, and starts from the bottom

    Newly published research in Science Advances by University of Chicago researcher Luis Bettencourt proposes a new perspective on, and new models of, several known paradoxes of cities. Namely, if cities are engines of economic growth, why do poverty and inequality persist? If cities thrive on faster activity and more diversity, why are so many things so hard to change? And if growth and innovation are so important, how can urban planners and economists get away with describing cities with Groundhog Day-style models of equilibrium?
    Developing improved collective actions and policies, and creating more equitable, prosperous and environmentally sustainable pathways requires transcending these apparent paradoxes. The paper finds it critical that societies embrace and utilize the natural tensions of cities revealed by urban science in order to advance more holistic solutions.
    “To understand how cities can be simultaneously fast and slow, rich and poor, innovative and unstable, requires reframing our fundamental understanding of what cities are and how they work,” says Bettencourt. “There is plenty of room in cities to embody all this complexity, but to harness natural urban processes for good requires that we modify current thinking and action to include different scales and diverse kinds of people in interaction.”
    This is the goal of a new paper entitled “Urban Growth and the Emergent Statistics of Cities,” by Luis Bettencourt, the Inaugural Director of the Mansueto Institute for Urban Innovation and Professor of Ecology and Evolution at the University of Chicago. In the paper, Bettencourt develops a new set of mathematical models to describe cities along a sliding scale of processes of change, starting with individuals and deriving emergent properties of cities and nations as urban systems.
    At the heart of these models is a balancing act: humans must struggle to balance their budgets over time, including incomes and costs in units of money or energy. For most people, incomes and costs vary over time in unpredictable ways that are out of their full control. In cities — where we are all part of complicated webs of interdependence for jobs, services and many forms of collective action — these challenges gain new dimensions that require both individual and collective action. Accounting for these dynamics allows us to see how meaningful change at the levels of cities and nations can emerge from the aggregate daily hustle of millions of people, but also how all this struggle can fail to add up to much.
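    A toy simulation can make that intuition concrete; the sketch below is not Bettencourt's model, just an illustration of how many small, noisy individual budget changes aggregate into slow collective growth.

        # Toy agent-level illustration, NOT the paper's mathematical model.
        import numpy as np

        rng = np.random.default_rng(42)
        n_agents, n_periods = 10_000, 520          # e.g. 10 years of weekly budgets
        income_mean, cost_mean, noise = 1.00, 0.99, 0.20

        budgets = np.zeros(n_agents)
        for _ in range(n_periods):
            income = rng.normal(income_mean, noise, n_agents)
            cost = rng.normal(cost_mean, noise, n_agents)
            budgets += income - cost               # each agent's running surplus or deficit

        # The aggregate drifts upward only slowly, and noise can swamp it for long spells.
        print(f"Mean per-capita surplus after {n_periods} periods: {budgets.mean():.2f}")
        print(f"Share of agents still in deficit: {(budgets < 0).mean():.0%}")
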
    The paper shows that relative changes in the status of cities are exceedingly slow, tied to variations in their growth rates, which are now very small in high-income nations such as the U.S. This leads to the problem that the effects of innovation across cities are barely observable, taking place on the time scale of several decades — much slower than any mayoral term, which blunts the ability to judge positive from harmful policies.
    Of special importance is the negative effect of uncertainty — which tends to befall people in poverty, but also everyone during the current pandemic — on processes of innovation and growth. Another challenge is policies that optimize for aggregate growth (such as GDP), which the paper shows typically promote increasing inequality and social instability. In the paper, these ideas are tested using a long time series for 382 U.S. metropolitan areas spanning nearly five decades.
    “Growth and change accumulate through the compounding of many small changes in how we lead our daily lives, allocate our time and effort, and interact with each other, especially in cities. Helping more people be creative and gain agency, in part by reducing crippling uncertainties, is predicted to make all the difference between a society that can face difficulties and thrive or one that becomes caught up in endless struggles and eventually decay,” says Bettencourt.

    Story Source:
    Materials provided by University of Chicago. Note: Content may be edited for style and length.

  • Deep learning will help future Mars rovers go farther, faster, and do more science

    NASA’s Mars rovers have been one of the great scientific and space successes of the past two decades.
    Four generations of rovers have traversed the red planet gathering scientific data, sending back evocative photographs, and surviving incredibly harsh conditions — all using on-board computers less powerful than an iPhone 1. The latest rover, Perseverance, was launched on July 30, 2020, and engineers are already dreaming of a future generation of rovers.
    While a major achievement, these missions have only scratched the surface (literally and figuratively) of the planet and its geology, geography, and atmosphere.
    “The surface area of Mars is approximately the same as the total area of the land on Earth,” said Masahiro (Hiro) Ono, group lead of the Robotic Surface Mobility Group at the NASA Jet Propulsion Laboratory (JPL) — which has led all the Mars rover missions — and one of the researchers who developed the software that allows the current rover to operate.
    “Imagine, you’re an alien and you know almost nothing about Earth, and you land on seven or eight points on Earth and drive a few hundred kilometers. Does that alien species know enough about Earth?” Ono asked. “No. If we want to represent the huge diversity of Mars we’ll need more measurements on the ground, and the key is substantially extended distance, hopefully covering thousands of miles.”
    Travelling across Mars’ diverse, treacherous terrain with limited computing power and a restricted energy diet — only as much sun as the rover can capture and convert to power in a single Martian day, or sol — is a huge challenge.

    The first rover, Sojourner, covered 330 feet over 91 sols; the second, Spirit, travelled 4.8 miles in about five years; Opportunity, travelled 28 miles over 15 years; and Curiosity has travelled more than 12 miles since it landed in 2012.
    “Our team is working on Mars robot autonomy to make future rovers more intelligent, to enhance safety, to improve productivity, and in particular to drive faster and farther,” Ono said.
    NEW HARDWARE, NEW POSSIBILITIES
    The Perseverance rover, which launched this summer, computes using RAD 750s — radiation-hardened single board computers manufactured by BAE Systems Electronics.
    Future missions, however, would potentially use new high-performance, multi-core radiation hardened processors designed through the High Performance Spaceflight Computing (HPSC) project. (Qualcomm’s Snapdragon processor is also being tested for missions.) These chips will provide about one hundred times the computational capacity of current flight processors using the same amount of power.

    “All of the autonomy that you see on our latest Mars rover is largely human-in-the-loop” — meaning it requires human interaction to operate, according to Chris Mattmann, the deputy chief technology and innovation officer at JPL. “Part of the reason for that is the limits of the processors that are running on them. One of the core missions for these new chips is to do deep learning and machine learning, like we do terrestrially, on board. What are the killer apps given that new computing environment?”
    The Machine Learning-based Analytics for Autonomous Rover Systems (MAARS) program — which started three years ago and will conclude this year — encompasses a range of areas where artificial intelligence could be useful. The team presented results of the MAARS project at the IEEE Aerospace Conference in March 2020. The project was a finalist for the NASA Software Award.
    “Terrestrial high performance computing has enabled incredible breakthroughs in autonomous vehicle navigation, machine learning, and data analysis for Earth-based applications,” the team wrote in their IEEE paper. “The main roadblock to a Mars exploration rollout of such advances is that the best computers are on Earth, while the most valuable data is located on Mars.”
    Training machine learning models on the Maverick2 supercomputer at the Texas Advanced Computing Center (TACC), as well as on Amazon Web Services and JPL clusters, Ono, Mattmann and their team have been developing two novel capabilities for future Mars rovers, which they call Drive-By Science and Energy-Optimal Autonomous Navigation.
    ENERGY-OPTIMAL AUTONOMOUS NAVIGATION
    Ono was part of the team that wrote the on-board pathfinding software for Perseverance. Perseverance’s software includes some machine learning abilities, but the way it does pathfinding is still fairly naïve.
    “We’d like future rovers to have a human-like ability to see and understand terrain,” Ono said. “For rovers, energy is very important. There’s no paved highway on Mars. The drivability varies substantially based on the terrain — for instance beach versus bedrock. That is not currently considered. Coming up with a path with all of these constraints is complicated, but that’s the level of computation that we can handle with the HPSC or Snapdragon chips. But to do so we’re going to need to change the paradigm a little bit.”
    Ono explains that new paradigm as commanding by policy, a middle ground between the human-dictated: “Go from A to B and do C,” and the purely autonomous: “Go do science.”
    Commanding by policy involves pre-planning for a range of scenarios, and then allowing the rover to determine what conditions it is encountering and what it should do.
    “We use a supercomputer on the ground, where we have infinite computational resources like those at TACC, to develop a plan where a policy is: if X, then do this; if Y, then do that,” Ono explained. “We’ll basically make a huge to-do list and send gigabytes of data to the rover, compressing it in huge tables. Then we’ll use the increased power of the rover to de-compress the policy and execute it.”
    The pre-planned list is generated using machine learning-derived optimizations. The on-board chip can then use those plans to perform inference: taking the inputs from its environment and plugging them into the pre-trained model. The inference tasks are computationally much easier and can be computed on a chip like those that may accompany future rovers to Mars.
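    A minimal sketch of the commanding-by-policy idea follows, with invented terrain classes, battery states and actions: the expensive planning is assumed to have already happened on the ground, and the rover only performs a cheap table lookup on board.

        # Invented states and actions for illustration; not JPL's actual policy format.
        POLICY_TABLE = {
            # (terrain_class, battery_level) -> action, precomputed on a ground supercomputer
            ("bedrock", "high"): "drive_fast",
            ("bedrock", "low"): "drive_slow",
            ("sand",    "high"): "drive_slow",
            ("sand",    "low"): "stop_and_recharge",
            ("unknown", "high"): "image_and_wait",
            ("unknown", "low"): "stop_and_recharge",
        }

        def onboard_step(terrain_class: str, battery_level: str) -> str:
            """On-board 'inference': classify the situation, then look up the action.

            The expensive planning already happened on Earth; the rover only pays for
            the table lookup (plus whatever perception produced terrain_class)."""
            return POLICY_TABLE.get((terrain_class, battery_level), "stop_and_call_home")

        # Example: mid-drive the rover sees sand ahead and its battery is getting low.
        print(onboard_step("sand", "low"))   # -> stop_and_recharge
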
    “The rover has the flexibility of changing the plan on board instead of just sticking to a sequence of pre-planned options,” Ono said. “This is important in case something bad happens or it finds something interesting.”
    DRIVE-BY SCIENCE
    Current Mars missions typically use tens of images a sol from the rover to decide what to do the next day, according to Mattmann. “But what if in the future we could use one million image captions instead? That’s the core tenet of Drive-By Science,” he said. “If the rover can return text labels and captions that were scientifically validated, our mission team would have a lot more to go on.”
    Mattmann and the team adapted Google’s Show and Tell software — a neural image caption generator first launched in 2014 — for the rover missions, the first non-Google application of the technology.
    The algorithm takes in images and spits out human-readable captions. These include basic but critical information, like cardinality — how many rocks, how far away? — and properties like the vein structure in outcrops near bedrock. “The types of science knowledge that we currently use images for to decide what’s interesting,” Mattmann said.
    Over the past few years, planetary geologists have labeled and curated Mars-specific image annotations to train the model.
    “We use the one million captions to find 100 more important things,” Mattmann said. “Using search and information retrieval capabilities, we can prioritize targets. Humans are still in the loop, but they’re getting much more information and are able to search it a lot faster.”
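    The sketch below illustrates the Drive-By Science workflow in miniature, with a stub captioner standing in for the adapted Show and Tell network and a simple keyword score standing in for the team's search and retrieval tools; the image IDs and captions are invented.

        # Stub caption model and keyword ranking; illustrative only.
        from collections import Counter

        def caption_image(image_id: str) -> str:
            """Placeholder for an on-board neural captioner (returns canned text)."""
            fake_captions = {
                "img_001": "three rounded rocks near a sand ripple",
                "img_002": "layered outcrop with visible vein structure",
                "img_003": "flat bedrock with scattered pebbles",
            }
            return fake_captions.get(image_id, "featureless terrain")

        def prioritise(image_ids, keywords=("vein", "outcrop", "layered")):
            """Rank images by how many science keywords their captions contain."""
            scores = Counter()
            for image_id in image_ids:
                caption = caption_image(image_id)
                scores[image_id] = sum(word in caption for word in keywords)
            return scores.most_common()

        print(prioritise(["img_001", "img_002", "img_003"]))
        # img_002 ranks first: its caption mentions a layered outcrop and vein structure.
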
    Results of the team’s work appear in the September 2020 issue of Planetary and Space Science.
    TACC’s supercomputers proved instrumental in helping the JPL team test the system. On Maverick2, the team trained, validated, and improved their model using 6,700 labels created by experts.
    The ability to travel much farther will be a necessity for future Mars rovers. An example is the proposed Sample Fetch Rover, to be developed by the European Space Agency and launched in the late 2020s, whose main task will be to retrieve the samples dug up by the Mars 2020 rover and carry them to a rendezvous site.
    “Those rovers in a period of years would have to drive 10 times further than previous rovers to collect all the samples and to get them to a rendezvous site,” Mattmann said. “We’ll need to be smarter about the way we drive and use energy.”
    Before the new models and algorithms are loaded onto a rover destined for space, they are tested on a dirt training ground next to JPL that serves as an Earth-based analogue for the surface of Mars.
    The team developed a demonstration that shows an overhead map and streaming images collected by the rover, with the algorithms running live on board to perform terrain classification and captioning. They had hoped to finish testing the new system this spring, but COVID-19 shuttered the lab and delayed testing.
    In the meantime, Ono and his team developed a citizen science app, AI4Mars, that allows the public to annotate more than 20,000 images taken by the Curiosity rover. These will be used to further train machine learning algorithms to identify and avoid hazardous terrains.
    The public have generated 170,000 labels so far in less than three months. “People are excited. It’s an opportunity for people to help,” Ono said. “The labels that people create will help us make the rover safer.”
    The efforts to develop a new AI-based paradigm for future autonomous missions can be applied not just to rovers but to any autonomous space mission, from orbiters to fly-bys to interstellar probes, Ono says.
    “The combination of more powerful on-board computing power, pre-planned commands computed on high performance computers like those at TACC, and new algorithms has the potential to allow future rovers to travel much further and do more science.”

  • Understanding the inner workings of the human heart

    Researchers have investigated the function of a complex mesh of muscle fibers that line the inner surface of the heart. The study, published in the journal Nature, sheds light on questions asked by Leonardo da Vinci 500 years ago, and shows how the shape of these muscles impacts heart performance and heart failure.
    In humans, the heart is the first functional organ to develop and starts beating spontaneously only four weeks after conception. Early in development, the heart grows an intricate network of muscle fibers — called trabeculae — that form geometric patterns on the heart’s inner surface. These are thought to help oxygenate the developing heart, but their function in adults has remained an unsolved puzzle since the 16th century.
    “Our work significantly advanced our understanding of the importance of myocardial trabeculae,” explains Hannah Meyer, a Cold Spring Harbor Laboratory Fellow. “Perhaps even more importantly, we also showed the value of a truly multidisciplinary team of researchers. Only the combination of genetics, clinical research, and bioengineering led us to discover the unexpected role of myocardial trabeculae in the function of the adult heart.”
    To understand the roles and development of trabeculae, an international team of researchers used artificial intelligence to analyse 25,000 magnetic resonance imaging (MRI) scans of the heart, along with associated heart morphology and genetic data. The study reveals how trabeculae work and develop, and how their shape can influence heart disease. UK Biobank has made the study data openly available.
    Leonardo da Vinci was the first to sketch trabeculae and their snowflake-like fractal patterns in the 16th century. He speculated that they warm the blood as it flows through the heart, but their true importance has not been recognized until now.
    “Our findings answer very old questions in basic human biology. As large-scale genetic analyses and artificial intelligence progress, we’re rebooting our understanding of physiology to an unprecedented scale,” says Ewan Birney, deputy director general of EMBL.
    The research suggests that the rough surface of the heart ventricles allows blood to flow more efficiently during each heartbeat, just like the dimples on a golf ball reduce air resistance and help the ball travel further.
    The study also highlights six regions in human DNA that affect how the fractal patterns in these muscle fibers develop. Intriguingly, the researchers found that two of these regions also regulate branching of nerve cells, suggesting a similar mechanism may be at work in the developing brain.
    The researchers discovered that the shape of trabeculae affects the performance of the heart, suggesting a potential link to heart disease. To confirm this, they analyzed genetic data from 50,000 patients and found that different fractal patterns in these muscle fibers affected the risk of developing heart failure. Nearly five million Americans suffer from congestive heart failure.
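    The press release does not detail how the fractal patterns were quantified, but a common generic approach is a box-counting estimate of fractal dimension on a segmented image, sketched below with a toy random pattern in place of real MRI data.

        # Generic box-counting fractal dimension estimate; not the study's pipeline.
        import numpy as np

        def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
            """Estimate the fractal (box-counting) dimension of a 2-D binary image."""
            counts = []
            for s in sizes:
                # Trim the image so it tiles evenly into s x s boxes.
                h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
                trimmed = img[:h, :w]
                # Count boxes that contain at least one foreground pixel.
                boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
                counts.append(boxes.sum())
            # Slope of log(count) vs log(1/size) approximates the fractal dimension.
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        # Toy example: a random speckle pattern stands in for a segmented
        # trabecular boundary from an MRI slice.
        rng = np.random.default_rng(0)
        demo = rng.random((256, 256)) > 0.7
        print(f"Estimated box-counting dimension: {box_count_dimension(demo):.2f}")
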
    Further research on trabeculae may help scientists better understand how common heart diseases develop and explore new approaches to treatment.
    “Leonardo da Vinci sketched these intricate muscles inside the heart 500 years ago, and it’s only now that we’re beginning to understand how important they are to human health. This work offers an exciting new direction for research into heart failure,” says Declan O’Regan, clinical scientist and consultant radiologist at the MRC London Institute of Medical Sciences. This project included collaborators at Cold Spring Harbor Laboratory, EMBL’s European Bioinformatics Institute (EMBL-EBI), the MRC London Institute of Medical Sciences, Heidelberg University, and the Politecnico di Milano.

    Story Source:
    Materials provided by Cold Spring Harbor Laboratory. Note: Content may be edited for style and length.

  • Digital contact tracing alone may not be miracle answer for COVID-19

    In infectious disease outbreaks, digital contact tracing alone could reduce the number of cases, but not as much as manual contact tracing, new University of Otago-led research published in the Cochrane Library reveals.
    Senior Research Fellow in the Department of Preventive and Social Medicine, Dr Andrew Anglemyer, led this systematic review of the effectiveness of digital technologies for identifying contacts of an identified positive case of an infectious disease, in order to isolate them and reduce further transmission of the disease.
    The team of researchers summarised the findings of six observational studies from outbreaks of different infectious diseases in Sierra Leone, Botswana and the USA and six studies that simulated the spread of diseases in an epidemic with mathematical models.
    The results of the review suggest the need for caution by health authorities relying heavily on digital contact tracing systems.
    “Digital technologies, combined with other public health interventions, may help to prevent the spread of infectious diseases but the technology is largely unproven in real-world, outbreak settings,” Dr Anglemyer says.
    “Modelling studies provide low certainty of evidence of a reduction in cases, and this only occurred when digital contact tracing solutions were used together with other public health measures such as self-isolation,” he says.

    “However, limited evidence shows that the technology itself may produce more reliable counts of contacts.”
    Overall, the team of researchers from New Zealand, the USA, the UK and Australia conclude there is a place for digital technologies in contact tracing.
    “The findings of our review suggest that to prevent the spread of infectious diseases, governments should consider digital technologies as a way to improve current contact tracing methods, not to replace them,” the researchers state.
    “In the real world, they won’t be pitted against each other, the technology would hopefully just augment the current contact tracing methods in a given country.”
    They recommend governments consider issues of privacy and equity when choosing digital contact tracing systems.

    “If governments implement digital contact tracing technologies, they should ensure that at-risk populations are not disadvantaged and they need to take privacy concerns into account.
    “The COVID-19 pandemic is disproportionately affecting ethnic minorities, the elderly and people living in high deprivation. These health inequities could be magnified with the introduction of digital solutions that do not consider these at-risk populations, who are likely to have poor access to smartphones with full connectivity.”
    Contact tracing teams in the studies reviewed reported that digital data entry and management systems were faster to use than paper systems for recording of new contacts and monitoring of known contacts and possibly less prone to data loss.
    But the researchers conclude there is “very low certainty evidence” that contact tracing apps could make a substantial impact on the spread of COVID-19, while issues of low adoption, technological variation and health equity persist.
    Accessibility, privacy and safety concerns were identified in some of the studies. Problems with system access included patchy network coverage, lack of data, technical problems with hardware or software that could not be resolved by local technical teams, and higher staff training needs, including the need for refresher training. Staff also noted concerns around accessibility and logistical issues in administering the systems, particularly in marginalised or under-developed areas of the world.
    The research, published today in the Cochrane Library, a collection of high-quality, independent evidence to inform healthcare decision-making, has been carried out as the COVID-19 pandemic shows no signs of waning and as the World Health Organization and more than 30 countries explore how digital technology solutions could help stop the spread of the virus.
    Senior Research Fellow Tim Chambers from the University of Otago, Wellington, and Associate Professor Matthew Parry from the Department of Statistics were also co-authors of the paper.

  • Portrait of a virus

    More than a decade ago, electronic medical records were all the rage, promising to transform health care and help guide clinical decisions and public health response.
    With the arrival of COVID-19, researchers quickly realized that electronic medical records (EMRs) had not lived up to their full potential — largely due to widespread decentralization of records and clinical systems that cannot “talk” to one another.
    Now, in an effort to circumvent these impediments, an international group of researchers has successfully created a centralized medical records repository that, in addition to rapid data collection, can perform data analysis and visualization.
    The platform, described Aug. 19 in npj Digital Medicine, contains data from 96 hospitals in five countries and has yielded intriguing, albeit preliminary, clinical clues about how the disease presents, evolves and affects different organ systems across different categories of patients with COVID-19.
    For now, the platform represents more of a proof-of-concept than a fully evolved tool, the research team cautions, adding that the initial observations enabled by the data raise more questions than they answer.
    However, as data collection grows and more institutions begin to contribute such information, the utility of the platform will evolve accordingly, the team said.

    “COVID-19 caught the world off guard and has exposed important deficiencies in our ability to use electronic medical records to glean telltale insights that could inform response during a shapeshifting pandemic,” said Isaac Kohane, senior author on the research and chair of the Department of Biomedical Informatics in the Blavatnik Institute at Harvard Medical School. “The new platform we have created shows that we can, in fact, overcome some of these challenges and rapidly collect critical data that can help us confront the disease at the bedside and beyond.”
    In its report, the Harvard Medical School-led multi-institutional research team provides insights from early analysis of records from 27,584 patients and 187,802 lab tests collected in the early days of the epidemic, from Jan. 1 to April 11. The data came from 96 hospitals in the United States, France, Italy, Germany and Singapore, as part of the 4CE Consortium, an international research repository of electronic medical records used to inform studies of the COVID-19 pandemic.
    “Our work demonstrates that hospital systems can organize quickly to collaborate across borders, languages and different coding systems,” said study first author Gabriel Brat, HMS assistant professor of surgery at Beth Israel Deaconess Medical Center and a member of the Department of Biomedical Informatics. “I hope that our ongoing efforts to generate insights about COVID-19 and improve treatment will encourage others from around the world to join in and share data.”
    The new platform underscores the value of such agile analytics in the rapid generation of knowledge, particularly during a pandemic that places extra urgency on answering key questions, but such tools must also be approached with caution and be subject to scientific rigor, according to an accompanying editorial penned by leading experts in biomedical data science.
    “The bar for this work needs to be set high, but we must also be able to move quickly. Examples such as the 4CE Collaborative show that both can be achieved,” writes Harlan Krumholz, senior author on the accompanying editorial and professor of medicine and cardiology and director of the Center for Outcomes Research and Evaluation at Yale-New Haven Hospital.

    What kind of intel can EMRs provide?
    In a pandemic, particularly one involving a new pathogen, rapid assessment of clinical records can provide information not only about the rate of new infections and the prevalence of disease, but also about key clinical features that can portend good or bad outcomes, disease severity and the need for further testing or certain interventions.
    These data can also yield clues about differences in disease course across various demographic groups and indicative fluctuations in biomarkers associated with the function of the heart, kidney, liver, immune system and more. Such insights are especially critical in the early weeks and months after a novel disease emerges, when public health experts, physicians and policymakers are flying blind.
    Such data could prove critical later: indicative patterns can tell researchers how to design clinical trials to better understand the underlying drivers that influence observed outcomes. For example, if records show consistent changes in the footprints of a protein that heralds aberrant blood clotting, researchers can choose to focus their monitoring and treatments on organ systems whose dysfunction is associated with these abnormalities, or on organs that could be damaged by clots, notably the brain, heart and lungs.
    The analysis of the data collected in March demonstrates that it is possible to quickly create a clinical sketch of the disease that can later be filled in as more granular details emerge, the researchers said.
    In the current study, researchers tracked the following data:
    • Total number of COVID-19 patients
    • Number of intensive care unit admissions and discharges
    • Seven-day average of new cases per 100,000 people by country (a sample calculation follows this list)
    • Daily death toll
    • Demographic breakdown of patients
    • Laboratory tests to assess cardiac, immune, kidney and liver function; red and white blood cell counts; inflammatory markers such as C-reactive protein; and two proteins related to blood clotting (D-dimer) and cardiac muscle injury (troponin)
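    For illustration, the pandas sketch below computes the seven-day average of new cases per 100,000 people referenced in the list above, using made-up daily counts and an assumed national population rather than consortium data.

        # Made-up daily counts and population; illustrative only.
        import pandas as pd

        daily = pd.DataFrame({
            "date": pd.date_range("2020-03-01", periods=14, freq="D"),
            "new_cases": [12, 18, 25, 31, 40, 52, 61, 75, 90, 104, 120, 141, 160, 185],
        })
        population = 5_700_000   # assumed national population

        daily["cases_per_100k"] = daily["new_cases"] / population * 100_000
        daily["avg_7d_per_100k"] = daily["cases_per_100k"].rolling(window=7).mean()

        print(daily[["date", "avg_7d_per_100k"]].tail())
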
    Telltale patterns
    The report’s observations included:
    • Demographic analyses by country showed variations in the age of hospitalized patients, with Italy having the largest proportion of elderly patients (over 70 years) diagnosed with COVID-19.
    • At initial presentation to the hospital, patients showed remarkable consistency in lab tests measuring cardiac, immune, blood-clotting, and kidney and liver function.
    • On day one of admission, most patients had relatively moderate disease as measured by lab tests, with initial results showing moderate abnormalities but no indication of organ failure.
    • Major abnormalities were evident on day one of diagnosis for C-reactive protein — a measure of inflammation — and D-dimer, a protein associated with blood clotting, with test results progressively worsening in patients who went on to develop more severe disease or died.
    • Levels of bilirubin, a marker of liver function, were initially normal across hospitals but worsened among persistently hospitalized patients, suggesting that most patients did not have liver impairment on initial presentation.
    • Creatinine levels — which measure how well the kidneys are filtering waste — showed wide variations across hospitals, a finding that may reflect cross-country differences in testing, in the use of fluids to manage kidney function, or in the timing of patient presentation at various stages of the disease.
    • On average, white blood cell counts — a measure of immune response — were within normal ranges for most patients but showed elevations among those who had severe disease and remained hospitalized longer.
    Even though the findings of the report are observations and cannot be used to draw conclusions, the trends they point to could provide a foundation for more focused and in-depth studies that get to the root of these observations, the team said.
    “It’s clear that amid an emerging pathogen, uncertainty far outstrips knowledge,” Kohane said. “Our efforts establish a framework to monitor the trajectory of COVID-19 across different categories of patients and help us understand response to different clinical interventions.”
    Co-investigators included Griffin Weber, Nils Gehlenborg, Paul Avillach, Nathan Palmer, Luca Chiovato, James Cimino, Lemuel Waitman, Gilbert Omenn, Alberto Malovini; Jason Moore, Brett Beaulieu-Jones; Valentina Tibollo; Shawn Murphy; Sehi L’Yi; Mark Keller; Riccardo Bellazzi; David Hanauer; Arnaud Serret-Larmande; Alba Gutierrez-Sacristan; John Holmes; Douglas Bell; Kenneth Mandl; Robert Follett; Jeffrey Klann; Douglas Murad; Luigia Scudeller; Mauro Bucalo; Katie Kirchoff; Jean Craig; Jihad Obeid; Vianney Jouhet; Romain Griffier; Sebastien Cossin; Bertrand Moal; Lav Patel; Antonio Bellasi; Hans Prokosch; Detlef Kraska; Piotr Sliz; Amelia Tan; Kee Yuan Ngiam; Alberto Zambelli; Danielle Mowery; Emily Schiver; Batsal Devkota; Robert Bradford; Mohamad Daniar; Christel Daniel; Vincent Benoit; Romain Bey; Nicolas Paris; Patricia Serre; Nina Orlova; Julien Dubiel; Martin Hilka; Anne Sophie Jannot; Stephane Breant; Judith Leblanc; Nicolas Griffon; Anita Burgun; Melodie Bernaux; Arnaud Sandrin; Elisa Salamanca; Sylvie Cormont; Thomas Ganslandt; Tobias Gradinger; Julien Champ; Martin Boeker; Patricia Martel; Loic Esteve; Alexandre Gramfort; Olivier Grisel; Damien Leprovost; Thomas Moreau; Gael Varoquaux; Jill-Jênn Vie; Demian Wassermann; Arthur Mensch; Charlotte Caucheteux; Christian Haverkamp; Guillaume Lemaitre; Silvano Bosari, Ian Krantz; Andrew South; Tianxi Cai.
    Relevant disclosures:
    Co-authors Riccardo Bellazzi of the University of Pavia and Arthur Mensch, of PSL University, are shareholders in Biomeris, a biomedical data analysis company.