More stories

  • Predicting computational power of early quantum computers

    Quantum physicists at the University of Sussex have created an algorithm that speeds up calculations in the early quantum computers currently being developed. The algorithm provides a new way to route the ions, or charged atoms, around the quantum computer to boost the efficiency of the calculations.
    The Sussex team have shown how calculations in such a quantum computer can be done most efficiently, by using their new ‘routing algorithm’. Their paper “Efficient Qubit Routing for a Globally Connected Trapped Ion Quantum Computer” is published in the journal Advanced Quantum Technologies.
    The team working on this project was led by Professor Winfried Hensinger and included Mark Webber, Dr Steven Herbert and Dr Sebastian Weidt. The scientists have created a new algorithm which regulates traffic within the quantum computer just like managing traffic in a busy city. In the trapped ion design the qubits can be physically transported over long distances, so they can easily interact with other qubits. Their new algorithm means that data can flow through the quantum computer without any ‘traffic jams’. This in turn gives rise to a more powerful quantum computer.
    Quantum computers are expected to be able to solve problems that are too complex for classical computers. Quantum computers use quantum bits (qubits) to process information in a new and powerful way. The particular quantum computer architecture the team analysed first is a ‘trapped ion’ quantum computer, consisting of silicon microchips with individual charged atoms, or ions, levitating above the surface of the chip. These ions are used to store data, where each ion holds one quantum bit of information. Executing calculations on such a quantum computer involves moving around ions, similar to playing a game of Pacman, and the faster and more efficiently the data (the ions) can be moved around, the more powerful the quantum computer will be.
    In the global race to build a large scale quantum computer there are two leading methods, ‘superconducting’ devices which groups such as IBM and Google focus on, and ‘trapped ion’ devices which are used by the University of Sussex’s Ion Quantum Technology group, and the newly emerged company Universal Quantum, among others.
    Superconducting quantum computers have stationary qubits which are typically only able to interact with qubits that are immediately next to each other. Calculations involving distant qubits are done by communicating through a chain of adjacent qubits, a process similar to the telephone game (also referred to as ‘Chinese Whispers’), where information is whispered from one person to another along a line of people. In the same way as in the telephone game, the information tends to get more corrupted the longer the chain is. Indeed, the researchers found that this process will limit the computational power of superconducting quantum computers.
    In contrast, by deploying their new routing algorithm for their trapped ion architecture, the Sussex scientists have discovered that their quantum computing approach can achieve an impressive level of computational power. ‘Quantum Volume’ is a new benchmark which is being used to compare the computational power of near term quantum computers. They were able to use Quantum Volume to compare their architecture against a model for superconducting qubits, where they assumed similar levels of errors for both approaches. They found that the trapped-ion approach performed consistently better than the superconducting qubit approach, because their routing algorithm essentially allows qubits to directly interact with many more qubits, which in turn gives rise to a higher expected computational power.
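    A toy calculation (not from the paper) makes the connectivity argument concrete: if every two-qubit operation succeeds with some probability, then routing an interaction through a chain of neighbouring qubits multiplies that probability once per hop, while a directly connected or physically shuttled pair pays the cost only once. The per-gate fidelity below is an assumed, illustrative number.
    ```python
    # Toy illustration (not from the paper): why long nearest-neighbour chains hurt.
    # Assume each two-qubit operation succeeds with probability p. Interacting two
    # qubits that sit d hops apart on a fixed grid needs roughly d extra SWAP-like
    # operations, so the end-to-end fidelity decays as p ** (d + 1).

    def chain_fidelity(p: float, distance: int) -> float:
        """Fidelity of one long-range interaction routed through `distance` hops."""
        return p ** (distance + 1)

    if __name__ == "__main__":
        p = 0.99  # assumed per-gate fidelity, purely illustrative
        for d in (0, 2, 5, 10):
            print(f"distance {d:2d}: fidelity ~ {chain_fidelity(p, d):.3f}")
        # A directly connected (or physically shuttled) ion pair corresponds to
        # d = 0, which is the advantage the routing algorithm tries to preserve.
    ```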
    Mark Webber, a doctoral researcher in the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “We can now predict the computational power of the quantum computers we are constructing. Our study indicates a fundamental advantage for trapped ion devices, and the new routing algorithm will allow us to maximize the performance of early quantum computers.”
    Professor Hensinger, director of the Sussex Centre for Quantum Technologies at the University of Sussex said:
    “Indeed, this work is yet another stepping stone towards building practical quantum computers that can solve real world problems.”
    Professor Winfried Hensinger and Dr Sebastian Weidt have recently launched their spin-out company Universal Quantum which aims to build the world’s first large scale quantum computer. It has attracted backing from some of the world’s most powerful tech investors. The team was the first to publish a blue-print for how to build a large scale trapped ion quantum computer in 2017.

    Story Source:
    Materials provided by University of Sussex. Original written by Anna Ford. Note: Content may be edited for style and length.

  • Machine learning peeks into nano-aquariums

    In the nanoworld, tiny particles such as proteins appear to dance as they transform and assemble to perform various tasks while suspended in a liquid. Recently developed methods have made it possible to watch and record these otherwise-elusive tiny motions, and researchers now take a step forward by developing a machine learning workflow to streamline the process.
    The new study, led by Qian Chen, a professor of materials science and engineering at the University of Illinois, Urbana-Champaign, builds upon her past work with liquid-phase electron microscopy and is published in the journal ACS Central Science.
    Being able to see — and record — the motions of nanoparticles is essential for understanding a variety of engineering challenges. Liquid-phase electron microscopy, which allows researchers to watch nanoparticles interact inside tiny aquariumlike sample containers, is useful for research in medicine, energy and environmental sustainability and in fabrication of metamaterials, to name a few. However, it is difficult to interpret the dataset, the researchers said. The video files produced are large, filled with temporal and spatial information, and are noisy due to background signals — in other words, they require a lot of tedious image processing and analysis.
    “Developing a method even to see these particles was a huge challenge,” Chen said. “Figuring out how to efficiently get the useful data pieces from a sea of outliers and noise has become the new challenge.”
    To confront this problem, the team developed a machine learning workflow based on an artificial neural network that mimics, in part, the learning capacity of the human brain. The program builds on an existing neural network, known as U-Net, that does not require handcrafted features or predetermined input and has yielded significant breakthroughs in identifying irregular cellular features using other types of microscopy, the study reports.
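    For readers unfamiliar with U-Net, the sketch below shows the general shape of such an encoder-decoder in PyTorch: convolutions that shrink the image while extracting features, upsampling with a skip connection back to full resolution, and a per-pixel prediction at the end. It is a minimal illustration of the architecture family, not the authors' released code (which is in the paper's supplementary material); the layer sizes, channel counts and single-channel input are assumptions.
    ```python
    # Minimal U-Net-style encoder-decoder for per-pixel particle segmentation.
    import torch
    import torch.nn as nn

    def double_conv(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1 = double_conv(1, 16)
            self.enc2 = double_conv(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = double_conv(32, 16)   # 16 skip channels + 16 upsampled
            self.head = nn.Conv2d(16, 1, 1)   # per-pixel particle logit

        def forward(self, x):
            s1 = self.enc1(x)                 # kept as the skip connection
            s2 = self.enc2(self.pool(s1))
            u = self.up(s2)
            return self.head(self.dec1(torch.cat([u, s1], dim=1)))

    # Usage: one grayscale microscopy frame in, a mask of particle logits out.
    frame = torch.randn(1, 1, 128, 128)
    print(TinyUNet()(frame).shape)  # torch.Size([1, 1, 128, 128])
    ```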
    “Our new program processed information for three types of nanoscale dynamics including motion, chemical reaction and self-assembly of nanoparticles,” said lead author and graduate student Lehan Yao. “These represent the scenarios and challenges we have encountered in the analysis of liquid-phase electron microscopy videos.”
    The researchers collected measurements from approximately 300,000 pairs of interacting nanoparticles, the study reports.
    As found in past studies by Chen’s group, contrast continues to be a problem while imaging certain types of nanoparticles. In their experimental work, the team used particles made out of gold, which is easy to see with an electron microscope. However, particles with lower elemental or molecular weights like proteins, plastic polymers and other organic nanoparticles show very low contrast when viewed under an electron beam, Chen said.
    “Biological applications, like the search for vaccines and drugs, underscore the urgency in our push to have our technique available for imaging biomolecules,” she said. “There are critical nanoscale interactions between viruses and our immune systems, between the drugs and the immune system, and between the drug and the virus itself that must be understood. The fact that our new processing method allows us to extract information from samples as demonstrated here gets us ready for the next step of application and model systems.”
    The team has made the source code for the machine learning program used in this study publicly available through the supplemental information section of the new paper. “We feel that making the code available to other researchers can benefit the whole nanomaterials research community,” Chen said.
    See liquid-phase electron microscopy combined with machine learning in action: https://www.youtube.com/watch?v=0NESPF8Rwsc

  • Electronic alert reduces excessive prescribing of short-acting asthma relievers

    An automatic, electronic alert on general practitioners’ (GPs) computer screens can help to prevent excessive prescribing of short-acting asthma reliever medication, according to research presented at the ‘virtual’ European Respiratory Society International Congress.
    The alert pops up when GPs open the medical records for a patient who has been issued with three prescriptions for short-acting reliever inhalers, such as salbutamol, within a three-month period. It suggests the patient should have an asthma review to assess symptoms and improve asthma control. Short-acting beta2-agonists (SABAs), usually described as blue inhalers, afford short-term relief of asthma symptoms by expanding the airways, but do not deal with the underlying inflammatory cause.
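    The trigger described above reduces to a simple rule over prescription dates. The sketch below illustrates that rule under stated assumptions (a 90-day window standing in for "three months", and a plain list of issue dates rather than the EMIS record format); it is not the actual EMIS alert code.
    ```python
    # Hypothetical re-statement of the alert rule: flag a patient when three or
    # more SABA prescriptions fall within any rolling 90-day window.
    from datetime import date, timedelta

    def saba_alert(prescription_dates: list[date], window_days: int = 90) -> bool:
        """Return True if three or more prescriptions occur within one window."""
        dates = sorted(prescription_dates)
        for i in range(len(dates) - 2):
            if dates[i + 2] - dates[i] <= timedelta(days=window_days):
                return True
        return False

    print(saba_alert([date(2020, 1, 5), date(2020, 2, 1), date(2020, 3, 20)]))  # True
    print(saba_alert([date(2020, 1, 5), date(2020, 4, 1), date(2020, 8, 20)]))  # False
    ```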
    “Excessive use of reliever inhalers such as salbutamol is an indicator of poorly controlled asthma and a risk factor for asthma attacks. It has also been implicated in asthma-related deaths. Yet, despite national and international asthma guidelines, excessive prescribing of short-acting beta2-agonists persists,” said Dr Shauna McKibben, an honorary research fellow at the Institute of Population Health Sciences, Queen Mary University of London (QMUL), UK, and clinical nurse specialist in asthma and allergy at Imperial College Healthcare NHS Trust, London, who led the research. “This research aimed to identify and target excessive SABA prescribing using an electronic alert in GPs’ computer systems to identify at-risk patients, change prescribing behaviour and improve asthma management.”
    The study of 18,244 asthma patients in 132 general practices in north-east London found a 6% reduction in excessive prescribing of reliever inhalers in the 12 months after the alert first appeared on patients’ records. In addition, asthma reviews increased by 12% within three months of the alert; within six months, repeat prescribing of SABAs fell by 5% and asthma exacerbations requiring treatment with oral steroids fell by 8%.
    The alert to identify excessive SABA prescribing was introduced in 2015 on GPs’ computer systems that used EMIS clinical software. At the time of the research EMIS was used by almost all general practices in north-east London, and 56% of English practices used it by 2017.
    Dr McKibben analysed data on SABA prescribing for patients in all practices in the north-east London boroughs of City and Hackney, Tower Hamlets and Newham between 2015 and 2016. She compared these with excessive SABA prescribing between 2013 and 2014, before the alert was introduced.

    She said: “The most important finding is the small but potentially clinically significant reduction in SABA prescribing in the 12 months after the alert. This, combined with the other results, suggests that the alert prompts a review of patients who may have poor asthma control. An asthma review facilitates the assessment of SABA use and is an important opportunity to improve asthma management.”
    Dr McKibben also asked a sample of GPs, receptionists and nurses in general practice about their thoughts on the alert.
    “The alert was viewed as a catalyst for asthma review; however, the provision of timely review was challenging and response to the alert was dependent on local practice resources and clinical priorities,” she said.
    A limitation of the research was that the alert assumed that only one SABA inhaler was issued per prescription, when often two at a time may be issued. “Therefore, excessive SABA prescribing and the subsequent reduction in prescribing following the alert may be underestimated,” said Dr McKibben.
    She continued: “Excessive SABA use is only one indicator for poor asthma control but the risks are not well understood by patients and are often overlooked by healthcare professionals. Further research into the development and robust evaluation of tools to support primary care staff in the management of people with asthma is essential to improve asthma control and reduce hospital admissions.”
    The study’s findings are now being used to support and inform the REAL-HEALTH Respiratory initiative, a Barts Charity-funded three-year programme with the clinical effectiveness group at QMUL. The initiative provides general practices with EMIS IT tools to support the identification of patients with high-risk asthma. This includes an electronic alert for excessive SABA prescribing and an asthma prescribing tool to identify patients with poor asthma control who may be at risk of hospital admission.
    Daiana Stolz, who was not involved in the research, is the European Respiratory Society Education Council Chair and Professor of Respiratory Medicine and a leading physician at the University Hospital Basel, Switzerland. She said: “This study shows how a relatively simple intervention, an electronic alert popping up on GPs’ computers when they open a patient’s records, can prompt a review of asthma medication and can lead to a reduction in excessive prescribing of short-acting asthma relievers and better asthma control. However, the fact that general practices often struggled to provide a timely asthma review in a period before the COVID-19 pandemic suggests that far more resources need to be made available to primary care, particularly in this pandemic period.”

  • 'Selfies' could be used to detect heart disease

    Sending a “selfie” to the doctor could be a cheap and simple way of detecting heart disease, according to the authors of a new study published today (Friday) in the European Heart Journal.
    The study is the first to show that it’s possible to use a deep learning computer algorithm to detect coronary artery disease (CAD) by analysing four photographs of a person’s face.
    Although the algorithm needs to be developed further and tested in larger groups of people from different ethnic backgrounds, the researchers say it has the potential to be used as a screening tool that could identify possible heart disease in people in the general population or in high-risk groups, who could be referred for further clinical investigations.
    “To our knowledge, this is the first work demonstrating that artificial intelligence can be used to analyse faces to detect heart disease. It is a step towards the development of a deep learning-based tool that could be used to assess the risk of heart disease, either in outpatient clinics or by means of patients taking ‘selfies’ to perform their own screening. This could guide further diagnostic testing or a clinical visit,” said Professor Zhe Zheng, who led the research and is vice director of the National Center for Cardiovascular Diseases and vice president of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China.
    He continued: “Our ultimate goal is to develop a self-reported application for high risk communities to assess heart disease risk in advance of visiting a clinic. This could be a cheap, simple and effective way of identifying patients who need further investigation. However, the algorithm requires further refinement and external validation in other populations and ethnicities.”
    It is known already that certain facial features are associated with an increased risk of heart disease. These include thinning or grey hair, wrinkles, ear lobe crease, xanthelasmata (small, yellow deposits of cholesterol underneath the skin, usually around the eyelids) and arcus corneae (fat and cholesterol deposits that appear as a hazy white, grey or blue opaque ring in the outer edges of the cornea). However, they are difficult for humans to use successfully to predict and quantify heart disease risk.

    Prof. Zheng, Professor Xiang-Yang Ji, who is director of the Brain and Cognition Institute in the Department of Automation at Tsinghua University, Beijing, and other colleagues enrolled 5,796 patients from eight hospitals in China in the study between July 2017 and March 2019. The patients were undergoing imaging procedures to investigate their blood vessels, such as coronary angiography or coronary computed tomography angiography (CCTA). They were divided randomly into training (5,216 patients, 90%) and validation (580 patients, 10%) groups.
    Trained research nurses took four facial photos with digital cameras: one frontal, two profiles and one view of the top of the head. They also interviewed the patients to collect data on socioeconomic status, lifestyle and medical history. Radiologists reviewed the patients’ angiograms and assessed the degree of heart disease depending on how many blood vessels were narrowed by 50% or more (≥ 50% stenosis), and their location. This information was used to create, train and validate the deep learning algorithm.
    The researchers then tested the algorithm on a further 1,013 patients from nine hospitals in China, enrolled between April 2019 and July 2019. The majority of patients in all the groups were of Han Chinese ethnicity.
    They found that the algorithm outperformed existing methods of predicting heart disease risk (the Diamond-Forrester model and the CAD consortium clinical score). In the validation group of patients, the algorithm correctly detected heart disease in 80% of cases (the true positive rate or ‘sensitivity’) and correctly ruled out heart disease in 61% of cases (the true negative rate or ‘specificity’). In the test group, the sensitivity was 80% and the specificity was 54%.
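    As a reminder of how those percentages are defined, the short sketch below computes sensitivity and specificity from illustrative counts chosen to echo the reported test-group figures; it is not the study's data or code.
    ```python
    # Sensitivity and specificity from a simple confusion matrix (toy counts).

    def sensitivity(tp: int, fn: int) -> float:
        """True positive rate: disease detected among those who have it."""
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        """True negative rate: disease correctly ruled out among those without it."""
        return tn / (tn + fp)

    # e.g. 80 of 100 diseased patients flagged, 54 of 100 healthy patients cleared
    print(sensitivity(tp=80, fn=20))  # 0.80 -> "80% sensitivity"
    print(specificity(tn=54, fp=46))  # 0.54 -> "54% specificity", i.e. 46% false positives
    ```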
    Prof. Ji said: “The algorithm had a moderate performance, and additional clinical information did not improve its performance, which means it could be used easily to predict potential heart disease based on facial photos alone. The cheek, forehead and nose contributed more information to the algorithm than other facial areas. However, we need to improve the specificity as a false positive rate of as much as 46% may cause anxiety and inconvenience to patients, as well as potentially overloading clinics with patients requiring unnecessary tests.”
    As well as requiring testing in other ethnic groups, limitations of the study include the fact that only one centre in the test group differed from the centres that provided patients for developing the algorithm, which may further limit its generalisability to other populations.

    In an accompanying editorial, Charalambos Antoniades, Professor of Cardiovascular Medicine at the University of Oxford, UK, and Dr Christos Kotanidis, a DPhil student working under Prof. Antoniades at Oxford, write: “Overall, the study by Lin et al. highlights a new potential in medical diagnostics … The robustness of the approach of Lin et al. lies in the fact that their deep learning algorithm requires simply a facial image as the sole data input, rendering it highly and easily applicable at large scale.”
    They continue: “Using selfies as a screening method can enable a simple yet efficient way to filter the general population towards more comprehensive clinical evaluation. Such an approach can also be highly relevant to regions of the globe that are underfunded and have weak screening programmes for cardiovascular disease. A selection process that can be done as easily as taking a selfie will allow for a stratified flow of people that are fed into healthcare systems for first-line diagnostic testing with CCTA. Indeed, the ‘high risk’ individuals could have a CCTA, which would allow reliable risk stratification with the use of the new, AI-powered methodologies for CCTA image analysis.”
    They highlight some of the limitations that Prof. Zheng and Prof. Ji also include in their paper. These include the low specificity of the test, that the test needs to be improved and validated in larger populations, and that it raises ethical questions about “misuse of information for discriminatory purposes. Unwanted dissemination of sensitive health record data, that can easily be extracted from a facial photo, renders technologies such as that discussed here a significant threat to personal data protection, potentially affecting insurance options. Such fears have already been expressed over misuse of genetic data, and should be extensively revisited regarding the use of AI in medicine.”
    The authors of the research paper agree on this point. Prof. Zheng said: “Ethical issues in developing and applying these novel technologies is of key importance. We believe that future research on clinical tools should pay attention to the privacy, insurance and other social implications to ensure that the tool is used only for medical purposes.”
    Prof. Antoniades and Dr. Kotanidis also write in their editorial that defining CAD as ≥ 50% stenosis in one major coronary artery “may be a simplistic and rather crude classification as it pools in the non-CAD group individuals that are truly healthy, but also people who have already developed the disease but are still at early stages (which might explain the low specificity observed).”

  • Skat and poker: More luck than skill?

    Chess requires playing ability and strategic thinking; in roulette, chance determines victory or defeat, gain or loss. But what about skat and poker? Are they games of chance or games of skill in game theory? This classification also determines whether play may involve money. Prof. Dr Jörg Oechssler and his team of economists at Heidelberg University studied this question, developing a rating system similar to the Elo system used for chess. According to their study, both skat and poker involve more than 50 per cent luck, yet over the long term, skill prevails.
    “Whether a game is one of skill or luck also determines whether it can be played for money. But assigning a game to these categories is difficult owing to the many shades of gradation between extremes like roulette and chess,” states Prof. Oechssler. Courts in Germany legally classify poker as a game of chance that can be played only in government-sanctioned casinos, whereas skat is considered a game of skill. This classification stems from a court decision taken in 1906. One frequently used assessment criterion is whether the outcome for one player depends more than 50 per cent on luck. But how can this be measured objectively?
    It is this question the Heidelberg researchers investigated in their game-theoretic study. Using data from more than four million online games of chess, poker, and skat, they developed a rating system for poker and skat based on the Elo method for chess, which calculates the relative skill levels of individual players. “Because chess is purely a game of skill, the rating distribution is very wide, ranging from 1,000 for a novice to over 2,800 for the current world champion. So the wider the distribution, the more important skill is,” explains Dr Peter Dürsch.
    The Heidelberg research confirms exactly that: the distribution is much narrower in poker and skat. Whereas the standard deviation — the average deviation from the mean — for chess is over 170, the other two games did not exceed 30. To create a standard of comparison for a game involving more than 50 per cent luck, the researchers replaced every other game in their chess data set with a coin toss. This produced a deviation of 45, which is still much higher than poker and skat. “Both games fall below the 50 per cent skill level, and therefore depend mainly on luck,” states Marco Lambrecht. “Skill, however, does prevail in the long run. Our analyses show that after about one hundred games, a poker player who is one standard deviation better than his opponent is 75 per cent more likely to have won more games than his opponent.”
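    The coin-toss comparison can be reproduced in spirit with a small simulation: assign players hidden skill levels, rate them with the standard Elo update, and watch the spread of ratings narrow as more game outcomes are decided by pure chance. All numbers below are illustrative assumptions, not the researchers' data or exact procedure.
    ```python
    # Toy Elo experiment: rating spread under pure skill vs. half coin tosses.
    import random
    import statistics

    def rating_spread(n_players=200, n_games=50_000, coin_toss_share=0.0, k=16):
        skill = [random.gauss(0, 1) for _ in range(n_players)]
        elo = [1500.0] * n_players
        for _ in range(n_games):
            a, b = random.sample(range(n_players), 2)
            if random.random() < coin_toss_share:
                a_wins = random.random() < 0.5  # outcome decided by luck alone
            else:
                a_wins = random.random() < 1 / (1 + 10 ** (skill[b] - skill[a]))
            expected_a = 1 / (1 + 10 ** ((elo[b] - elo[a]) / 400))
            elo[a] += k * (a_wins - expected_a)
            elo[b] += k * ((not a_wins) - (1 - expected_a))
        return statistics.stdev(elo)

    print("skill only       :", round(rating_spread(coin_toss_share=0.0)))
    print("half coin tosses :", round(rating_spread(coin_toss_share=0.5)))  # narrower
    ```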
    In principle, the method can be applied to all games where winners are determined, report the researchers. The percentage of skill in the popular card game Mau-Mau, for example, is far lower than in poker, whereas the Chinese board game Go involves even more skill than chess.

    Story Source:
    Materials provided by University of Heidelberg. Note: Content may be edited for style and length.

  • Scientists slow and steer light with resonant nanoantennas

    Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting and exciting atoms and molecules can become very small. If scientists can put the brakes on light particles, or photons, it would open the door to a host of new technology applications.
    Now, in a paper published Aug. 17 in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.
    “We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent — as is the case with many silicon-based applications.”
    Make and manufacture
    Before they can manipulate light, the resonators need to be fabricated, and that poses a number of challenges.
    A central component of the device is an extremely thin layer of silicon, which traps light very efficiently and has low absorption in the near-infrared, the spectrum of light the scientists want to control. The silicon rests atop a wafer of transparent material (sapphire, in this case) into which the researchers direct an electron microscope “pen” to etch their nanoantenna pattern. The pattern must be drawn as smoothly as possible, as these antennas serve as the walls in the echo-chamber analogy, and imperfections inhibit the light-trapping ability.

    “High-Q resonances require the creation of extremely smooth sidewalls that don’t allow the light to leak out,” said Dionne, who is also Senior Associate Vice Provost of Research Platforms/Shared Facilities. “That can be achieved fairly routinely with larger micron-scale structures, but is very challenging with nanostructures which scatter light more.”
    Pattern design plays a key role in creating the high-Q nanostructures. “On a computer, I can draw ultra-smooth lines and blocks of any given geometry, but the fabrication is limited,” said Lawrence. “Ultimately, we had to find a design that gave good light-trapping performance but was within the realm of existing fabrication methods.”
    High quality (factor) applications
    Tinkering with the design has resulted in what Dionne and Lawrence describe as an important platform technology with numerous practical applications.
    The devices demonstrated so-called quality factors up to 2,500, which is two orders of magnitude (or 100 times) higher than any similar devices have previously achieved. Quality factors are a measure describing resonance behavior, which in this case is proportional to the lifetime of the light. “By achieving quality factors in the thousands, we’re already in a nice sweet spot from some very exciting technological applications,” said Dionne.
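    As a rough guide to what a quality factor of 2,500 means physically, the sketch below uses the textbook relations Q = f0 / Δf and τ = Q / (2π f0); the near-infrared frequency is an assumed, illustrative value rather than a figure taken from the paper.
    ```python
    # Back-of-the-envelope quality-factor relations (illustrative numbers only).
    from math import pi

    def q_from_linewidth(f0_hz: float, fwhm_hz: float) -> float:
        """Q is the resonance frequency divided by the linewidth (FWHM)."""
        return f0_hz / fwhm_hz

    def photon_lifetime_s(q: float, f0_hz: float) -> float:
        """Higher Q means the light rings down more slowly: tau = Q / (2*pi*f0)."""
        return q / (2 * pi * f0_hz)

    f0 = 2.0e14   # roughly 1.5-micron near-infrared light, an assumed value
    q = 2500      # the quality factor reported in the article
    print(f"implied linewidth ~ {f0 / q:.1e} Hz")                   # ~8e10 Hz
    print(f"photon lifetime   ~ {photon_lifetime_s(q, f0):.1e} s")  # ~2e-12 s
    ```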

    For example, biosensing. A single biomolecule is so small that it is essentially invisible. But passing light over a molecule hundreds or thousands of times can greatly increase the chance of creating a detectable scattering effect.
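    That intuition can be put in numbers with a simplifying assumption of independent passes: if one pass of the light over the molecule produces a detectable scattering event with a tiny probability p, then n passes raise the chance of at least one event to 1 - (1 - p)^n. The values below are purely illustrative.
    ```python
    # Toy model: detection probability vs. number of passes (independence assumed).
    p = 1e-4  # assumed per-pass detection probability, purely illustrative
    for n in (1, 100, 1_000, 10_000):
        print(f"{n:6d} passes -> detection probability {1 - (1 - p) ** n:.3f}")
    # 1 pass: 0.000; 1,000 passes: ~0.095; 10,000 passes: ~0.632
    ```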
    Dionne’s lab is working on applying this technique to detecting COVID-19 antigens — molecules that trigger an immune response — and antibodies — proteins produced by the immune system in response. “Our technology would give an optical readout like the doctors and clinicians are used to seeing,” said Dionne. “But we have the opportunity to detect a single virus or very low concentrations of a multitude of antibodies owing to the strong light-molecule interactions.” The design of the high-Q nanoresonators also allows each antenna to operate independently to detect different types of antibodies simultaneously.
    Though the pandemic spurred her interest in viral detection, Dionne is also excited about other applications, such as LIDAR — or Light Detection and Ranging, which is laser-based distance measuring technology often used in self-driving vehicles — that this new technology could contribute to. “A few years ago I couldn’t have imagined the immense application spaces that this work would touch upon,” said Dionne. “For me, this project has reinforced the importance of fundamental research — you can’t always predict where fundamental science is going to go or what it’s going to lead to, but it can provide critical solutions for future challenges.”
    This innovation could also be useful in quantum science. For example, splitting photons to create entangled photons that remain connected on a quantum level even when far apart would typically require large tabletop optical experiments with big expensive precisely polished crystals. “If we can do that, but use our nanostructures to control and shape that entangled light, maybe one day we will have an entanglement generator that you can hold in your hand,” Lawrence said. “With our results, we are excited to look at the new science that’s achievable now, but also trying to push the limits of what’s possible.”
    Additional Stanford co-authors include graduate students David Russell Barton III and Jefferson Dixon, research associate Jung-Hwan Song, former research scientist Jorik van de Groep, and Mark Brongersma, professor of materials science and engineering. This work was funded by the DOE-EFRC “Photonics at Thermodynamic Limits,” as well as by the AFOSR. Dionne is also an associate professor, by courtesy, of radiology and a member of the Wu Tsai Neurosciences Institute and Bio-X.

  • First daily surveillance of emerging COVID-19 hotspots

    Over the course of the coronavirus epidemic, COVID-19 outbreaks have hit communities across the United States. As clusters of infection shift over time, local officials are forced into a whack-a-mole approach to allocating resources and enacting public health policies. In a new study led by the University of Utah, geographers published the first effort to conduct daily surveillance of emerging COVID-19 hotspots for every county in the contiguous U.S. The researchers hope that timely, localized data will help inform future decisions.
    Using innovative space-time statistics, the researchers detected geographic areas where the population had an elevated risk of contracting the virus. They ran the analysis every day using daily COVID-19 case counts from Jan. 22 to June 5, 2020, to establish regional clusters, defined as collections of disease cases closely grouped in time and space. For the first month, the clusters were very large, especially in the Midwest. Starting on April 25, the clusters became smaller and more numerous, a trend that persisted until the end of the study period.
    The article was published online on June 27, 2020, in the journal Spatial and Spatio-temporal Epidemiology. The study builds on the team’s previous work by evaluating the characteristics of each cluster and how the characteristics change as the pandemic unfolds.
    “We applied a clustering method that identifies areas of concern, and also tracks characteristics of the clusters — are they growing or shrinking, what is the population density like, is relative risk increasing or not?” said Alexander Hohl, lead author and assistant professor at the Department of Geography at the U. “We hope this can offer insights into the best strategies for controlling the spread of COVID-19, and to potentially predict future hotspots.”
    The research team, including Michael Desjardins of Johns Hopkins Bloomberg School of Public Health’s Spatial Science for Public Health Center and Eric Delmelle and Yu Lan of the University of North Carolina at Charlotte, has created a web application of the clusters that the public can check daily at COVID19scan.net. The app is just a start, Hohl warned. State officials would need to do smaller-scale analysis to identify specific locations for intervention.
    “The app is meant to show where officials should prioritize efforts — it’s not telling you where you will or will not contract the virus,” Hohl said. “I see this more as an inspiration, rather than a concrete tool, to guide authorities to prevent or respond to outbreaks. It also gives the public a way to see what we’re doing.”
    The researchers used daily case counts reported in the COVID-19 Data Repository from the Center for Systems Science and Engineering at Johns Hopkins University, which lists cases at the county level in the contiguous U.S. They used the U.S. Census website’s 2018 five-year population estimates within each county.
    To establish the clusters, they ran a space-time scan statistic that takes into account the observed number of cases and the underlying population within a given geographic area and timespan. Using SaTScan, a widely used software package, they identified areas of significantly elevated risk of COVID-19. Due to the large variation between counties, evaluating risk is tricky: rural areas and small single counties may not have large populations, so just a handful of cases can make the estimated risk rise sharply.
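    The scan statistic can be written down in simplified form: for each candidate cluster (a set of counties over a span of days), compare the observed case count with the count expected from that cluster's share of the population, and score any excess with a Poisson log-likelihood ratio. The function below follows that textbook Kulldorff-style formulation; it is not the authors' exact SaTScan configuration, and the example counts are invented.
    ```python
    # Simplified Poisson space-time scan score for one candidate cluster.
    from math import log

    def poisson_llr(c_in: int, pop_in: float, c_total: int, pop_total: float) -> float:
        """Log-likelihood ratio for a candidate cluster (0 if no excess risk)."""
        expected = c_total * pop_in / pop_total
        if c_in == 0 or c_in <= expected:
            return 0.0
        c_out = c_total - c_in
        llr = c_in * log(c_in / expected)
        if c_out > 0:
            llr += c_out * log(c_out / (c_total - expected))
        return llr

    # e.g. a county group with 5% of the population but 12% of the period's cases
    print(round(poisson_llr(c_in=120, pop_in=50_000, c_total=1_000, pop_total=1_000_000), 1))
    ```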
    This study is the research group’s third iteration of using the statistical method to detect and monitor COVID-19 clusters in the U.S. Back in May, the group published their first geographical study to use the tracking method, which was also one of the first papers published by geographers analyzing COVID-19. In June, they published an update.
    “May seems like an eternity ago because the pandemic is changing so rapidly,” Hohl said. “We continue to get feedback from the research community and are always trying to make the method better. This is just one method to zero in on communities that are at risk.”
    A big limitation of the analysis is the data itself. COVID-19 reporting is different for each state. There’s no uniform way that information flows from the labs that confirm the diagnoses, to the state health agencies to the COVID-19 Data Repository from the Center for Systems Science and Engineering at Johns Hopkins University, where the study gets its data. Also, the testing efforts are quite different between states, and the team is working to adjust the number of observed cases to reflect a state’s efforts. Hohl is also working with other U researchers to look at the relationship between social media and COVID-19 to predict the future trajectory of outbreaks.
    “We’ve been working on this since COVID-19 first started and the field is moving incredibly fast,” said Hohl. “It’s so important to get the word out and react to what else is being published so we can take the next step in the project.”

    Story Source:
    Materials provided by University of Utah. Original written by Lisa Potter. Note: Content may be edited for style and length.

  • New 'molecular computers' find the right cells

    Scientists have demonstrated a new way to precisely target cells by distinguishing them from neighboring cells that look quite similar.
    Even cells that become cancerous may differ from their healthy neighbors in only a few subtle ways. A central challenge in the treatment of cancer and many other diseases is being able to spot the right cells while sparing all others.
    In a paper published 20 August in Science, a team of researchers at the University of Washington School of Medicine and the Fred Hutchinson Cancer Research Center in Seattle describe the design of new nanoscale devices made of synthetic proteins. These target a therapeutic agent only to cells with specific, predetermined combinations of cell surface markers.
    Remarkably, these ‘molecular computers’ operate all on their own and can search out the cells that they were programmed to find.
    “We were trying to solve a key problem in medicine, which is how to target specific cells in a complex environment,” said Marc Lajoie, a lead author of the study and recent postdoctoral scholar at the UW Medicine Institute for Protein Design. “Unfortunately, most cells lack a single surface marker that is unique to just them. So, to improve cell targeting, we created a way to direct almost any biological function to any cell by going after combinations of cell surface markers.”
    The tool they created is called Co-LOCKR, or Colocalization-dependent Latching Orthogonal Cage/Key pRoteins. It consists of multiple synthetic proteins that, when separated, do nothing. But when the pieces come together on the surface of a targeted cell, they change shape, thereby activating a sort of molecular beacon.
    The presence of these beacons on a cell surface can guide a predetermined biological activity — like cell killing — to a specific, targeted cell.
    The researchers demonstrated that Co-LOCKR can focus the cell-killing activity of CAR T cells. In the lab, they mixed Co-LOCKR proteins, CAR T cells, and a soup of potential target cells. Some of these had just one marker, others had two or three. Only the cells with the predetermined marker combination were killed by the T cells. If a cell also had a predetermined “healthy marker,” then that cell was spared.
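    Stripped of the biochemistry, the targeting rule the experiment tested is Boolean logic over surface markers: act only when the required markers are all present and the protective "healthy" marker is absent. The sketch below illustrates that logic with invented marker names; in the real system the decision is made by protein colocalization on the cell surface, not by software.
    ```python
    # Boolean sketch of Co-LOCKR-style AND/NOT targeting (marker names invented).

    def is_target(markers: set[str]) -> bool:
        """Flag cells bearing both required markers and lacking the healthy marker."""
        required = {"marker_A", "marker_B"}   # AND
        protected = {"healthy_marker"}        # NOT
        return required <= markers and not (protected & markers)

    print(is_target({"marker_A", "marker_B"}))                    # True  -> killed
    print(is_target({"marker_A"}))                                # False -> spared
    print(is_target({"marker_A", "marker_B", "healthy_marker"}))  # False -> spared
    ```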
    “T cells are extremely efficient killers, so the fact that we can limit their activity on cells with the wrong combination of antigens yet still rapidly eliminate cells with the correct combination is game-changing,” said Alexander Salter, another lead author of the study and an M.D./Ph.D. student in the medical scientist program at the UW School of Medicine. He is training in Stanley Riddell’s lab at the Fred Hutchinson Cancer Research Center.
    This cell-targeting strategy relies entirely on proteins, which sets it apart from most other methods that rely on engineered cells and operate on slower timescales.
    “We believe Co-LOCKR will be useful in many areas where precise cell targeting is needed, including immunotherapy and gene therapy,” said David Baker, professor of biochemistry at the UW School of Medicine and director of the Institute for Protein Design.

    Story Source:
    Materials provided by University of Washington Health Sciences/UW Medicine. Original written by Ian Haydon. Note: Content may be edited for style and length.