More stories

  • Scientists slow and steer light with resonant nanoantennas

    Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting with and exciting atoms and molecules can become very small. If scientists could put the brakes on light particles, or photons, it would open the door to a host of new technology applications.
    Now, in a paper published Aug. 17 in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars that resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.
    “We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent — as is the case with many silicon-based applications.”
    Make and manufacture
    Before they can manipulate light, the resonators need to be fabricated, and that poses a number of challenges.
    A central component of the device is an extremely thin layer of silicon, which traps light very efficiently and has low absorption in the near-infrared, the spectrum of light the scientists want to control. The silicon rests atop a wafer of transparent material (sapphire, in this case) into which the researchers direct an electron microscope “pen” to etch their nanoantenna pattern. The pattern must be drawn as smoothly as possible, as these antennas serve as the walls in the echo-chamber analogy, and imperfections inhibit the light-trapping ability.

    “High-Q resonances require the creation of extremely smooth sidewalls that don’t allow the light to leak out,” said Dionne, who is also Senior Associate Vice Provost of Research Platforms/Shared Facilities. “That can be achieved fairly routinely with larger micron-scale structures, but is very challenging with nanostructures, which scatter light more.”
    Pattern design plays a key role in creating the high-Q nanostructures. “On a computer, I can draw ultra-smooth lines and blocks of any given geometry, but the fabrication is limited,” said Lawrence. “Ultimately, we had to find a design that gave good light-trapping performance but was within the realm of existing fabrication methods.”
    High quality (factor) applications
    Tinkering with the design has resulted in what Dionne and Lawrence describe as an important platform technology with numerous practical applications.
    The devices demonstrated so-called quality factors up to 2,500, which is two orders of magnitude (or 100 times) higher than any similar devices have previously achieved. The quality factor is a measure of resonance behavior, in this case proportional to the lifetime of the trapped light. “By achieving quality factors in the thousands, we’re already in a nice sweet spot for some very exciting technological applications,” said Dionne.
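    To get a feel for what a quality factor of 2,500 means for light storage, the quick calculation below converts Q into a photon lifetime via tau = Q/omega. It is a back-of-the-envelope sketch: the 1550 nm wavelength is our assumption for the near-infrared band, as the article does not specify the operating wavelength.

    ```python
    import math

    # Back-of-the-envelope photon lifetime for a high-Q resonator.
    # The 1550 nm near-infrared wavelength is an assumption; the
    # article does not give the exact operating wavelength.
    c = 3.0e8             # speed of light, m/s
    wavelength = 1550e-9  # assumed near-IR wavelength, m
    Q = 2500              # quality factor reported in the study

    omega = 2 * math.pi * c / wavelength  # angular frequency, rad/s
    tau = Q / omega                       # photon lifetime, s

    print(f"Photon lifetime: {tau * 1e12:.1f} ps")  # ~2.1 ps
    ```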

    Consider biosensing, for example. A single biomolecule is so small that it is essentially invisible. But passing light over a molecule hundreds or thousands of times can greatly increase the chance of creating a detectable scattering effect.
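    A toy calculation shows why repeated passes help. If each pass has a small probability p of producing a detectable scattering event, the chance of at least one event after N passes is 1 - (1 - p)^N; the per-pass probability below is purely illustrative, not a measured value.

    ```python
    # Toy model: probability of at least one detectable scattering
    # event after N passes of light over a molecule.
    p_single = 1e-4  # assumed per-pass probability, purely illustrative
    for n_passes in (1, 100, 2500):
        p_any = 1 - (1 - p_single) ** n_passes
        print(f"{n_passes:>5} passes -> P(detect) = {p_any:.3f}")
    # 1 pass: 0.000, 100 passes: 0.010, 2500 passes: 0.221
    ```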
    Dionne’s lab is working on applying this technique to detecting COVID-19 antigens — molecules that trigger an immune response — and antibodies — proteins produced by the immune system in response to infection. “Our technology would give an optical readout like the doctors and clinicians are used to seeing,” said Dionne. “But we have the opportunity to detect a single virus or very low concentrations of a multitude of antibodies owing to the strong light-molecule interactions.” The design of the high-Q nanoresonators also allows each antenna to operate independently to detect different types of antibodies simultaneously.
    Though the pandemic spurred her interest in viral detection, Dionne is also excited about other applications this new technology could contribute to, such as LIDAR — Light Detection and Ranging, the laser-based distance-measuring technology often used in self-driving vehicles. “A few years ago I couldn’t have imagined the immense application spaces that this work would touch upon,” said Dionne. “For me, this project has reinforced the importance of fundamental research — you can’t always predict where fundamental science is going to go or what it’s going to lead to, but it can provide critical solutions for future challenges.”
    This innovation could also be useful in quantum science. For example, splitting photons to create entangled photons, pairs that remain connected on a quantum level even when far apart, would typically require large tabletop optical experiments with big, expensive, precisely polished crystals. “If we can do that, but use our nanostructures to control and shape that entangled light, maybe one day we will have an entanglement generator that you can hold in your hand,” Lawrence said. “With our results, we are excited to look at the new science that’s achievable now, but also trying to push the limits of what’s possible.”
    Additional Stanford co-authors include graduate students David Russell Barton III and Jefferson Dixon, research associate Jung-Hwan Song, former research scientist Jorik van de Groep, and Mark Brongersma, professor of materials science and engineering. This work was funded by the DOE-EFRC “Photonics at Thermodynamic Limits” as well as by the AFOSR. Dionne is also an associate professor, by courtesy, of radiology and a member of the Wu Tsai Neurosciences Institute and Bio-X.

  • First daily surveillance of emerging COVID-19 hotspots

    Over the course of the coronavirus epidemic, COVID-19 outbreaks have hit communities across the United States. As clusters of infection shift over time, local officials are forced into a whack-a-mole approach to allocating resources and enacting public health policies. In a new study led by the University of Utah, geographers published the first effort to conduct daily surveillance of emerging COVID-19 hotspots for every county in the contiguous U.S. The researchers hope that timely, localized data will help inform future decisions.
    Using innovative space-time statistics, the researchers detected geographic areas where the population had an elevated risk of contracting the virus. They ran the analysis every day, using daily COVID-19 case counts from Jan. 22 to June 5, 2020, to establish regional clusters, defined as collections of disease cases closely grouped in time and space. For the first month, the clusters were very large, especially in the Midwest. Starting on April 25, the clusters became smaller and more numerous, a trend that persisted until the end of the study.
    The article was published online on June 27, 2020, in the journal Spatial and Spatio-temporal Epidemiology. The study builds on the team’s previous work by evaluating the characteristics of each cluster and how those characteristics change as the pandemic unfolds.
    “We applied a clustering method that identifies areas of concern, and also tracks characteristics of the clusters — are they growing or shrinking, what is the population density like, is relative risk increasing or not?” said Alexander Hohl, lead author and assistant professor at the Department of Geography at the U. “We hope this can offer insights into the best strategies for controlling the spread of COVID-19, and to potentially predict future hotspots.”
    The research team, including Michael Desjardins of Johns Hopkins Bloomberg School of Public Health’s Spatial Science for Public Health Center and Eric Delmelle and Yu Lan of the University of North Carolina at Charlotte, has created a web application of the clusters that the public can check daily at COVID19scan.net. The app is just a start, Hohl warned. State officials would need to do smaller-scale analysis to identify specific locations for intervention.
    “The app is meant to show where officials should prioritize efforts — it’s not telling you where you will or will not contract the virus,” Hohl said. “I see this more as an inspiration, rather than a concrete tool, to guide authorities to prevent or respond to outbreaks. It also gives the public a way to see what we’re doing.”
    The researchers used daily case counts reported in the COVID-19 Data Repository from the Center for Systems Science and Engineering at Johns Hopkins University, which lists cases at the county level in the contiguous U.S. For the underlying population within each county, they used the U.S. Census Bureau’s 2018 five-year population estimates.
    To establish the clusters, they ran a space-time scan statistic that takes into account the observed number of cases and the underlying population within a given geographic area and timespan. Using SaTScan, a widely used software package, they identified areas of significantly elevated risk of COVID-19. Because populations vary greatly between counties, evaluating risk is tricky: rural areas and small, single counties may not have large populations, so just a handful of cases can make the estimated risk jump significantly.
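    The core of such a scan statistic can be sketched compactly. For a candidate space-time cylinder with c observed cases, an expected count E proportional to the underlying population, and C total cases, SaTScan-style methods score clusters with the Kulldorff Poisson log-likelihood ratio. The sketch below is a minimal illustration of that standard statistic, not the study's exact configuration.

    ```python
    import math

    def poisson_llr(c: int, E: float, C: int) -> float:
        """Kulldorff-style log-likelihood ratio for a candidate
        space-time cluster: c observed cases inside the cylinder,
        E expected cases (population-proportional), C total cases.
        Scores 0 unless the cylinder shows an excess of cases."""
        if c == 0 or c <= E:
            return 0.0
        return c * math.log(c / E) + (C - c) * math.log((C - c) / (C - E))

    # Example: 50 cases observed where 20 were expected, of 1,000 total.
    print(f"LLR = {poisson_llr(50, 20, 1000):.2f}")  # ~16.28
    ```

    Candidate cylinders of many sizes and durations are scored this way, with significance typically assessed by Monte Carlo permutation; small populations make the expected counts tiny, which is why a handful of rural cases can produce volatile risk estimates.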
    This study is the third iteration of the research group’s work using this statistical method to detect and monitor COVID-19 clusters in the U.S. Back in May, the group published their first geographical study to use the tracking method, which was also among the first papers published by geographers analyzing COVID-19. In June, they published an update.
    “May seems like an eternity ago because the pandemic is changing so rapidly,” Hohl said. “We continue to get feedback from the research community and are always trying to make the method better. This is just one method to zero in on communities that are at risk.”
    A big limitation of the analysis is the data itself. COVID-19 reporting is different for each state; there’s no uniform way that information flows from the labs that confirm the diagnoses to the state health agencies and on to the Johns Hopkins repository, where the study gets its data. Testing efforts also differ considerably between states, and the team is working to adjust the number of observed cases to reflect a state’s efforts. Hohl is also working with other U researchers to look at the relationship between social media and COVID-19 to predict the future trajectory of outbreaks.
    “We’ve been working on this since COVID-19 first started and the field is moving incredibly fast,” said Hohl. “It’s so important to get the word out and react to what else is being published so we can take the next step in the project.”

    Story Source:
    Materials provided by University of Utah. Original written by Lisa Potter. Note: Content may be edited for style and length.

  • New 'molecular computers' find the right cells

    Scientists have demonstrated a new way to precisely target cells by distinguishing them from neighboring cells that look quite similar.
    Even cells that become cancerous may differ from their healthy neighbors in only a few subtle ways. A central challenge in the treatment of cancer and many other diseases is being able to spot the right cells while sparing all others.
    In a paper published 20 August in Science, a team of researchers at the University of Washington School of Medicine and the Fred Hutchinson Cancer Research Center in Seattle describe the design of new nanoscale devices made of synthetic proteins. These target a therapeutic agent only to cells with specific, predetermined combinations of cell surface markers.
    Remarkably, these ‘molecular computers’ operate all on their own and can search out the cells that they were programmed to find.
    “We were trying to solve a key problem in medicine, which is how to target specific cells in a complex environment,” said Marc Lajoie, a lead author of the study and recent postdoctoral scholar at the UW Medicine Institute for Protein Design. “Unfortunately, most cells lack a single surface marker that is unique to just them. So, to improve cell targeting, we created a way to direct almost any biological function to any cell by going after combinations of cell surface markers.”
    The tool they created is called Co-LOCKR, or Colocalization-dependent Latching Orthogonal Cage/Key pRoteins. It consists of multiple synthetic proteins that, when separated, do nothing. But when the pieces come together on the surface of a targeted cell, they change shape, thereby activating a sort of molecular beacon.
    The presence of these beacons on a cell surface can guide a predetermined biological activity — like cell killing — to a specific, targeted cell.
    The researchers demonstrated that Co-LOCKR can focus the cell-killing activity of CAR T cells. In the lab, they mixed Co-LOCKR proteins, CAR T cells, and a soup of potential target cells. Some of these had just one marker, others had two or three. Only the cells with the predetermined marker combination were killed by the T cells. If a cell also had a predetermined “healthy marker,” then that cell was spared.
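    The decision Co-LOCKR effectively implements on each cell is Boolean logic over surface markers. The sketch below is a purely illustrative model of that logic; the marker names and the specific AND/NOT rule are our placeholders, not the study's actual antigen panel.

    ```python
    # Illustrative Boolean targeting rule over cell-surface markers.
    # Marker names ("A", "B", "HEALTHY") are hypothetical placeholders.
    def should_kill(markers: set) -> bool:
        """Kill only cells carrying both markers A AND B, and spare
        any cell that also displays the HEALTHY marker (NOT)."""
        return {"A", "B"} <= markers and "HEALTHY" not in markers

    cells = [
        {"A"},                  # one marker only -> spared
        {"A", "B"},             # target combination -> killed
        {"A", "B", "HEALTHY"},  # healthy marker present -> spared
    ]
    for markers in cells:
        print(sorted(markers), "->", "kill" if should_kill(markers) else "spare")
    ```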
    “T cells are extremely efficient killers, so the fact that we can limit their activity on cells with the wrong combination of antigens yet still rapidly eliminate cells with the correct combination is game-changing,” said Alexander Salter, another lead author of the study and an M.D./Ph.D. student in the medical scientist program at the UW School of Medicine. He is training in Stanley Riddell’s lab at the Fred Hutchinson Cancer Research Center.
    This cell-targeting strategy relies entirely on proteins, an approach that sets it apart from most other methods, which rely on engineered cells and operate on slower timescales.
    “We believe Co-LOCKR will be useful in many areas where precise cell targeting is needed, including immunotherapy and gene therapy,” said David Baker, professor of biochemistry at the UW School of Medicine and director of the Institute for Protein Design.

    Story Source:
    Materials provided by University of Washington Health Sciences/UW Medicine. Original written by Ian Haydon. Note: Content may be edited for style and length.

  • Routing apps can deliver real-time insights into traffic emissions

    Routing apps such as Google Maps or Nokia’s Here platform could offer a cost-effective way of calculating emissions hotspots in real time, say researchers at the University of Birmingham.
    These apps routinely capture detailed information as motorists use GPS technology to plan and navigate routes. This data could be invaluable for researchers and planners who need to better understand traffic flows on busy roads, according to new research published in Weather, the journal of the Royal Meteorological Society.
    Current emissions data from road transport is collated from a number of different sources by the National Atmospheric Emissions Inventory and this is fed into annual reports to demonstrate compliance with emissions targets. Many of these traditional air quality models rely on the assumption that traffic is freely flowing at the legal speed limit — whereas in many areas, traffic flow will vary through the day. These models also overlook finer-grained detail from individual roads or junctions that might be emissions hotspots at particular times of the day.
    Although more detailed information might be available to city planners when designing new road layouts or traffic improvement schemes, it requires costly modelling by consultancies.
    Making use of the crowd-sourced data from routing apps could, the researchers argue, provide a low-cost and highly effective alternative to both high level and localised modelling.
    Helen Pearce, a PhD researcher at the University of Birmingham’s School of Geography, Earth and Environmental Sciences, led the study. She says: “A lot of guidelines and policy on air quality management are based on hourly time snapshots and on the average amount of traffic on a typical day of the year. The difficulty is that traffic can vary an enormous amount within that time window and along individual roads, so in order to make decisions that really work ‘on the ground’, we need to be able to access and make use of this finer-grained detail.”
    The approach suggested by the team was tested on roads in Birmingham’s busy city centre. Information on the time taken to travel a series of road links was obtained via a map provider’s API (application programming interface). This is conceptually similar to the approach that an individual would take to calculate the time of a journey, but using the API the researchers were able to obtain information for multiple roads and at multiple times of the day.
    Following a successful preliminary study, the team scaled up their trial to include 920 major road links across Birmingham city centre, extracting information about these roads at hourly intervals. The researchers found they were able to clearly demonstrate the changes in traffic flow between typical weekdays, weekends, and also the effects of specific social events.
    Speed-related emissions could then be calculated using a combination of sources, including Defra’s speed-related emission function database and traffic count data from the Department for Transport. This information also helped the researchers account for the relative split between petrol and diesel engines.
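    In outline, the per-link calculation chains together simply: travel time from the routing API gives an average speed, a speed-dependent emission factor converts that into grams per vehicle-kilometre, and a traffic count scales it up. The sketch below is hedged throughout; the U-shaped emission-factor curve and all numbers are illustrative placeholders, not Defra's actual functions.

    ```python
    # Sketch: per-road-link emissions from crowd-sourced travel times.
    # The emission-factor curve is an illustrative placeholder, not
    # Defra's actual speed-related emission function.

    def emission_factor_g_per_km(speed_kmh: float) -> float:
        # Toy U-shaped curve: per-km emissions are high in stop-start
        # traffic, lowest near ~60 km/h, and rise again at high speed.
        return 70 + 0.02 * (speed_kmh - 60) ** 2

    def link_emissions_g(length_km: float, travel_time_h: float,
                         vehicles_per_h: float) -> float:
        speed = length_km / travel_time_h       # average speed, km/h
        ef = emission_factor_g_per_km(speed)    # grams per vehicle-km
        return ef * length_km * vehicles_per_h  # grams emitted per hour

    # Example: a 1.2 km link traversed in 4 minutes by 900 vehicles/hour.
    print(f"{link_emissions_g(1.2, 4 / 60, 900):,.0f} g/hour")  # ~113,702
    ```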
    “Our approach could provide significant insights into real-world vehicle behaviours,” says Dr Zhaoya Gong, corresponding author on the study. “As we start to see more electric and hybrid vehicles on the road, the emissions picture starts to get more complicated because there will be less exhaust emissions, but we will still see pollution from brakes, tyres and road surface wear — all these will vary significantly according to the speed of the vehicle so this sort of data will be vital for developing accurate emissions models.”

    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Contact tracing apps unlikely to contain COVID-19 spread: UK researchers

    Contact tracing apps used to reduce the spread of COVID-19 are unlikely to be effective without proper uptake and support from concurrent control measures, finds a new study by UCL researchers.
    The systematic review*, published in Lancet Digital Health, shows that evidence around the effectiveness of automated contact tracing systems is currently very limited, and large-scale manual contact tracing alongside other public health control measures — such as physical distancing and closure of indoor spaces such as pubs — is likely to be required in conjunction with automated approaches.
    The team found 15 relevant studies by reviewing more than 4,000 papers on automated and partially-automated contact tracing, and analysed these to understand the potential impact these tools could have in controlling the COVID-19 pandemic.
    Lead author Dr Isobel Braithwaite (UCL Institute of Health Informatics) said: “Across a number of modelling studies, we found a consistent picture that although automated contact tracing could support manual contact tracing, the systems will require large-scale uptake by the population and strict adherence to quarantine advice by notified contacts in order to have a significant impact on reducing transmission.”
    The authors suggest that even under optimistic assumptions — where 75-80% of UK smartphone owners are using a contact tracing app, and 90-100% of identified potential close contacts initially adhere to quarantine advice — automated contact tracing methods would still need to be used within an integrated public health response to prevent exponential growth of the epidemic.
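    A quick calculation illustrates why uptake dominates: an exposure can only be logged if both the case and the contact are running the app, so detection scales roughly with the square of uptake. The squared-uptake model is a simplification of ours; the input figures simply restate the review's optimistic scenario.

    ```python
    # Simplified model of app-based contact tracing coverage: both
    # parties must run the app, so detection ~ uptake squared.
    # Figures restate the review's optimistic scenario.
    uptake = 0.80     # 80% of smartphone owners use the app
    adherence = 0.90  # 90% of notified contacts follow quarantine advice

    detected = uptake ** 2            # fraction of contact events logged
    effective = detected * adherence  # fraction leading to quarantine

    print(f"contacts detected: {detected:.0%}, quarantined: {effective:.0%}")
    # contacts detected: 64%, quarantined: 58%
    ```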
    In total, 4,033 papers published between 1 Jan 2000 and 14 April 2020 were reviewed, which allowed researchers to identify 15 papers with useful data. The seven studies that addressed automated contact tracing directly were modelling studies that all focused on COVID-19. Five studies of partially-automated contact tracing were descriptive observational studies or case studies, and three studies of automated contact detection looked at a similar disease context to COVID-19, but did not include subsequent tracing or contact notification.

    Partially-automated systems may have some automated processes, for instance in determining the duration of follow-up of contacts required, but do not use proximity of smartphones as a proxy for contact with an infected person.
    Analysis of automated contact tracing apps generally suggested that high population uptake of relevant apps is required alongside other control measures, while partially-automated systems often had better follow-up and slightly more timely intervention.
    Dr Braithwaite said: “Although automated contact tracing shows some promise in helping reduce transmission of COVID-19 within communities, our research highlighted the urgent need for further evaluation of these apps within public health practice, as none of the studies we found provided real-world evidence of their effectiveness, and to improve our understanding of how they could support manual contact tracing systems.”
    The review shows that, at present, there is insufficient evidence to justify reliance on automated contact tracing approaches without additional extensive public health control measures.
    Dr Robert Aldridge (UCL Institute of Health Informatics) added: “We currently do not have good evidence about whether a notification from a smartphone app is as effective in breaking chains of transmission by giving advice to isolate due to contact with a case of COVID-19 when compared to advice provided by a public health contact tracer. We urgently need to study this evidence gap and examine how automated approaches can be integrated with existing contact tracing and disease control strategies, and generate evidence on whether these new digital approaches are cost-effective and equitable.”
    If implemented effectively, and if quarantine advice is adhered to, automated contact tracing may offer benefits such as reducing reliance on human recall of close contacts (which could enable identification of additional at-risk individuals), informing potentially affected people in real time, and saving on resources.

    Dr Braithwaite added: “We should be mindful that automated approaches raise potential privacy and ethics concerns, and also rely on high smartphone ownership, so they may be of very limited value in some countries. Too much reliance on automated contact tracing apps may also increase the risk of COVID-19 for vulnerable and digitally-excluded groups such as older people and people experiencing homelessness.”
    The authors say that decision-makers implementing automated contact tracing technology should thoroughly assess the available evidence around its effectiveness, privacy and equality considerations, and should monitor this as the evidence base evolves.
    They add that plans to properly integrate contact tracing apps within comprehensive outbreak response strategies are important, and their impacts should be evaluated rigorously. A combination of different approaches is needed to control COVID-19, and the review concludes that contact tracing apps have the potential to support that but they are not a panacea.
    This study is co-authored by researchers from the UCL Public Health Data Science Research Group, Institute of Health Informatics, Department of Applied Health Research, and Collaborative Centre for Inclusion Health.
    *A systematic review carefully identifies all the relevant published and unpublished studies, rates them for quality, and synthesises their findings.
    Study limitations
    As part of this systematic review, researchers did not find any epidemiological studies comparing automated to manual contact tracing systems and their effectiveness in identifying contacts. Other limitations include the lack of eligible empirical studies of fully-automated contact tracing and a paucity of evidence related to ethical concerns or cost-effectiveness.

  • A how-to guide for teaching GIS courses online with hardware or software in the cloud

    In a new paper this week, geographer Forrest Bowlick at the University of Massachusetts Amherst and colleagues at Texas A&M offer first-hand accounts of what is required for geographic information systems (GIS) instructors and IT administrators to set up virtual computing environments for state-of-the-art GIS instruction.
    Bowlick says, “Our research is very applicable in the current remote learning era that we’re working through, because it provides expertly driven insight into how to set up a virtual computing environment in different modes: with hardware and with software in the cloud. While tailored to those needing GIS support, it is also very applicable for other high-performance software needs.”
    “By capturing the experiences of both setting up the system and of students using the system, we provide an important resource for others needing to make this investment of time, equipment and energy,” he adds. Such technical practice is becoming required for GIS and other instruction, he points out.
    Writing in the Journal of Geography in Higher Education, the authors compare an onsite server set-up and a virtualized cloud set-up scenario and report some student feedback on using a course taught this way. The growing need for fast computers, they point out, has made it harder for everyone to build the machines they need. “Our work talks about how to build fast computers in different ways and shares what we know about making fast computers for digital geography,” Bowlick notes.
    He says, “UMass is just one of several programs nationally, but regionally it’s very attractive, especially at the graduate level, because there are not that many in New England. Ours certainly started at the right time, too. With the turn toward using more computational skills and GIS practices, knowing how to use different computer constructs and programming languages is becoming a more fundamental need in education.”
    Bowlick has directed a one-year M.S. geography degree program with an emphasis in GIS at UMass Amherst since 2017. He says there may be 10 or 15 students from every college on campus with different majors in the introductory course in a given semester. They need to gain fundamentals of spatial thinking, operating software and problem solving applicable to the diverse interests that students bring to the course.
    Generally, these applications involve thinking through spatial problems on topics such as political geography, which might ask who is voting and where, or how to detect gerrymandering. Others are creating COVID-19 virus maps and spatial data to show its prevalence for spatial epidemiology and health geography, while others are modeling ecosystems for fish and wildlife.
    Bowlick explains that geographic information science is “a bridging science” — a suite of technologies, a way of thinking and a way to store spatial data including satellite systems for navigation. GIS handles imagery, computer mapping, spatial planning, modeling land cover over time, even helping businesses decide where to open their next location.
    GIS was first developed in the late 1960s, when the Canada Land Inventory needed ways to store, manage and analyze land resource maps over huge areas using new computer technology, Bowlick says. His two co-authors at Texas A&M, both experienced GIS instructors, are Dan Goldberg, an associate professor in geography, and Paul Stine, an IT system administrator for geography.
    The authors describe the setup, organization and execution of teaching an introductory WebGIS course while considering student experiences in such a course.
    The paper also defines an operational set of resource metrics needed to support the computing needs of students using virtual machines for server-based CyberGIS classes, and compares the costs and components needed to build and support an on-premise private cloud teaching environment for a WebGIS course versus a comparable cloud-based service provider.
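    As a rough illustration of the kind of resource metrics and cost comparison the paper describes, the sketch below sizes a class's virtual machine pool and contrasts notional monthly costs. Every figure in it is a placeholder assumption, not a value from the study.

    ```python
    # Rough sizing sketch for a server-based GIS course; all figures
    # are placeholder assumptions, not values from the paper.
    students = 30
    vcpus_per_vm, ram_gb_per_vm, disk_gb_per_vm = 2, 8, 60
    concurrency = 0.5  # assume at most half the class works at once

    vms = max(1, round(students * concurrency))
    print(f"{vms} VMs -> {vms * vcpus_per_vm} vCPUs, "
          f"{vms * ram_gb_per_vm} GB RAM, {vms * disk_gb_per_vm} GB disk")

    # Notional monthly cost comparison (assumed rates).
    cloud_rate = 70.0                           # $/VM-month, assumed
    server_cost, lifetime_months = 12000.0, 48  # on-prem capex, amortised
    print(f"cloud:   ${vms * cloud_rate:,.0f}/month")
    print(f"on-prem: ${server_cost / lifetime_months:,.0f}/month (amortised)")
    ```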

  • Creating meaningful change in cities takes decades, not years, and starts from the bottom

    Newly published research in Science Advances by University of Chicago researcher Luis Bettencourt proposes a new perspective and new models for several known paradoxes of cities. Namely, if cities are engines of economic growth, why do poverty and inequality persist? If cities thrive on faster activity and more diversity, why are so many things so hard to change? And if growth and innovation are so important, how can urban planners and economists get away with describing cities with Groundhog Day-style models of equilibrium?
    Developing improved collective actions and policies, and creating more equitable, prosperous and environmentally sustainable pathways requires transcending these apparent paradoxes. The paper finds it critical that societies embrace and utilize the natural tensions of cities revealed by urban science in order to advance more holistic solutions.
    “To understand how cities can be simultaneously fast and slow, rich and poor, innovative and unstable, requires reframing our fundamental understanding of what cities are and how they work,” says Bettencourt. “There is plenty of room in cities to embody all this complexity, but to harness natural urban processes for good requires that we modify current thinking and action to include different scales and diverse kinds of people in interaction.”
    This is the goal of a new paper entitled “Urban Growth and the Emergent Statistics of Cities,” by Luis Bettencourt, the Inaugural Director of the Mansueto Institute for Urban Innovation and Professor of Ecology and Evolution at the University of Chicago. In the paper, Bettencourt develops a new set of mathematical models to describe cities along a sliding scale of processes of change, starting with individuals and deriving emergent properties of cities and nations as urban systems.
    At the heart of these models is a balancing act: humans must struggle to balance their budgets over time, including incomes and costs in units of money or energy. For most people, incomes and costs vary over time in unpredictable ways that are out of their full control. In cities — where we are all part of complicated webs of interdependence for jobs, services and many forms of collective action — these challenges gain new dimensions that require both individual and collective action. Accounting for these dynamics allows us to see how meaningful change at the levels of cities and nations can emerge from the aggregate daily hustle of millions of people, but also how all this struggle can fail to add up to much.
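    As a minimal sketch of that budget-balance idea (our illustration, not the paper's exact formalism), an individual's resources m_i(t) rise with income y_i(t), fall with costs c_i(t), and are buffeted by fluctuations xi_i(t) outside their full control:

    ```latex
    % Illustrative budget-balance dynamics (our sketch, not the
    % paper's exact model): resources change as income minus costs,
    % plus fluctuations the individual cannot fully control.
    \frac{\mathrm{d}m_i}{\mathrm{d}t} = y_i(t) - c_i(t) + \xi_i(t)
    ```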
    The paper shows that relative changes in the status of cities are exceedingly slow, tied to variations in their growth rates, which are now very small in high-income nations such as the U.S. This leads to the problem that the effects of innovation across cities are barely observable, taking place on the time scale of several decades, much slower than any mayoral term, which blunts the ability to distinguish beneficial policies from harmful ones.
    Of special importance is the negative effect of uncertainty — which tends to befall people in poverty, but also everyone during the current pandemic — on processes of innovation and growth. Another challenge is posed by policies that optimize for aggregate growth (such as GDP), which the paper shows typically promote increasing inequality and social instability. In the paper, these ideas are tested using a long time series for 382 U.S. metropolitan areas over nearly five decades.
    “Growth and change accumulate through the compounding of many small changes in how we lead our daily lives, allocate our time and effort, and interact with each other, especially in cities. Helping more people be creative and gain agency, in part by reducing crippling uncertainties, is predicted to make all the difference between a society that can face difficulties and thrive or one that becomes caught up in endless struggles and eventually decay,” says Bettencourt.

    Story Source:
    Materials provided by University of Chicago. Note: Content may be edited for style and length.