More stories

  • When did the first COVID-19 case arise?

    Using methods from conservation science, a new analysis suggests that the first case of COVID-19 arose between early October and mid-November 2019 in China, with the most likely date of origin being November 17. David Roberts of the University of Kent, U.K., and colleagues present these findings in the open-access journal PLOS Pathogens.
    The origins of the ongoing COVID-19 pandemic remain unclear. The first officially identified case occurred in early December 2019. However, mounting evidence suggests that the original case may have emerged even earlier.
    To help clarify the timing of the onset of the pandemic, Roberts and colleagues repurposed a mathematical model originally developed by conservation scientists to determine the date of extinction of a species, based on recorded sightings of the species. For this analysis, they reversed the method to determine the date when COVID-19 most likely originated, based on when some of the earliest known cases occurred in 203 countries.
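    The estimator typically used for this kind of extinction dating is the optimal linear estimation (OLE) method from the conservation literature, which extrapolates the endpoint of a record from its most extreme observations. Below is a minimal, illustrative Python sketch of the "reversed" idea: mirror the earliest reported case dates in time, estimate the endpoint of the mirrored record, and mirror back to obtain an origination date. The formulas follow the standard OLE presentation in that literature, the case dates are invented, and this is not the authors' code.
    ```python
    import numpy as np
    from math import lgamma, log

    def ole_endpoint(times):
        """Optimal linear estimation (OLE) of the endpoint of a record,
        extrapolated from its k most extreme observations."""
        t = np.sort(np.asarray(times, dtype=float))[::-1]  # most recent first
        k = len(t)
        # Shape parameter estimated from the spacings of the observations
        v = sum(log((t[0] - t[-1]) / (t[0] - t[i])) for i in range(1, k - 1)) / (k - 1)
        # Weight matrix from the extreme-value (Weibull) approximation
        lam = np.zeros((k, k))
        for i in range(1, k + 1):
            for j in range(1, i + 1):
                lam[i - 1, j - 1] = np.exp(lgamma(2 * v + i) + lgamma(v + j)
                                           - lgamma(v + i) - lgamma(j))
                lam[j - 1, i - 1] = lam[i - 1, j - 1]
        e = np.ones(k)
        w = np.linalg.solve(lam, e)
        w /= e @ w
        return float(w @ t)  # weighted combination extrapolates beyond t[0]

    # Hypothetical earliest-case dates, in days after 1 Oct 2019 (illustrative only)
    earliest_cases = [62, 64, 67, 70, 72, 75, 78, 81, 84, 88]
    # Mirror time so the "end" of the reversed record is the origin of the real one
    origin_day = -ole_endpoint([-d for d in earliest_cases])
    print(f"estimated origination: ~day {origin_day:.0f} after 1 Oct 2019")
    ```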
    The analysis suggests that the first case occurred in China between early October and mid-November of 2019. The first case most likely arose on November 17, and the disease spread globally by January 2020. These findings support growing evidence that the pandemic arose sooner and grew more rapidly than officially accepted.
    The analysis also identified when COVID-19 is likely to have spread to the first five countries outside of China, as well as other continents. For instance, it estimates that the first case outside of China occurred in Japan on January 3, 2020, the first case in Europe occurred in Spain on January 12, 2020, and the first case in North America occurred in the United States on January 16, 2020.
    The researchers note that their novel method could be applied to better understand the spread of other infectious diseases in the future. Meanwhile, better knowledge of the origins of COVID-19 could improve understanding of its continued spread.
    Roberts adds, “The method we used was originally developed by me and a colleague to date extinctions, however, here we use it to date the origination and spread of COVID-19. This novel application within the field of epidemiology offers a new opportunity to understand the emergence and spread of diseases as it only requires a small amount of data.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Nanotech and AI could hold key to unlocking global food security challenge

    ‘Precision agriculture’ where farmers respond in real time to changes in crop growth using nanotechnology and artificial intelligence (AI) could offer a practical solution to the challenges threatening global food security, a new study reveals.
    Climate change, increasing populations, competing demands on land for production of biofuels and declining soil quality mean it is becoming increasingly difficult to feed the world’s population.
    The United Nations (UN) estimates that 840 million people will be affected by hunger by 2030, but researchers have developed a roadmap combining smart and nano-enabled agriculture with AI and machine learning capabilities that could help to reduce this number.
    Publishing their findings today in Nature Plants, an international team of researchers led by the University of Birmingham sets out the following steps needed to use AI to harness the power of nanomaterials safely, sustainably and responsibly:
      • Understand the long-term fate of nanomaterials in agricultural environments — how nanomaterials can interact with roots, leaves and soil;
      • Assess the long-term life cycle impact of nanomaterials in the agricultural ecosystem, such as how repeated application of nanomaterials will affect soils;
      • Take a systems-level approach to nano-enabled agriculture — use existing data on soil quality, crop yield and nutrient-use efficiency (NUE) to predict how nanomaterials will behave in the environment; and
      • Use AI and machine learning to identify key properties that will control the behaviour of nanomaterials in agricultural settings.
    Study co-author Iseult Lynch, Professor of Environmental Nanosciences at the University of Birmingham, commented: “Current estimates show nearly 690 million people are hungry — almost nine per cent of the planet’s population. Finding sustainable agricultural solutions to this problem requires us to take bold new approaches and integrate knowledge from diverse fields, such as materials science and informatics.
    “Precision agriculture, using nanotechnology and artificial intelligence, offers exciting opportunities for sustainable food production. We can link existing models for nutrient cycling and crop productivity with nanoinformatics approaches to help both crops and soil perform better — safely, sustainably and responsibly.”
    The main driver for innovation in agritech is the need to feed the increasing global population with a decreasing agricultural land area, whilst conserving soil health and protecting environmental quality.
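    As a rough illustration of the roadmap's machine-learning step (mining existing field data to flag which nanomaterial properties matter most for outcomes such as nutrient-use efficiency), here is a schematic Python sketch. The data are synthetic and the feature names are hypothetical; this is not the authors' model.
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical descriptors for nano-enabled fertiliser trials (synthetic data)
    size_nm = rng.uniform(10, 200, n)      # particle size (nm)
    zeta_mv = rng.uniform(-40, 40, n)      # surface charge (mV)
    rate_kg_ha = rng.uniform(0.1, 5.0, n)  # application rate (kg/ha)
    soil_ph = rng.uniform(4.5, 8.5, n)     # soil pH
    X = np.column_stack([size_nm, zeta_mv, rate_kg_ha, soil_ph])

    # Synthetic nutrient-use efficiency with an assumed dependence on size and pH
    y = 60 - 0.1 * size_nm + 2.0 * (soil_ph - 6.5) + rng.normal(0, 2, n)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    for name, importance in zip(["size_nm", "zeta_mV", "rate_kg_ha", "soil_pH"],
                                model.feature_importances_):
        print(f"{name}: {importance:.2f}")  # which properties drive the predicted NUE
    ```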

  • Quantum simulation: Measurement of entanglement made easier

    Researchers have developed a method that makes previously hard-to-access properties of quantum systems measurable. The new method for determining the quantum state in quantum simulators reduces the number of necessary measurements and makes work with quantum simulators much more efficient.
    In a few years, a new generation of quantum simulators could provide insights that would not be possible using simulations on conventional supercomputers. Quantum simulators can process vast amounts of information because they quantum-mechanically superpose an enormously large number of bit states. For the same reason, however, it is difficult to read this information back out of the quantum simulator. Reconstructing the quantum state requires a very large number of individual measurements. The method used to read out the quantum state of a quantum simulator is called quantum state tomography. “Each measurement provides a ‘cross-sectional image’ of the quantum state. You then put these cross-sectional images together to form the complete quantum state,” explains theoretical physicist Christian Kokail from Peter Zoller’s team at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences and the Department of Experimental Physics at the University of Innsbruck. The number of measurements needed in the lab increases very rapidly with the size of the system. “The number of measurements grows exponentially with the number of qubits,” the physicist says. The Innsbruck researchers have now succeeded in developing a much more efficient method for quantum simulators.
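    To see why naive tomography becomes impractical, it helps to count parameters: an N-qubit state is described by a 2^N x 2^N density matrix with 4^N - 1 independent real parameters, and reconstructing it from local Pauli measurements requires on the order of 3^N measurement settings. A quick check of that standard counting (not specific to the Innsbruck protocol):
    ```python
    # Scaling of naive, full quantum state tomography with system size
    for n_qubits in (2, 5, 10, 20):
        dim = 2 ** n_qubits          # Hilbert-space dimension
        params = 4 ** n_qubits - 1   # independent real parameters of the density matrix
        settings = 3 ** n_qubits     # local Pauli measurement bases for full tomography
        print(f"{n_qubits:2d} qubits: dimension {dim}, parameters {params}, settings {settings}")
    ```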
    Efficient method that delivers new insights
    Insights from quantum field theory allow quantum state tomography to be much more efficient, i.e., to be performed with significantly fewer measurements. “The fascinating thing is that it was not at all clear from the outset that the predictions from quantum field theory could be applied to our quantum simulation experiments,” says theoretical physicist Rick van Bijnen. “Studying older scientific papers from this field happened to lead us down this track.” Quantum field theory provides the basic framework of the quantum state in the quantum simulator. Only a few measurements are then needed to fit the details into this basic framework. Based on this, the Innsbruck researchers have developed a measurement protocol by which tomography of the quantum state becomes possible with a drastically reduced number of measurements. At the same time, the new method allows new insights into the structure of the quantum state to be obtained. The physicists tested the new method with experimental data from an ion trap quantum simulator of the Innsbruck research group led by Rainer Blatt and Christian Roos. “In the process, we were now able to measure properties of the quantum state that were previously not observable in this quality,” Kokail recounts.
    Verification of the result
    A verification protocol developed by the group together with Andreas Elben and Benoit Vermersch two years ago can be used to check whether the structure of the quantum state actually matches the expectations from quantum field theory. “We can use further random measurements to check whether the basic framework for tomography that we developed based on the theory actually fits or is completely wrong,” explains Christian Kokail. The protocol raises a red flag if the framework does not fit. Of course, this would also be an interesting finding for the physicists, because it could provide clues about the relationship with quantum field theory, which is not yet fully understood. At the moment, the physicists in Peter Zoller’s group are developing quantum protocols in which the basic framework of the quantum state is not stored on a classical computer, but is realized directly on the quantum simulator.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Microspheres quiver when shocked

    A challenging frontier in science and engineering is controlling matter outside of thermodynamic equilibrium to build material systems with capabilities that rival those of living organisms. Research on active colloids aims to create micro- and nanoscale “particles” that swim through viscous fluids like primitive microorganisms. When these self-propelled particles come together, they can organize and move like schools of fish to perform robotic functions, such as navigating complex environments and delivering “cargo” to targeted locations.
    A Columbia Engineering team led by Kyle Bishop, professor of chemical engineering, is at the forefront of studying and designing the dynamics of active colloids powered by chemical reactions or by external magnetic, electric, or acoustic fields. The group is developing colloidal robots, in which active components interact and assemble to perform dynamic functions inspired by living cells.
    In a new study published today in Physical Review Letters, Bishop’s group, working with collaborators at Northwestern University’s Center for Bio-Inspired Energy Science (CBES), reports that it has demonstrated the use of DC electric fields to drive back-and-forth rotation of micro-particles in electric boundary layers. These particle oscillators could be useful as clocks that coordinate the organization of active matter and even, perhaps, orchestrate the functions of micron-scale robots.
    “Tiny particle oscillators could enable new types of active matter that combine the swarming behaviors of self-propelled colloids and the synchronizing behaviors of coupled oscillators,” says Bishop. “We expect interactions among the particles to depend on their respective positions and phases, thus enabling richer collective behaviors — behaviors that can be designed and exploited for applications in swarm robotics.”
    Making a reliable clock at the micron-scale is not as simple as it may sound. As one can imagine, pendulum clocks don’t work well when immersed in honey. Their periodic motion — like that of all inertial oscillators — drags to a halt under sufficient resistance from friction. Without the help of inertia, it is similarly challenging to drive the oscillatory motion of micron-scale particles in viscous fluids.
    “Our recent observation of colloidal spheres oscillating back and forth in a DC electric field presented a bit of mystery, one we wanted to solve,” observes the paper’s lead author, Zhengyan Zhang, a PhD student in Bishop’s lab who discovered this effect. “By varying the particle size, field strength, and fluid conductivity, we identified experimental conditions needed for oscillations and uncovered the mechanism underlying the particles’ rhythmic dynamics.”
    Earlier work has demonstrated how similar particles can rotate steadily by a process known as Quincke rotation. Like a water wheel filled from above, the Quincke instability is driven by the accumulation of electric charge on the particle surface and its mechanical rotation in the electric field. However, existing models of Quincke rotation — and of overdamped water wheels — do not predict oscillatory dynamics.
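    For context, the textbook overdamped description of Quincke rotation predicts exactly this non-oscillatory behaviour: below a critical field E_c the sphere sits still, and above it the sphere spins steadily at a rate Omega = sqrt((E/E_c)^2 - 1)/tau, where tau is the Maxwell-Wagner charge-relaxation time. A minimal sketch of that steady-state prediction follows, with illustrative parameter values; it is the classic model the new boundary-layer mechanism goes beyond, not the authors' oscillator model.
    ```python
    import math

    TAU_MW = 1e-3  # Maxwell-Wagner charge-relaxation time (s), illustrative value
    E_C = 1.0      # Quincke threshold field (arbitrary units)

    def steady_rotation_rate(E):
        """Classic overdamped Quincke prediction: no rotation below the threshold
        field, steady (non-oscillatory) rotation above it."""
        if E <= E_C:
            return 0.0
        return math.sqrt((E / E_C) ** 2 - 1.0) / TAU_MW

    for E in (0.5, 0.99, 1.1, 2.0, 4.0):
        print(f"E/E_c = {E:.2f}  ->  Omega = {steady_rotation_rate(E):8.1f} rad/s")
    ```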

  • Tree pollen carries SARS-CoV-2 particles farther, facilitates virus spread, study finds

    Most models explaining how viruses are transmitted focus on viral particles escaping one person to infect a nearby person. A study on the role of microscopic particles in how viruses are transmitted suggests pollen is nothing to sneeze at.
    In Physics of Fluids, by AIP Publishing, Talib Dbouk and Dimitris Drikakis investigate how pollen facilitates the spread of an RNA virus like the COVID-19 virus. The study draws on cutting-edge computational approaches for analyzing fluid dynamics to mimic the pollen movement from a willow tree, a prototypical pollen emitter. Airborne pollen grains contribute to the spread of airborne viruses, especially in crowded environments.
    “To our knowledge, this is the first time we show through modeling and simulation how airborne pollen micrograins are transported in a light breeze, contributing to airborne virus transmission in crowds outdoors,” Drikakis said.
    The researchers noticed a correlation between COVID-19 infection rates and the pollen concentration on the National Allergy Map. Each pollen grain can carry hundreds of virus particles at a time. Trees alone can put 1,500 grains per cubic meter into the air on heavy days.
    The researchers set to work by creating all the pollen-producing parts of their computational willow tree. They simulated outdoor gatherings of roughly 10 or 100 people, some of them shedding COVID-19 particles, and subjected the people to 10,000 pollen grains.
    “One of the significant challenges is the re-creation of an utterly realistic environment of a mature willow tree,” said Dbouk. “This included thousands of tree leaves and pollen grain particles, hundreds of stems and a realistic gathering of a crowd of about 100 individuals at about 20 meters from the tree.”
    With the model tuned to the temperature, wind speed, and humidity of a typical spring day in the U.S., the simulated pollen passed through the crowd in less than a minute, which could significantly affect the virus load carried along and increase the risk of infection.
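    The timescales involved are easy to check with a back-of-the-envelope estimate: in a light breeze a pollen grain is carried across a 20-meter gap in a few seconds, while Stokes settling of a grain a few tens of microns across takes minutes, so the grain stays airborne as it crosses the crowd. The sketch below uses illustrative values for grain size, breeze speed and release height that are not taken from the paper.
    ```python
    # Rough advection vs. settling estimate for a single pollen grain (illustrative values)
    RHO_P = 1200.0        # pollen grain density (kg/m^3), assumed
    RHO_AIR = 1.2         # air density (kg/m^3)
    MU_AIR = 1.8e-5       # dynamic viscosity of air (Pa*s)
    G = 9.81              # gravitational acceleration (m/s^2)

    d = 25e-6             # grain diameter (m), typical tree-pollen scale
    wind = 4.0            # light breeze (m/s), assumed
    distance = 20.0       # tree-to-crowd distance from the simulated scenario (m)
    release_height = 5.0  # release height above the crowd (m), assumed

    # Stokes settling velocity of a small sphere in air
    v_settle = (RHO_P - RHO_AIR) * G * d ** 2 / (18.0 * MU_AIR)

    t_transit = distance / wind         # time to be carried across the crowd
    t_fall = release_height / v_settle  # time to settle out of the air

    print(f"settling speed ~ {v_settle * 100:.1f} cm/s")
    print(f"transit time over {distance:.0f} m ~ {t_transit:.0f} s")
    print(f"time to settle from {release_height:.0f} m ~ {t_fall:.0f} s")
    ```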
    The authors said the 6-foot distance often cited for COVID-19 recommendations might not be adequate for those at risk for the disease in crowded areas with high pollen. New recommendations based on local pollen levels could be used to manage the infection risk better.
    While calling attention to other forms of COVID-19 transmission, the authors hope their study stokes further interest in the fluid dynamics of plants.
    Next, they look to better understand the mechanisms underlying the interaction between airborne pollen grains and the human respiratory system under different environmental conditions.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Scientists obtain real-time look at how cancers evolve

    From amoebas to zebras, all living things evolve. They change over time as pressures from the environment cause individuals with certain traits to become more common in a population while those with other traits become less common.
    Cancer is no different. Within a growing tumor, cancer cells with the best ability to compete for resources and withstand environmental stressors will come to dominate in frequency. It’s “survival of the fittest” on a microscopic scale.
    But fitness — how well suited any particular individual is to its environment — isn’t set in stone; it can change when the environment changes. The cancer cells that might do best in an environment saturated with chemotherapy drugs are likely to be different than the ones that will thrive in an environment without those drugs. So, predicting how tumors will evolve over time, especially in response to treatment, is a major challenge for scientists.
    A new study by researchers at Memorial Sloan Kettering in collaboration with researchers at the University of British Columbia/BC Cancer in Canada suggests that one day it may be possible to make those predictions. The study, published June 23, 2021, in the journal Nature, was led by MSK computational biologist Sohrab Shah and BC Cancer breast cancer researcher Samuel Aparicio. The scientists showed that a machine-learning approach, built using principles of population genetics that describe how populations change over time, could accurately predict how human breast cancer tumors will evolve.
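    The population-genetics core of such models is easy to state: clone frequencies shift from one generation to the next in proportion to their relative fitness, plus random drift in a finite population. The Wright-Fisher-style toy simulation below illustrates that kind of dynamics; the clones and fitness values are made up, and this is not the fitness model fitted in the study.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def wright_fisher(freqs, fitness, n_cells, n_generations):
        """Simulate clone-frequency trajectories under selection and drift."""
        freqs = np.asarray(freqs, dtype=float)
        fitness = np.asarray(fitness, dtype=float)
        trajectory = [freqs.copy()]
        for _ in range(n_generations):
            p = freqs * fitness                   # selection: reweight by relative fitness
            p /= p.sum()
            counts = rng.multinomial(n_cells, p)  # drift: resample a finite population
            freqs = counts / n_cells
            trajectory.append(freqs)
        return np.array(trajectory)

    # Three hypothetical clones; clone C is the fittest under treatment
    start = [0.70, 0.25, 0.05]
    fitness_on_treatment = [1.00, 0.95, 1.20]
    traj = wright_fisher(start, fitness_on_treatment, n_cells=10_000, n_generations=40)
    print("final clone frequencies (A, B, C):", np.round(traj[-1], 3))
    ```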
    “Population genetic models of evolution match up nicely to cancer, but for a number of practical reasons it’s been a challenge to apply these to the evolution of real human cancers,” says Dr. Shah, Chief of Computational Oncology at MSK. “In this study, we show it’s possible to overcome some of those barriers.”
    Ultimately, the approach could provide a means to predict whether a patient’s tumor is likely to stop responding to a particular treatment and identify the cells that are likely to be responsible for a relapse. This could mean highly tailored treatments, delivered at the optimal time, to produce better outcomes for people with cancer.

  • New algorithm helps autonomous vehicles find themselves, summer or winter

    Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them — and for the first time, the technology works regardless of seasonal changes to that terrain.
    Details about the process were published on June 23 in the journal Science Robotics, published by the American Association for the Advancement of Science (AAAS).
    The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
    The problem is that, in order for it to work, the current generation of VTRN requires that the terrain it is looking at closely matches the images in its database. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, causes the images to not match up and fouls up the system. So, unless there is a database of the landscape images under every conceivable condition, VTRN systems can be easily confused.
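    At its core, this current generation of VTRN is template matching: slide the vehicle's downward-looking image over a georeferenced satellite map and report the offset where the normalized cross-correlation peaks. The minimal sketch below uses synthetic arrays in place of real imagery and is not the Caltech pipeline; when snow or leaf-off changes the scene content, the correlation peak collapses, which is exactly the failure mode described above.
    ```python
    import numpy as np

    def ncc(patch, window):
        """Zero-mean normalized cross-correlation between two equal-sized images."""
        a = patch - patch.mean()
        b = window - window.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def locate(vehicle_img, satellite_map):
        """Exhaustively slide the vehicle image over the map; return the best (row, col)."""
        h, w = vehicle_img.shape
        rows, cols = satellite_map.shape
        best_score, best_pos = -np.inf, (0, 0)
        for r in range(rows - h + 1):
            for c in range(cols - w + 1):
                score = ncc(vehicle_img, satellite_map[r:r + h, c:c + w])
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos, best_score

    # Synthetic demo: embed a known patch in a larger "satellite" map, add noise
    rng = np.random.default_rng(42)
    sat = rng.normal(size=(100, 100))
    r0, c0 = 37, 61
    vehicle = sat[r0:r0 + 16, c0:c0 + 16] + 0.1 * rng.normal(size=(16, 16))
    print(locate(vehicle, sat), "true position:", (r0, c0))
    ```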
    To overcome this challenge, a team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems.
    “The rule of thumb is that both images — the one from the satellite and the one from the autonomous vehicle — have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues,” says Anthony Fragoso (MS ’14, PhD ’18), lecturer and staff scientist, and lead author of the Science Robotics paper. “In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”
    The process — developed by Chung and Fragoso in collaboration with graduate student Connor Lee (BS ’17, MS ’19) and undergraduate student Austin McCoy — uses what is known as “self-supervised learning.” While most computer-vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing, this one instead lets the algorithm teach itself. The AI looks for patterns in images by teasing out details and features that would likely be missed by humans.
    Supplementing the current generation of VTRN with the new system yields more accurate localization: in one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique. They found that performance was no better than a coin flip, with 50 percent of attempts resulting in navigation failures. In contrast, insertion of the new algorithm into the VTRN worked far better: 92 percent of attempts were correctly matched, and the remaining 8 percent could be identified as problematic in advance, and then easily managed using other established navigation techniques.
    “Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,” says Lee. VTRN was in danger of turning into an infeasible technology in common but challenging environments, he says. “We rescued decades of work in solving this problem.”
    Beyond the utility for autonomous drones on Earth, the system also has applications for space missions. The entry, descent, and landing (EDL) system on JPL’s Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at the Jezero Crater, a site that was previously considered too hazardous for a safe entry. With rovers such as Perseverance, “a certain amount of autonomous driving is necessary,” Chung says, “since transmissions take seven minutes to travel between Earth and Mars, and there is no GPS on Mars.” The team also considered the Martian polar regions, which undergo intense seasonal changes similar to those on Earth; there, the new system could allow for improved navigation to support scientific objectives, including the search for water.
    Next, Fragoso, Lee, and Chung will expand the technology to account for changes in the weather as well: fog, rain, snow, and so on. If successful, their work could help improve navigation systems for driverless cars.
    This project was funded by the Boeing Company and the National Science Foundation. McCoy participated through Caltech’s Summer Undergraduate Research Fellowship program.

  • Machine learning aids earthquake risk prediction

    Our homes and offices are only as solid as the ground beneath them. When that solid ground turns to liquid — as sometimes happens during earthquakes — it can topple buildings and bridges. This phenomenon is known as liquefaction, and it was a major feature of the 2011 earthquake in Christchurch, New Zealand, a magnitude 6.3 quake that killed 185 people and destroyed thousands of homes.
    An upside of the Christchurch quake was that it was one of the most well-documented in history. Because New Zealand is seismically active, the city was instrumented with numerous sensors for monitoring earthquakes. Post-event reconnaissance provided a wealth of additional data on how the soil responded across the city.
    “It’s an enormous amount of data for our field,” said postdoctoral researcher Maria Giovanna Durante, a Marie Sklodowska Curie Fellow previously of The University of Texas at Austin (UT Austin). “We said, ‘If we have thousands of data points, maybe we can find a trend.’”
    Durante works with Prof. Ellen Rathje, Janet S. Cockrell Centennial Chair in Engineering at UT Austin and the principal investigator for the National Science Foundation-funded DesignSafe cyberinfrastructure, which supports research across the natural hazards community. Rathje’s personal research on liquefaction led her to study the Christchurch event. She had been thinking about ways to incorporate machine learning into her research and this case seemed like a great place to start.
    “For some time, I had been impressed with how machine learning was being incorporated into other fields, but it seemed we never had enough data in geotechnical engineering to utilize these methods,” Rathje said. “However, when I saw the liquefaction data coming out of New Zealand, I knew we had a unique opportunity to finally apply AI techniques to our field.”
    The two researchers developed a machine learning model that predicted the amount of lateral movement that occurred when the Christchurch earthquake caused soil to lose its strength and shift relative to its surroundings.
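    In broad strokes, a model of this kind maps per-site features (shaking intensity, groundwater depth, soil resistance, ground slope) to the lateral displacement observed after the earthquake. The sketch below is schematic: the data are synthetic, the predictor names are hypothetical, and it is not the UT Austin model, which was trained on the Christchurch reconnaissance data.
    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 2000
    # Hypothetical per-site predictors (synthetic stand-ins for reconnaissance data)
    slope = rng.uniform(0, 5, n)        # ground slope (%)
    gw_depth = rng.uniform(0.5, 10, n)  # depth to groundwater (m)
    pga = rng.uniform(0.1, 0.7, n)      # peak ground acceleration (g)
    cpt_res = rng.uniform(2, 30, n)     # cone penetration resistance (MPa)
    X = np.column_stack([slope, gw_depth, pga, cpt_res])

    # Synthetic "observed" lateral displacement (m): stronger shaking, shallower
    # groundwater, looser soil and steeper slopes produce more movement
    y = 0.3 * pga * slope * np.exp(-0.15 * gw_depth) * (10.0 / cpt_res)
    y = np.clip(y + rng.normal(0, 0.05, n), 0, None)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
    ```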