More stories

  • Parylene photonics enable future optical biointerfaces

    Carnegie Mellon University’s Maysam Chamanzar and his team have invented an optical platform that could become the new standard in optical biointerfaces. He has dubbed this new field of optical technology “Parylene photonics,” demonstrated in a recent paper in Microsystems & Nanoengineering, a Nature partner journal.
    There is a growing and unfulfilled demand for optical systems for biomedical applications. Miniaturized and flexible optical tools are needed to enable reliable ambulatory and on-demand imaging and manipulation of biological events in the body. Integrated photonic technology has mainly evolved around developing devices for optical communications. The advent of silicon photonics was a turning point in bringing optical functionalities to the small form-factor of a chip.
    Research in this field has boomed in the past couple of decades. However, silicon is a rigid material, poorly suited to interacting with soft tissue in biomedical applications. The mechanical mismatch raises the risk of tissue damage and scarring, especially as respiration and other physiological processes cause soft tissue to undulate against an inflexible device.
    Chamanzar, an Assistant Professor of Electrical and Computer Engineering (ECE) and Biomedical Engineering, saw the pressing need for an optical platform tailored to biointerfaces with both optical capability and flexibility. His solution, Parylene photonics, is the first biocompatible and fully flexible integrated photonic platform ever made.
    To create this new photonic material class, Chamanzar’s lab designed ultracompact optical waveguides by fabricating silicone (PDMS), an organic polymer with a low refractive index, around a core of Parylene C, a polymer with a much higher refractive index. The contrast in refractive index allows the waveguide to pipe light effectively, while the materials themselves remain extremely pliant. The result is a platform that is flexible, can operate over a broad spectrum of light, and is just 10 microns thick — about 1/10 the thickness of a human hair.
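    As a rough back-of-envelope illustration (the index values below are common textbook approximations, not figures from the paper), the contrast between a Parylene C core and a PDMS cladding yields a large numerical aperture, which is what lets such a thin waveguide capture and confine light effectively:

    ```python
    # Hedged sketch: approximate numerical aperture of a Parylene C / PDMS waveguide.
    # The refractive indices are typical literature values, not the paper's figures.
    import math

    n_core = 1.64  # approximate refractive index of Parylene C
    n_clad = 1.43  # approximate refractive index of PDMS

    na = math.sqrt(n_core**2 - n_clad**2)          # numerical aperture of the waveguide
    theta = math.degrees(math.asin(min(na, 1.0)))  # acceptance half-angle in air

    print(f"NA ~ {na:.2f}, acceptance half-angle ~ {theta:.0f} degrees")
    ```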
    “We were using Parylene C as a biocompatible insulation coating for electrical implantable devices, when I noticed that this polymer is optically transparent. I became curious about its optical properties and did some basic measurements,” said Chamanzar. “I found that Parylene C has exceptional optical properties. This was the onset of thinking about Parylene photonics as a new research direction.”
    Chamanzar’s design was created with neural stimulation in mind, allowing for targeted stimulation and monitoring of specific neurons within the brain. Crucial to this is the creation of embedded 45-degree micromirrors. While prior optical biointerfaces have stimulated a large swath of brain tissue beyond what could be measured, these micromirrors create a tight overlap between the volume being stimulated and the volume being recorded. The micromirrors also enable integration of external light sources with the Parylene waveguides.

    ECE alumna Maya Lassiter (MS, ’19), who was involved in the project, said, “Optical packaging is an interesting problem to solve because the best solutions need to be practical. We were able to package our Parylene photonic waveguides with discrete light sources using accessible packaging methods, to realize a compact device.”
    The applications for Parylene photonics range far beyond optical neural stimulation, and could one day replace current technologies in virtually every area of optical biointerfaces. These tiny flexible optical devices can be inserted into the tissue for short-term imaging or manipulation. They can also be used as permanent implantable devices for long-term monitoring and therapeutic interventions.
    Additionally, Chamanzar and his team are considering possible uses in wearables. Parylene photonic devices placed on the skin could be used to conform to difficult areas of the body and measure pulse rate, oxygen saturation, blood flow, cancer biomarkers, and other biometrics. As further options for optical therapeutics are explored, such as laser treatment for cancer cells, the applications for a more versatile optical biointerface will only continue to grow.
    “The high index contrast between Parylene C and PDMS enables a low bend loss,” said ECE Ph.D. candidate Jay Reddy, who has been working on this project. “These devices retain 90% efficiency as they are tightly bent down to a radius of almost half a millimeter, conforming tightly to anatomical features such as the cochlea and nerve bundles.”
    Another unconventional possibility for Parylene photonics is in communication links, bringing Chamanzar’s pursuit full circle. Current chip-to-chip interconnects usually use rather inflexible optical fibers, and wherever flexibility is needed, signals must be transferred to the electrical domain, which significantly limits bandwidth. Flexible Parylene photonic cables offer a promising high-bandwidth alternative to both rigid fibers and bandwidth-limited electrical links, and could enable advances in optical interconnect design.
    “So far, we have demonstrated low-loss, fully flexible Parylene photonic waveguides with embedded micromirrors that enable input/output light coupling over a broad range of optical wavelengths,” said Chamanzar. “In the future, other optical devices such as microresonators and interferometers can also be implemented in this platform to enable a whole gamut of new applications.”
    With Chamanzar’s recent publication marking the debut of Parylene photonics, it is impossible to say just how far-reaching the effects of this technology could be. The implications, however, may well mark a new chapter in the development of optical biointerfaces, similar to what silicon photonics enabled in optical communications and processing.

  • Who's Tweeting about scientific research? And why?

    Although Twitter is best known for its role in political and cultural discourse, it has also become an increasingly vital tool for scientific communication. A new study published in the open access journal PLOS Biology decodes this record of social media engagement: researchers from the University of Washington School of Medicine, Seattle, show that Twitter users can be characterized in remarkably fine detail by mining a relatively untapped source of information, namely how those users’ followers describe themselves. The study reveals some exciting, and at times disturbing, patterns in how research is received and disseminated through social media.
    Scientists candidly tweet about their unpublished research not only to one another but also to a broader audience of engaged laypeople. When consumers of cutting-edge science tweet or retweet about studies they find interesting, they leave behind a real-time record of the impact that taxpayer-funded research is having within academia and beyond.
    The lead author of the study, Jedidiah Carlson at the University of Washington, explains that each user in a social network will tend to connect with other users who share similar characteristics (such as occupation, age, race, hobbies, or geographic location), a sociological concept formally known as “network homophily.” By tapping into the information embedded in the broader networks of users who tweet about a paper, Carlson and his coauthor, Kelley Harris, are able to describe the total audience of each paper as a composite of multiple interest groups that might indicate the study’s potential to produce intellectual breakthroughs as well as social, cultural, economic, or environmental impacts.
    Rather than sorting people into coarse groups such as “scientists” and “non-scientists,” which relies on Twitter users to describe themselves accurately in their platform biographies, Carlson was able to segment “scientists” into their specific research disciplines (such as evolutionary biology or bioinformatics), regardless of whether they mentioned these subdisciplines in their Twitter bios.
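    The flavor of that approach can be sketched in a few lines (a toy illustration only; the study’s actual pipeline is far more sophisticated, and every bio and keyword below is invented):

    ```python
    # Toy sketch: characterize one Twitter user's audience from follower bios.
    from collections import Counter

    # Invented follower bios for a single hypothetical user.
    follower_bios = [
        "PhD student in evolutionary biology, coffee addict",
        "bioinformatics | genomics | data science",
        "dog lover, amateur photographer",
        "evolutionary biology postdoc",
    ]

    # Crude keyword lexicon mapping bio terms to interest groups (assumed, not the paper's).
    lexicon = {
        "evolutionary biology": "evolutionary biologists",
        "bioinformatics": "bioinformaticians",
        "dog": "dog lovers",
    }

    audience = Counter()
    for bio in follower_bios:
        for term, group in lexicon.items():
            if term in bio.lower():
                audience[group] += 1

    # Report audience composition as fractions of matched followers.
    total = sum(audience.values())
    for group, n in audience.most_common():
        print(f"{group}: {n / total:.0%}")
    ```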
    The broader category of “non-scientists” can be automatically segmented into a multitude of groups, such as mental health advocates, dog lovers, video game developers, vegans, bitcoin investors, journalists, religious groups, and political constituencies. However, Carlson cautions that these indicators of diverse public engagement may not always be in line with scientists’ intended goals.
    Hundreds of papers were found to have Twitter audiences that were dominated by conspiracy theorists, white nationalists, or science denialists. In extreme cases, these audience sectors comprised more than half of all tweets referencing a given study, starkly illustrating the adage that science does not exist in a cultural or political vacuum.
    Particularly in light of the rampant misappropriation and politicization of scientific research throughout the COVID-19 pandemic, Carlson hopes that the results of his study might motivate scientists to keep a closer watch on the social media pulse surrounding their publications and intervene accordingly to guide their audiences towards productive and well-informed engagement.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • When does a second COVID-19 surge end? Look at the data

    Mathematicians have developed a framework to determine when regions enter and exit COVID-19 infection surge periods, providing a useful tool for public health policymakers to help manage the coronavirus pandemic.
    The first published paper on second-surge COVID-19 infections from US states suggests that policymakers should look for demonstrable turning points in data rather than stable or insufficiently declining infection rates before lifting restrictions.
    Mathematicians Nick James and Max Menzies have published what they believe is the first analysis of COVID-19 infection rates in US states to identify turning points in data that indicate when surges have started or ended.
    The new study by the Australian mathematicians appears today in the journal Chaos, published by the American Institute of Physics.
    “In some of the worst performing states, it seems that policymakers have looked for plateauing or slightly declining infection rates. Instead, health officials should look for identifiable local maxima and minima, showing when surges reach their peak and when they are demonstrably over,” said Nick James, a PhD student in the School of Mathematics and Statistics at the University of Sydney.
    In the study, the two mathematicians report a method to analyse COVID-19 case numbers for evidence of a first or second wave. The authors studied data from all 50 US states plus the District of Columbia for the seven-month period from 21 January to 31 July 2020. They found 31 states and the District of Columbia were experiencing a second wave as of the end of July.

    The two mathematicians have also applied the method to analyse infection rates in eight Australian states and territories using data from covidlive.com.au. While the Australian analysis has not been peer-reviewed, it does apply the peer-reviewed methodology. The analysis clearly identified Victoria as an outlier, as expected.
    “What the Victorian data shows is that cases are still coming down and the turning point — the local minimum — has not occurred yet,” Dr Menzies said. He said from a mathematical perspective at least, Victoria should “stay the course.”
    Dr Menzies, from the Yau Mathematical Sciences Center at Tsinghua University in Beijing, said: “Our approach allows for careful identification of the most and least successful US states at managing COVID-19.”
    The results show New York and New Jersey completely flattened their infection curves by the end of July after just a single surge. Thirteen states, including Georgia, California and Texas, have a continuing and rising single infection surge. Thirty-one states, including Florida and Ohio, had an initial surge followed by declining infections and then a second surge.
    Mr James said: “This is not a predictive model. It is an analytical tool that should assist policymakers in determining demonstrable turning points in COVID infections.”
    Methodology

    The method smoothes raw daily case count data to eliminate artificial low counts over weekends and even some negative numbers that occur when localities correct errors. After smoothing the data, a numerical technique is used to find peaks and troughs. From this, turning points can be identified.
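    A minimal sketch of the smoothing step (assuming a seven-day centered rolling mean; the paper’s exact smoother may differ):

    ```python
    # Clean and smooth raw daily case counts before searching for turning points.
    import pandas as pd

    raw = pd.Series([120, 90, -5, 300, 280, 150, 80, 310, 330, 290, 200, 180, 400, 390])
    clean = raw.clip(lower=0)  # negative counts arise when localities correct errors
    smooth = clean.rolling(window=7, center=True, min_periods=1).mean()
    print(smooth.round(1).tolist())
    ```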
    Dr Menzies said their analysis shows governments should neither allow new cases to increase nor reduce restrictions when case numbers have merely flattened.
    “A true turning point, where new cases are legitimately in downturn and not just exhibiting stable fluctuations, should be observed before relaxing any restrictions.”
    He said the analysis wasn’t just nice mathematics: by using a new measure between sets of turning points, the study also tackles a very topical problem, looking at state-by-state data.
    Mr James said that aggressively pushing infection rates down to a minimum seemed the best way to defeat a second surge.
    Peaks and Troughs
    To determine the peaks and troughs, the algorithm developed by the mathematicians determines that a turning point occurs when a falling curve surges upward or a rising curve turns downward. Only those sequences where the peak and trough amplitudes differ by a definite minimum amount are counted. Fluctuations can occur when a curve flattens for a while but continues to increase without going through a true downturn, so the method eliminates these false counts.
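    One way to implement that filtering, with SciPy’s “prominence” criterion standing in for the minimum amplitude difference (an illustration of the idea, not the authors’ exact algorithm):

    ```python
    # Detect surge peaks and troughs in a smoothed case-count curve.
    import numpy as np
    from scipy.signal import find_peaks

    t = np.linspace(0, 6 * np.pi, 200)
    smooth = 100 * (np.sin(t) + 1.2) + 5 * np.sin(15 * t)  # three surges plus small wiggles

    min_amplitude = 40  # ignore fluctuations smaller than this peak-trough difference
    peaks, _ = find_peaks(smooth, prominence=min_amplitude)     # surge maxima
    troughs, _ = find_peaks(-smooth, prominence=min_amplitude)  # surge minima (turning points)

    print("peak indices:", peaks.tolist(), "trough indices:", troughs.tolist())
    ```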
    Both from Australia, the two mathematicians have been best friends for 25 years. “But this year is the first time we have worked on problems together,” Mr James said.
    Mr James has a background in statistics and has worked for start-ups and hedge funds in Texas, Sydney, San Francisco and New York City. Dr Menzies is a pure mathematician, who completed his PhD at Harvard in 2019 and his undergraduate mathematics at the University of Cambridge.

  • Engineers pre-train AI computers to make them even more powerful

    In 2016, a supercomputer beat the world champion in Go, a complicated board game. How? By using reinforcement learning, a type of artificial intelligence whereby computers train themselves after being programmed with simple instructions. The computers learn from their mistakes and, step by step, become highly powerful.
    The main drawback to reinforcement learning is that it can’t be used in some real-life applications. That’s because in the process of training themselves, computers initially try just about anything and everything before eventually stumbling on the right path. This initial trial-and-error phase can be problematic for certain applications, such as climate-control systems where abrupt swings in temperature wouldn’t be tolerated.
    Learning the driver’s manual before starting the engine
    Engineers at CSEM (the Swiss Center for Electronics and Microtechnology) have developed an approach that overcomes this problem. They showed that computers can first be trained on extremely simplified theoretical models before being set to learn on real-life systems. When the computers start the machine-learning process on the real-life systems, they can draw on what they learned previously on the models, and can therefore get onto the right path quickly without going through a period of extreme fluctuations. The engineers’ research has just been published in IEEE Transactions on Neural Networks and Learning Systems.
    “It’s like learning the driver’s manual before you start a car,” says Pierre-Jean Alet, head of smart energy systems research at CSEM and a co-author of the study. “With this pre-training step, computers build up a knowledge base they can draw on so they aren’t flying blind as they search for the right answer.”
    Slashing energy use by over 20%
    The engineers tested their approach on a heating, ventilation and air conditioning (HVAC) system for a complex 100-room building using a three-step process. First, they trained a computer on a “virtual model” built from simple equations that roughly described the building’s behavior. Then they fed actual building data (temperature, how long blinds were open, weather conditions, etc.) into the computer to make the training more accurate. Finally, they let the computer run its reinforcement-learning algorithms to find the best way to manage the HVAC system.
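    The pretrain-then-refine idea can be sketched with a toy tabular Q-learning controller on a crude thermal model (everything below, from the environment to the reward, is invented for illustration and is far simpler than CSEM’s HVAC setup):

    ```python
    # Pre-train a Q-learning controller on a simple noise-free thermal model,
    # then continue learning on a "real" noisier system, reusing the Q-table so
    # early exploration on the real system is less erratic.
    import numpy as np

    rng = np.random.default_rng(0)

    N_TEMP = 21           # discretized room temperatures
    ACTIONS = [-1, 0, 1]  # heater down / hold / up (arbitrary units)
    TARGET = 10           # index of the desired setpoint

    def step(state, action, leak, noise):
        """One time step of a toy thermal model: drift toward the setpoint plus heater effect."""
        drift = int(round(-leak * (state - TARGET)))
        nxt = int(np.clip(state + action + drift + rng.integers(-noise, noise + 1), 0, N_TEMP - 1))
        return nxt, -abs(nxt - TARGET)  # reward penalizes deviation from the setpoint

    def train(q, episodes, leak, noise, eps):
        for _ in range(episodes):
            s = int(rng.integers(0, N_TEMP))
            for _ in range(50):
                a = int(rng.integers(0, 3)) if rng.random() < eps else int(np.argmax(q[s]))
                s2, r = step(s, ACTIONS[a], leak, noise)
                q[s, a] += 0.1 * (r + 0.95 * np.max(q[s2]) - q[s, a])  # Q-learning update
                s = s2
        return q

    q = np.zeros((N_TEMP, 3))
    q = train(q, 2000, leak=0.3, noise=0, eps=0.3)  # step 1: pre-train on the simple model
    q = train(q, 200, leak=0.2, noise=1, eps=0.05)  # step 2: refine on the "real" noisy system
    print("greedy action per temperature:", np.argmax(q, axis=1) - 1)
    ```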
    Broad applications
    This discovery could open up new horizons for machine learning by expanding its use to applications where large fluctuations in operating parameters would have significant financial or security costs.

    Story Source:
    Materials provided by Swiss Center for Electronics and Microtechnology – CSEM. Note: Content may be edited for style and length.

  • Physicists develop printable organic transistors

    High-definition roll-up televisions and foldable smartphones may soon cease to be unaffordable luxury goods admired only at international electronics trade fairs. High-performance organic transistors are a key requirement for the mechanically flexible electronic circuits these applications demand. However, conventional horizontal organic thin-film transistors are very slow due to hopping transport in organic semiconductors, so they cannot be used for applications requiring high frequencies. Especially for logic circuits with low power consumption, such as those used for Radio Frequency Identification (RFID), it is essential to develop transistors that offer high operation frequencies as well as adjustable device characteristics (e.g., threshold voltage). The research group Organic Devices and Systems (ODS) at the Dresden Integrated Center for Applied Photophysics (IAPP) of the Institute of Applied Physics, headed by Dr. Hans Kleemann, has now succeeded in realizing such novel organic devices.
    “Up to now, vertical organic transistors have been seen as lab curiosities, thought too difficult to integrate into an electronic circuit. However, as shown in our publication, vertical organic transistors with two independent control electrodes are perfectly suited to realizing complex logic circuits while keeping the main benefit of vertical transistor devices, namely the high switching frequency,” says Dr. Hans Kleemann.
    The vertical organic transistors with two independent control electrodes are characterized by fast switching (switching times of a few nanoseconds) and an adjustable threshold voltage. Thanks to these developments, even single transistors can be used to implement different logic operations (AND, NOT, NAND). Furthermore, the adjustable threshold voltage ensures signal integrity (noise margin) and low power consumption.
    With this, the research group has set a milestone with regard to the vision of flexible and printable electronics. In the future, these transistors could make it possible to realize even sophisticated electronic functions such as wireless communication (RFID) or high-resolution flexible displays completely with organic components, thus completely dispensing with silicon-based electronic components.

    Story Source:
    Materials provided by Technische Universität Dresden. Note: Content may be edited for style and length.

  • New theory predicts movement of different animals using sensing to search

    All animals great and small live every day in an uncertain world. Whether you are a human being or an insect, you rely on your senses to help you navigate and survive in your world. But what drives this essential sensing?
    Unsurprisingly, animals move their sensory organs, such as eyes, ears and noses, while they are searching. Picture a cat swiveling its ears to capture important sounds without needing to move its body. But the precise positions and orientations these sense organs take over time during behavior are not intuitive, and current theories do not predict them well.
    Now a Northwestern University research team has developed a new theory that can predict the movement of an animal’s sensory organs while searching for something vital to its life.
    The researchers applied the theory to four different species spanning three different senses (including vision and smell) and found that it predicted the observed sensing behavior of each animal. The theory could be used to improve the performance of robots collecting information, and might be applied to the development of autonomous vehicles, where response to uncertainty is a major challenge.
    “Animals make their living through movement,” said Malcolm A. MacIver, who led the research. “To find food and mates and to identify threats, they need to move. Our theory provides insight into how animals gamble on how much energy to expend to get the useful information they need.”
    MacIver is a professor of biomedical and mechanical engineering in Northwestern’s McCormick School of Engineering and a professor of neurobiology (courtesy appointment) in the Weinberg College of Arts and Sciences.

    The new theory, called energy-constrained proportional betting, provides a unifying explanation for many enigmatic motions of sensory organs that have been previously measured. The algorithm that follows from the theory generates simulated sensory-organ movements that show good agreement with actual sensory-organ movements from fish, mammals and insects.
    The study was published today (Sept. 22) by the journal eLife. The research provides a bridge between the literature on animal movement and energetics and information theory-based approaches to sensing.
    MacIver is the corresponding author. Chen Chen, a Ph.D. student in MacIver’s lab, is the first author, and Todd D. Murphey, professor of mechanical engineering at McCormick, is a co-author.
    The algorithm shows that animals trade the energetically costly operation of movement against the gamble that locations in space will prove informative. The amount of energy they are willing to gamble (ultimately, the food they need to eat), the researchers show, is proportional to the expected informativeness of those locations.
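    A toy version of this “proportional betting” can be written down directly (my own one-dimensional construction, not the paper’s algorithm): the searcher samples its next sensing location in proportion to how informative that location currently looks, discounted by the energy cost of traveling there.

    ```python
    # 1-D proportional-betting search: bet sensing positions on the current belief.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100                       # discretized 1-D arena
    target = 63                   # true (unknown) target location
    belief = np.full(N, 1.0 / N)  # prior over the target location
    pos = 0                       # current sensor position

    for _ in range(40):
        # Score each location by belief mass, discounted by travel energy cost.
        cost = 0.02 * np.abs(np.arange(N) - pos)
        score = np.maximum(belief - cost, 1e-12)
        pos = int(rng.choice(N, p=score / score.sum()))  # proportional bet

        # Noisy sensor: fires much more often within 5 cells of the target.
        hit = rng.random() < (0.9 if abs(pos - target) < 5 else 0.1)

        # Bayesian update of the belief from the observation.
        like = np.where(np.abs(np.arange(N) - pos) < 5, 0.9, 0.1)
        belief *= like if hit else 1.0 - like
        belief /= belief.sum()

    print("most probable location:", int(np.argmax(belief)), "| true location:", target)
    ```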
    “While most theories predict how an animal will behave when it largely already knows where something is, ours is a prediction for when the animal knows very little — a situation common in life and critical to survival,” Murphey said.
    The study focuses on South American gymnotid electric fish, using data from experiments performed in MacIver’s lab, but also analyzes previously published datasets on the blind eastern American mole, the American cockroach and the hummingbird hawkmoth. The three senses were electrosense (electric fish), vision (moth) and smell (mole and roach).
    The theory provides a unified solution to the problem of not spending too much time and energy moving around to sample information, while getting enough information to guide movement during tracking and related exploratory behaviors.
    “When you look at a cat’s ears, you’ll often see them swiveling to sample different locations of space,” MacIver said. “This is an example of how animals are constantly positioning their sensory organs to help them absorb information from the environment. It turns out there is a lot going on below the surface in the movement of sense organs like ears and eyes and noses.”
    The algorithm is a modified version of one Murphey and MacIver developed five years ago in their bio-inspired robotics work. They took observations of animal search strategies and developed algorithms to have robots mimic those animal strategies. The resulting algorithms gave Murphey and MacIver concrete predictions for how animals might behave when searching for something, leading to the current work.

    Story Source:
    Materials provided by Northwestern University. Original written by Megan Fellman. Note: Content may be edited for style and length.

  • Personal interactions are important drivers of STEM identity in girls

    As head of the educational outreach arm of the Florida State University-headquartered National High Magnetic Field Laboratory, Roxanne Hughes has overseen dozens of science camps over the years, including numerous sessions of the successful SciGirls Summer Camp she co-organizes with WFSU.
    In a new paper published in the Journal of Research in Science Teaching, Hughes and her colleagues took a much closer look at one of those camps, a coding camp for middle school girls.
    They found that nuanced interactions between teachers and campers as well as among the girls themselves impacted how girls viewed themselves as coders.
    The MagLab offers both co-ed camps and camps for girls, covering science in general as well as coding in particular. Hughes, director of the MagLab’s Center for Integrating Research and Learning, wanted to study the coding camp because computer science is the only STEM field where the representation of women has actually declined since 1990.
    “It’s super gendered in how it has been advertised, beginning with the personal computer,” Hughes said. “And there are stereotypes behind what is marketed to girls versus what is marketed to boys. We wanted to develop a conceptual framework focusing specifically on coding identity — how the girls see themselves as coders — to add to existing research on STEM identity more broadly.”
    This specific study focused on the disparate experiences of three girls in the camp. The researchers looked at when and how the girls were recognized for their coding successes during the camp, and how teachers and peers responded when the girls demonstrated coding skills.

    “Each girl received different levels of recognition, which affected their coding identity development,” Hughes said. “We found that educators play a crucial role in amplifying recognition, which then influences how those interactions reinforce their identities as coders.”
    Positive praise often resulted in a girl pursuing more challenging activities, for example, strengthening her coding identity.
    Exactly how teachers praised the campers played a role in how that recognition impacted the girls. Being praised in front of other girls, for example, had more impact than a discreet pat on the back. More public praise prompted peer recognition, which further boosted a girl’s coding identity.
    The type of behavior recognized by teachers also appeared to have different effects. A girl praised for demonstrating a skill might feel more like a coder than one lauded for her persistence, for example. Lack of encouragement was also observed: One girl who sought attention for her coding prowess went unacknowledged, while another who was assisting her peers received lots of recognition, responses that seem to play into gender stereotypes, Hughes said. Even in a camp explicitly designed to bolster girls in the sciences, prevailing stereotypes can undermine best intentions.
    “To me, the most interesting piece was the way in which educators still carry the general gender stereotypes, and how that influenced the behavior they rewarded,” Hughes said. “They recognized the girl who was being a team player, checking in on how everyone was feeling — all very stereotypically feminine traits that are not necessarily connected to or rewarded in computing fields currently.”
    Messaging about science is especially important for girls in middle school, Hughes said. At that developmental stage, their interest in STEM disciplines begins to wane as they start to get the picture that those fields clash with their other identities.

    The MagLab study focused on three girls — one Black, one white and one Latina — as a means to develop a framework for future researchers to understand coding identity. Hughes says this is too small a data set to tease out definitive conclusions about roles of race and gender, but the study does raise many questions for future researchers to examine with the help of these findings.
    “The questions that come out of the study to me are so fascinating,” Hughes said. “Like, how would these girls be treated differently if they were boys? How do the definitions of ‘coder’ that the girls develop in the camp open or constrain opportunities for them to continue this identity work as they move forward?”
    The study has also prompted Hughes to think about how to design more inclusive, culturally responsive camps at the MagLab.
    “Even though this is a summer camp, there is still the same carryover of stereotypes and sexism and racism from the outer world into this space,” she said. “How can we create a space where girls can behave differently from the social gendered expectations?”
    The challenge will be to show each camper that she and her culture are valued in the camp and to draw connections between home and camp that underscore that. “We need to show that each of the girls has value — in that camp space and in science in general,” Hughes said.
    Joining Hughes as co-authors on the study were Jennifer Schellinger of Florida State University and Kari Roberts of the MagLab.
    The National High Magnetic Field Laboratory is funded by the National Science Foundation and the State of Florida, and has operations at Florida State University, University of Florida and Los Alamos National Laboratory.

  • The impact of human mobility on disease spread

    Due to continual improvements in transportation technology, people travel more extensively than ever before. Although this strengthened connection between faraway countries comes with many benefits, it also poses a serious threat to disease control and prevention. When infected humans travel to regions that are free of their particular contagions, they might inadvertently transmit their infections to local residents and cause disease outbreaks. This process has occurred repeatedly throughout history; some recent examples include the SARS outbreak in 2003, the H1N1 influenza pandemic in 2009, and — most notably — the ongoing COVID-19 pandemic.
    Imported cases challenge the ability of nonendemic countries — countries where the disease in question does not occur regularly — to entirely eliminate the contagion. When combined with additional factors such as genetic mutation in pathogens, this issue makes the global eradication of many diseases exceedingly difficult, if not impossible. Therefore, reducing the number of infections is generally a more feasible goal. But to achieve control of a disease, health agencies must understand how travel between separate regions impacts its spread.
    In a paper publishing on Tuesday in the SIAM Journal on Applied Mathematics, Daozhou Gao of Shanghai Normal University investigated the way in which human dispersal affects disease control and the total extent of an infection’s spread. Few previous studies have explored the impact of human movement on infection size or disease prevalence — defined as the proportion of individuals in a population that are infected with a specific pathogen — in different regions. This area of research is especially pertinent during severe disease outbreaks, when governing leaders may dramatically reduce human mobility by closing borders and restricting travel. During these times, it is essential to understand how limiting people’s movements affects the spread of disease.
    To examine the spread of disease throughout a population, researchers often use mathematical models that sort individuals into multiple distinct groups, or “compartments.” In his study, Gao utilized a particular type of compartmental model called the susceptible-infected-susceptible (SIS) patch model. He divided the population in each patch — a group of people such as a community, city, or country — into two compartments: infected people who currently have the designated illness, and people who are susceptible to catching it. Human migration then connects the patches. Gao assumed that the susceptible and infected subpopulations spread out at the same rate, which is generally true for diseases like the common cold that often only mildly affect mobility.
    Each patch in Gao’s SIS model has a certain infection risk that is represented by its basic reproduction number (R0) — the quantity that predicts how many cases will be caused by the presence of a single contagious person within a susceptible population. “The larger the reproduction number, the higher the infection risk,” Gao said. “So the patch reproduction number of a higher-risk patch is assumed to be higher than that of a lower-risk patch.” However, this number only measures the initial transmission potential; it can rarely predict the true extent of infection.
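    For reference, in the textbook single-patch SIS model (standard notation assumed here, not taken from Gao’s paper), the basic reproduction number is simply the transmission rate over the recovery rate:

    ```latex
    % Basic reproduction number of a single SIS patch:
    R_0 = \frac{\beta}{\gamma}, \qquad
    \beta = \text{transmission rate}, \quad
    \gamma = \text{recovery rate}.
    ```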
    Gao first used his model to investigate the effect of human movement on disease control by comparing the total infection sizes that result when individuals disperse quickly versus slowly. He found that if all patches share the same recovery rate, large dispersal results in more infections than small dispersal. Surprisingly, an increase in dispersal can actually reduce R0 while still increasing the total number of infections.
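    The flavor of that comparison can be reproduced with a minimal two-patch SIS simulation (the formulation and parameter values below are my own illustrative choices, not Gao’s):

    ```python
    # Two-patch SIS model with symmetric dispersal d between patches.
    # Patch i: dS_i/dt = -beta_i*S_i*I_i + gamma_i*I_i + d*(S_j - S_i)
    #          dI_i/dt =  beta_i*S_i*I_i - gamma_i*I_i + d*(I_j - I_i)
    import numpy as np
    from scipy.integrate import solve_ivp

    beta = np.array([0.3, 0.15])  # transmission rates (patch 1 is higher-risk)
    gamma = np.array([0.1, 0.1])  # equal recovery rates, so patch R0 values are 3 and 1.5

    def sis(t, y, d):
        S, I = y[:2], y[2:]
        dS = -beta * S * I + gamma * I + d * (S[::-1] - S)
        dI = beta * S * I - gamma * I + d * (I[::-1] - I)
        return np.concatenate([dS, dI])

    y0 = [0.99, 0.99, 0.01, 0.01]  # mostly susceptible populations, small seed in each patch
    for d in (0.001, 1.0):         # slow versus fast dispersal
        sol = solve_ivp(sis, (0, 3000), y0, args=(d,), rtol=1e-8)
        I_end = sol.y[2:, -1]
        print(f"d={d}: infections per patch = {I_end.round(3)}, total = {I_end.sum():.3f}")
    ```

    With these assumed parameters, the fast-dispersal run settles at a larger total infection size, echoing the finding above.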
    The SIS patch model can also help elucidate how dispersal impacts the distribution of infections and prevalence of the disease within each patch. Without diffusion between patches, a higher-risk patch will always have a higher prevalence of disease, but Gao wondered if the same was true when people can travel to and from that high-risk patch. The model revealed that diffusion can decrease infection size in the highest-risk patch since it exports more infections than it imports, but this consequently increases infections in the patch with the lowest risk. However, it is never possible for the highest-risk patch to have the lowest disease prevalence.
    Using a numerical simulation based on the common cold — the attributes of which are well studied — Gao delved deeper into human migration’s impact on the total size of an infection. When he incorporated just two patches, his model exhibited a wide variety of behaviors under different environmental conditions. For example, the dispersal of humans often led to a larger total infection size than no dispersal, but rapid human scattering in one scenario actually reduced the infection size. Under different conditions, small dispersal was detrimental but large dispersal ultimately proved beneficial to disease management. Gao completely classified the combinations of mathematical parameters for which dispersal causes more infections than a lack of dispersal in a two-patch environment. However, the situation becomes more complex if the model incorporates more than two patches.
    Further investigation into Gao’s SIS patch modeling approach could reveal more nuanced information about the complexities of travel restrictions’ impact on disease spread, which is relevant to real-world situations — such as border closures during the COVID-19 pandemic. “To my knowledge, this is possibly the first theoretical work on the influence of human movement on the total number of infections and their distribution,” Gao said. “There are numerous directions to improve and extend the current work.” For example, future work could explore the outcome of a ban on only some travel routes, such as when the U.S. banned travel from China to impede the spread of COVID-19 but failed to block incoming cases from Europe. Continuing research on these complicated effects may help health agencies and governments develop informed measures to control dangerous diseases.

    Story Source:
    Materials provided by Society for Industrial and Applied Mathematics. Original written by Jillian Kunze. Note: Content may be edited for style and length.