More stories

  • Method to create colloidal diamonds developed

    The colloidal diamond has been a dream of researchers since the 1990s. These structures — stable, self-assembled formations of minuscule materials — have the potential to make light waves as useful as electrons in computing, and hold promise for a host of other applications. But while the idea of colloidal diamonds was developed decades ago, no one was able to reliably produce the structures. Until now.
    Researchers led by David Pine, professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering and professor of physics at NYU, have devised a new process for the reliable self-assembly of colloids in a diamond formation that could lead to cheap, scalable fabrication of such structures. The discovery, detailed in “Colloidal Diamond,” appearing in the September 24 issue of Nature, could open the door to highly efficient optical circuits leading to advances in optical computers and lasers, light filters that are more reliable and cheaper to produce than ever before, and much more.
    Pine and his colleagues, including lead author Mingxin He, a postdoctoral researcher in the Department of Physics at NYU, and corresponding author Stefano Sacanna, associate professor of chemistry at NYU, have been studying colloids and the possible ways they can be structured for decades. These materials, made up of spheres hundreds of times smaller than the diameter of a human hair, can be arranged in different crystalline shapes depending on how the spheres are linked to one another. Each colloid attaches to another using strands of DNA glued to the surfaces of the colloids, which function as a kind of molecular Velcro. When colloids collide with each other in a liquid bath, the DNA snags and the colloids are linked. Depending on where the DNA is attached, the colloids can spontaneously assemble into complex structures.
    This process has been used to create strings of colloids and even colloids in a cubic formation. But these structures did not produce the Holy Grail of photonics — a band gap for visible light. Much as a semiconductor filters out electrons in a circuit, a band gap filters out certain wavelengths of light. Filtering light in this way can be reliably achieved by colloids if they are arranged in a diamond formation, a process deemed too difficult and expensive to perform at commercial scale.
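    For readers who want to picture the target geometry, here is a minimal sketch (in Python, with illustrative parameters) that generates the sites of a diamond-cubic lattice: two interpenetrating face-centered-cubic lattices in which every interior site has four tetrahedrally arranged neighbors. This is the arrangement the self-assembling colloids must reproduce; the code is for illustration only and is not the researchers’ tooling.
    ```python
    import numpy as np

    def diamond_lattice(n=2, a=1.0):
        """Generate site coordinates of a diamond-cubic lattice: an FCC
        lattice plus a second FCC lattice offset by (1/4, 1/4, 1/4)a.
        Each interior site ends up with four tetrahedrally arranged
        neighbors, the geometry the DNA-linked colloids must adopt."""
        fcc = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
        basis = np.vstack([fcc, fcc + 0.25])   # two interpenetrating FCC lattices
        cells = np.array([[i, j, k] for i in range(n)
                                    for j in range(n)
                                    for k in range(n)])
        return (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a

    sites = diamond_lattice(n=2)
    # Nearest-neighbor distance in diamond is sqrt(3)/4 * a, about 0.433 a.
    d = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    print(sites.shape, np.round(np.min(d[d > 0]), 3))
    ```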
    “There’s been a great desire among engineers to make a diamond structure,” said Pine. “Most researchers had given up on it, to tell you the truth — we may be the only group in the world who is still working on this. So I think the publication of the paper will come as something of a surprise to the community.”
    The investigators, including Etienne Ducrot, a former postdoc at NYU Tandon, now at the Centre de Recherche Paul Pascal — CNRS, Pessac, France; and Gi-Ra Yi of Sungkyunkwan University, Suwon, South Korea, discovered that they could use a steric interlock mechanism that would spontaneously produce the necessary staggered bonds to make this structure possible. When these pyramidal colloids approached each other, they linked in the necessary orientation to generate a diamond formation. Rather than going through the painstaking and expensive process of building these structures through the use of nanomachines, this mechanism allows the colloids to structure themselves without the need for outside interference. Furthermore, the diamond structures are stable, even when the liquid they form in is removed.
    The discovery was made because He, a graduate student at NYU Tandon at the time, noticed an unusual feature of the colloids he was synthesizing in a pyramidal formation. He and his colleagues drew out all of the ways these structures could be linked. When they happened upon a particular interlinked structure, they realized they had hit upon the proper method. “After creating all these models, we saw immediately that we had created diamonds,” said He.
    “Dr. Pine’s long-sought demonstration of the first self-assembled colloidal diamond lattices will unlock new research and development opportunities for important Department of Defense technologies which could benefit from 3D photonic crystals,” said Dr. Evan Runnerstrom, program manager, Army Research Office (ARO), an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
    Runnerstrom explained that potential future advances include high-efficiency lasers with reduced weight and energy demands for precision sensors and directed energy systems, as well as precise control of light for 3D integrated photonic circuits and optical signature management.
    “I am thrilled with this result because it wonderfully illustrates a central goal of ARO’s Materials Design Program — to support high-risk, high-reward research that unlocks bottom-up routes to creating extraordinary materials that were previously impossible to make.”
    The team, which also includes John Gales, a graduate student in physics at NYU, and Zhe Gong, a postdoc at the University of Pennsylvania and formerly a graduate student in chemistry at NYU, is now focused on seeing how these colloidal diamonds can be used in a practical setting. They are already creating materials from their new structures that can filter out optical wavelengths, in order to prove their usefulness in future technologies.
    This research was supported by the US Army Research Office under award number W911NF-17-1-0328. Additional funding was provided by the National Science Foundation under award number DMR-1610788.

  • Global warming may lead to practically irreversible Antarctic melting

    How is melting a continent-sized ice sheet like stirring milk into coffee? Both are, for all practical purposes, irreversible.
    In a new study published in the Sept. 24 Nature, researchers outline a series of temperature-related tipping points for the Antarctic Ice Sheet. Once each tipping point is reached, changes to the ice sheet and subsequent melting can’t be truly reversed, even if temperatures drop back down to current levels, the scientists say.
    The full mass of ice sitting on top of Antarctica holds enough water to create about 58 meters of sea level rise. Although the ice sheet won’t fully collapse tomorrow or even in the next century, Antarctic ice loss is accelerating (SN: 6/13/18). So scientists are keen to understand the processes by which such a collapse might occur.
    “What we’re really interested in is the long-term stability” of the ice, says Ricarda Winkelmann, a climate scientist at Potsdam Institute for Climate Impact Research in Germany. In the new study, Winkelmann and her colleagues simulated how future temperature increases can lead to changes across Antarctica in the interplay between ice, oceans, atmosphere and land.
    In addition to direct melting due to warming, numerous processes linked to climate change can speed up overall melting, called positive feedbacks, or slow it down, known as negative feedbacks.
    For example, as the tops of the ice sheets slowly melt down to lower elevations, the air around them becomes progressively warmer, speeding up melting. Warming temperatures also soften the ice itself, so that it slides more quickly toward the sea. And ocean waters that have absorbed heat from the atmosphere can transfer that heat to the vulnerable underbellies of Antarctic glaciers jutting into the sea, eating away at the buttresses of ice that hold the glaciers back (SN: 9/11/20). The West Antarctic Ice Sheet is particularly vulnerable to such ocean interactions — but warm waters are also threatening sections of the East Antarctic Ice Sheet, such as Totten Glacier (SN: 11/1/17).
    In addition to these positive feedbacks, climate change can produce some negative feedbacks that delay the loss of ice. For example, warmer atmospheric temperatures also evaporate more ocean water, adding moisture to the air and producing increased snowfall (SN: 4/30/20).
    The new study suggests that below 1 degree Celsius of warming relative to preindustrial times, increased snowfall slightly increases the mass of ice on the continent, briefly outpacing overall losses. But that’s where the good news ends. Simulations suggest that after about 2 degrees Celsius of warming, the West Antarctic Ice Sheet will become unstable and collapse, primarily due to its interactions with warm ocean waters, increasing sea levels by more than 2 meters. That’s a warming target that the signatories to the 2015 Paris Agreement pledged not to exceed, but which the world is on track to surpass by 2100 (SN: 11/26/2019).
    As the planet continues to warm, some East Antarctic glaciers will follow suit. At 6 degrees Celsius of warming, “we reach a point where surface processes become dominant,” Winkelmann says. In other words, the ice surface is now at a low enough elevation to accelerate melting. Between 6 and 9 degrees of warming, more than 70 percent of the total ice mass in Antarctica is lost, corresponding to an eventual sea level rise of more than 40 meters, the team found.
    Those losses in ice can’t be regained, even if temperatures return to preindustrial levels, the study suggests. The simulations indicate that for the West Antarctic Ice Sheet to regrow to its modern extent, temperatures would need to drop to at least 1 degree Celsius below preindustrial times.
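    That one-way behavior is hysteresis, and a deliberately crude toy model makes the idea concrete. The thresholds below are taken from the article (collapse beginning near 2 degrees Celsius of warming, regrowth only at about 1 degree Celsius below preindustrial temperatures); the relaxation dynamics are invented for illustration and bear no relation to the study’s ice-sheet simulations:
    ```python
    import numpy as np

    # Toy model of West Antarctic ice volume (arbitrary units).
    # Thresholds from the article; dynamics invented for illustration.
    COLLAPSE_T, REGROW_T = 2.0, -1.0

    def step(volume, temp, rate=0.05):
        if temp >= COLLAPSE_T:    # past the tipping point: ice drains away
            target = 0.0
        elif temp <= REGROW_T:    # cold enough for the sheet to regrow
            target = 1.0
        else:                     # in between: the sheet holds what it has
            target = volume
        return volume + rate * (target - volume)

    # Ramp temperature up to +3 degrees C and back down to today's level.
    temps = np.concatenate([np.linspace(0, 3, 200), np.linspace(3, 0, 200)])
    volume = 1.0
    for temp in temps:
        volume = step(volume, temp)

    # The temperature path returns to its start, but the ice volume does
    # not: the loop fails to close, which is the signature of hysteresis.
    print(f"final volume after the round trip: {volume:.3f}")
    ```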
    “What we lose might be lost forever,” Winkelmann says.
    There are other possible feedback mechanisms, both positive and negative, that weren’t included in these simulations, Winkelmann adds — either because the mechanisms are negligible or because their impacts aren’t yet well understood. These include interactions with ocean-climate patterns such as the El Niño Southern Oscillation and with ocean circulation patterns, including the Atlantic Meridional Overturning Circulation.
    Previous research suggested that meltwater from the Greenland and Antarctic ice sheets might also play complicated feedback roles. Nicholas Golledge, a climate scientist with Victoria University of Wellington in New Zealand, reported in Nature in 2019 that flows of Greenland meltwater can slow ocean circulation in the Atlantic, while cold, fresh Antarctic meltwater can act like a seal on the surface ocean around the continent, trapping warmer, saltier waters below, where they can continue to eat away at the underbelly of glaciers.
    In a separate study published Sept. 23 in Science Advances, Shaina Sadai, a climate scientist at the University of Massachusetts Amherst, and her colleagues also examined the impact of Antarctic meltwater. In simulations that look out to the year 2250, the researchers found that in addition to a cool meltwater layer trapping warm water below it, that surface layer of freshwater would exert a strong cooling effect that could boost the volume of sea ice around Antarctica, which would in turn also keep the air there colder.
    A large plug of such meltwater, such as due to the West Antarctic Ice Sheet’s sudden collapse, could even briefly slow global warming, the researchers found. But that boon would come at a terrible price: rapid sea level rise, Sadai says. “This is not good news,” she adds. “We do not want a delayed surface temperature rise at the cost of coastal communities.”
    Because the volume and impact of meltwater is still uncertain, Winkelmann’s team didn’t include this factor. Robert DeConto, an atmospheric scientist also at the University of Massachusetts Amherst and a coauthor on the Science Advances study, notes that the effect depends on how scientists choose to simulate how the ice breaks apart. The study’s large meltwater volumes are the result of a controversial idea known as the marine ice-cliff hypothesis, which suggests that in a few centuries, tall ice cliffs in Antarctica might become brittle enough to suddenly crumble into the ocean like dominoes, raising sea levels catastrophically (SN: 2/6/19).
    Despite lingering uncertainties over the magnitude of feedbacks, one emerging theme — highlighted by the Nature paper — is consistent, DeConto says: Once the ice is lost, we can’t go back.
    “Even if we get our act together and reduce emissions dramatically, we will have already put a lot of heat into the ocean,” he adds. For ice to begin to grow back, “we’ll have to go back to a climate that’s colder than at the beginning of the Industrial Revolution, sort of like the next ice age. And that’s sobering.”

  • Magnetic 'T-Budbots' made from tea plants kill and clean biofilm

    Biofilms — microbial communities that form slimy layers on surfaces — are difficult to treat and remove, often because the microbes release molecules that block the entry of antibiotics and other therapies. Now, researchers reporting in ACS Applied Materials & Interfaces have made magnetically propelled microbots derived from tea buds, which they call “T-Budbots,” that can dislodge biofilms, release an antibiotic to kill bacteria, and clean away the debris. A video of the T-Budbots is linked below.
    Many hospital-acquired infections involve bacterial biofilms that form on catheters, joint prostheses, pacemakers and other implanted devices. These microbial communities, which are often resistant to antibiotics, can slow healing and cause serious medical complications. Current treatment includes repeated high doses of antibiotics, which can have side effects, or in some cases, surgical replacement of the infected device, which is painful and costly. Dipankar Bandyopadhyay and colleagues wanted to develop biocompatible microbots that could be controlled with magnets to destroy biofilms and then scrub away the mess. The team chose Camellia sinensis tea buds as the raw material for their microbots because the buds are porous, non-toxic, inexpensive and biodegradable. Tea buds also contain polyphenols, which have antimicrobial properties.
    The researchers ground some tea buds and isolated porous microparticles. Then, they coated the microparticles’ surfaces with magnetite nanoparticles so that they could be controlled by a magnet. Finally, the antibiotic ciprofloxacin was embedded within the porous structures. The researchers showed that the T-Budbots released the antibiotic primarily under acidic conditions, which occur in bacterial infections. The team then added the T-Budbots to bacterial biofilms in dishes and magnetically steered them. The microbots penetrated the biofilm, killed the bacteria and cleaned the debris away, leaving a clear path in their wake. Degraded remnants of the biofilm adhered to the microbots’ surfaces. The researchers note that this was a proof-of-concept study, and further optimization is needed before the T-Budbots could be deployed to destroy biofilms in the human body.
    Video: https://www.youtube.com/watch?v=-_GxUTO0qGI&pp=QAA%3D

    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Meditation for mind-control

    A brain-computer interface (BCI) is an apparatus that allows an individual to control a machine or computer directly from their brain. Non-invasive means of control, like electroencephalogram (EEG) readings taken through the skull, are safe and convenient compared to riskier invasive methods using a brain implant, but they take longer to learn, and users ultimately vary in proficiency.
    Bin He and collaborators conducted a large-scale human study enrolling subjects in a weekly, eight-week course in simple, widely practiced meditation techniques to test its effect as a potential training tool for BCI control. A total of 76 people participated in the study, each randomly assigned to the meditation group or to the control group, which had no preparation during those eight weeks. Up to 10 sessions of BCI study were conducted with each subject. He’s work shows that people given just eight lessons in mindfulness-based attention and training (MBAT) demonstrated significant advantages compared to those with no prior meditation training, both in their initial ability to control BCIs and in the time it took for them to achieve full proficiency.
    After subjects in the MBAT group completed their training course, they, along with the control group, were charged with learning to control a simple BCI system by navigating a cursor across a computer screen using their thoughts. This required them to concentrate their focus and visualize the movement of the cursor in their minds. Throughout the process, He’s team monitored their performance and brain activity via EEG.
    As noted above, the team found that those with training in MBAT were more successful in controlling the BCI, both initially and over time. Interestingly, the researchers found that differences in brain activity between the two sample groups corresponded directly with their success. The meditation group showed a significantly enhanced capability to modulate their alpha rhythm, the activity pattern monitored by the BCI system to mentally control the movement of a computer cursor.
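    For the curious, a rough Python sketch of how alpha-band power can be estimated from a single EEG channel follows. It mirrors the kind of feature such a BCI tracks, but it is not the lab’s actual pipeline, and the sampling rate and band edges are typical values rather than the study’s:
    ```python
    import numpy as np
    from scipy.signal import welch

    def alpha_power(eeg, fs=250.0, band=(8.0, 12.0)):
        """Estimate alpha-band (~8-12 Hz) power of one EEG channel with
        Welch's method. Sampling rate and band edges are typical values,
        not taken from the study."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])

    # Synthetic demo: a 10 Hz "alpha" oscillation buried in noise.
    fs = 250.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, t.size)
    print(f"alpha-band power: {alpha_power(eeg, fs):.2f}")
    ```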
    His findings are very important for the process of BCI training and the overall feasibility of non-invasive BCI control via EEG. While prior work from his group has shown that long-term meditators were better able to overcome the difficulty of learning non-invasive mind control, this work shows that just a short period of MBAT training can significantly improve a subject’s skill with a BCI. This suggests that education in MBAT could provide a significant addition to BCI training. “Meditation has been widely practiced for well-being and improving health,” said He. “Our work demonstrates that it can also enhance a person’s mental power for mind control, and may facilitate broad use of noninvasive brain-computer interface technology.”
    It could also inform neuroscientists and clinicians working in BCI design and maintenance. A thorough understanding of the brain is crucial for creating the machine learning algorithms BCIs use to interpret brain signals. This knowledge is especially important for BCI recalibration, which can be time-consuming and is frequently necessary for non-invasive BCIs.
    The work of He and his team presents a new application for a well-known and widely practiced form of meditation, and may even offer insights into the neurological effects of meditation and how it may be adapted for better BCI training. This study offers novel information for BCI researchers and presents a new tool for both understanding the brain and preparing subjects to use a BCI.

    Story Source:
    Materials provided by College of Engineering, Carnegie Mellon University. Original written by Dan Carroll. Note: Content may be edited for style and length.

  • Controlling ultrastrong light-matter coupling at room temperature

    Physicists at Chalmers University of Technology in Sweden, together with colleagues in Russia and Poland, have managed to achieve ultrastrong coupling between light and matter at room temperature. The discovery is of importance for fundamental research and might pave the way for advances within, for example, light sources, nanomachinery, and quantum technology.
    A set of two coupled oscillators is one of the most fundamental and abundant systems in physics. It is a very general toy model that describes a plethora of systems ranging from guitar strings, acoustic resonators, and the physics of children’s swings, to molecules and chemical reactions, from gravitationally bound systems to quantum cavity electrodynamics.
    The degree of coupling between the two oscillators is an important parameter that largely determines the behaviour of the coupled system. However, one rarely asks how strongly two pendula can possibly couple to each other — and what consequences such coupling can have.
    The newly presented results, published in Nature Communications, offer a glimpse into the domain of so-called ultrastrong coupling, wherein the coupling strength becomes comparable to the resonant frequency of the oscillators. The coupling in this work is realised through the interaction between light and electrons in a tiny system consisting of two gold mirrors separated by a small distance and plasmonic gold nanorods. On a surface a hundred times smaller than the end of a human hair, the researchers have shown that it is possible to create controllable ultrastrong interaction between light and matter at ambient conditions — that is, at room temperature and atmospheric pressure.
    “We are not the first ones to realise ultrastrong coupling. But generally, strong magnetic fields, high vacuum and extremely low temperatures are required to achieve such a degree of coupling. When you can perform it in an ordinary lab, it enables more researchers to work in this field and it provides valuable knowledge in the borderland between nanotechnology and quantum optics,” says Denis Baranov, a researcher at Chalmers University of Technology and the first author of the scientific paper.
    To understand the system the authors have realised, one can imagine a resonator, in this case represented by two gold mirrors separated by a few hundred nanometers, as a single tone in music. The nanorods fabricated between the mirrors affect how light moves between the mirrors and change their resonance frequency. Instead of just sounding like a single tone, in the coupled system the tone splits into two: a lower pitch, and a higher pitch.
    The energy separation between the two new pitches represents the strength of interaction. Specifically, in the ultrastrong coupling case, the strength of interaction is so large that it becomes comparable to the frequency of the original resonator. This leads to a unique duet, where light and matter intermix into a common object, forming quasi-particles called polaritons. The hybrid character of polaritons provides a set of intriguing optical and electronic properties.
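    The “one tone splits into two” picture is the textbook coupled-oscillator model, and it can be made concrete in a few lines of Python: diagonalizing a two-by-two Hamiltonian with equal bare frequencies and coupling strength g yields a lower and an upper polariton split by 2g, with the ultrastrong regime conventionally reached once g is roughly a tenth of the bare frequency. The numbers below are illustrative, not the paper’s measured values:
    ```python
    import numpy as np

    def polariton_modes(omega0, g):
        """Normal modes of two degenerate coupled oscillators (light and
        matter): diagonalizing H = [[omega0, g], [g, omega0]] splits the
        single bare tone into lower and upper polaritons, omega0 -/+ g."""
        return np.linalg.eigvalsh(np.array([[omega0, g], [g, omega0]]))

    omega0 = 1.0                  # bare resonance frequency (arbitrary units)
    for g in (0.01, 0.1, 0.5):    # coupling grows with the number of nanorods
        lower, upper = polariton_modes(omega0, g)
        # Ultrastrong coupling is conventionally declared when g/omega0 is
        # roughly 0.1 or more: the splitting rivals the resonance itself.
        print(f"g/omega0 = {g/omega0:.2f}: modes at {lower:.2f} and {upper:.2f}")
    ```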
    The number of gold nanorods sandwiched between the mirrors controls how strong the interaction is. But at the same time, it controls the so-called zero-point energy of the system. By increasing or decreasing the number of rods, it is possible to supply or remove energy from the ground state of the system and thereby increase or decrease the energy stored in the resonator box.
    What makes this work particularly interesting is that the authors managed to indirectly measure how the number of nanorods changes the vacuum energy by “listening” to the tones of the coupled system (that is, looking at the light transmission spectra through the mirrors with the nanorods) and performing simple mathematics. The resulting values turned out to be comparable to the thermal energy, which may lead to observable phenomena in the future.
    “A concept for creating controllable ultrastrong coupling at room temperature in relatively simple systems can offer a testbed for fundamental physics. The fact that this ultrastrong coupling “costs” energy could lead to observable effects, for example it could modify the reactivity of chemicals or tailor van der Waals interactions. Ultrastrong coupling enables a variety of intriguing physical phenomena,” says Timur Shegai, Associate Professor at Chalmers and the last author of the scientific article.
    In other words, this discovery allows researchers to play with the laws of nature and to test the limits of coupling.
    “As the topic is quite fundamental, potential applications may range. Our system allows for reaching even stronger levels of coupling, something known as deep strong coupling. We are still not entirely sure what is the limit of coupling in our system, but it is clearly much higher than we see now. Importantly, the platform that allows studying ultrastrong coupling is now accessible at room temperature,” says Timur Shegai.

    Story Source:
    Materials provided by Chalmers University of Technology. Original written by Mia Halleröd Palmgren. Note: Content may be edited for style and length.

  • Parylene photonics enable future optical biointerfaces

    Carnegie Mellon University’s Maysam Chamanzar and his team have invented an optical platform that will likely become the new standard in optical biointerfaces. He has labeled this new field of optical technology “Parylene photonics,” demonstrated in a recent paper in Microsystems & Nanoengineering, a Nature Research journal.
    There is a growing and unfulfilled demand for optical systems for biomedical applications. Miniaturized and flexible optical tools are needed to enable reliable ambulatory and on-demand imaging and manipulation of biological events in the body. Integrated photonic technology has mainly evolved around developing devices for optical communications. The advent of silicon photonics was a turning point in bringing optical functionalities to the small form-factor of a chip.
    Research in this field has boomed in the past couple of decades. However, silicon is a dangerously rigid material for interacting with soft tissue in biomedical applications. This rigidity increases the risk of tissue damage and scarring in patients, especially as soft tissue undulates against the inflexible device during respiration and other bodily processes.
    Chamanzar, an Assistant Professor of Electrical and Computer Engineering (ECE) and Biomedical Engineering, saw the pressing need for an optical platform tailored to biointerfaces with both optical capability and flexibility. His solution, Parylene photonics, is the first biocompatible and fully flexible integrated photonic platform ever made.
    To create this new photonic material class, Chamanzar’s lab designed ultracompact optical waveguides by fabricating silicone (PDMS), an organic polymer with a low refractive index, around a core of Parylene C, a polymer with a much higher refractive index. The contrast in refractive index allows the waveguide to pipe light effectively, while the materials themselves remain extremely pliant. The result is a platform that is flexible, can operate over a broad spectrum of light, and is just 10 microns thick — about 1/10 the thickness of a human hair.
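    As a back-of-the-envelope check on that contrast, one can plug commonly quoted visible-range refractive indices for the two polymers (roughly 1.64 for Parylene C and 1.40 for PDMS; the paper’s exact values may differ) into the standard step-index waveguide formulas:
    ```python
    import math

    # Commonly quoted visible-range refractive indices; the paper's exact
    # values may differ slightly.
    n_core = 1.64   # Parylene C
    n_clad = 1.40   # PDMS (silicone)

    # Step-index waveguide figures of merit: a large core/cladding contrast
    # gives a high numerical aperture, which is what permits tight bends.
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
    critical_angle = math.degrees(math.asin(n_clad / n_core))
    print(f"index contrast:     {n_core - n_clad:.2f}")
    print(f"numerical aperture: {numerical_aperture:.2f}")  # roughly 0.85
    print(f"critical angle:     {critical_angle:.1f} deg")  # incidence below this
                                                            # (from the normal) escapes
    ```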
    “We were using Parylene C as a biocompatible insulation coating for electrical implantable devices, when I noticed that this polymer is optically transparent. I became curious about its optical properties and did some basic measurements,” said Chamanzar. “I found that Parylene C has exceptional optical properties. This was the onset of thinking about Parylene photonics as a new research direction.”
    Chamanzar’s design was created with neural stimulation in mind, allowing for targeted stimulation and monitoring of specific neurons within the brain. Crucial to this is the creation of 45-degree embedded micromirrors. While prior optical biointerfaces have stimulated a large swath of brain tissue beyond what could be measured, these micromirrors create a tight overlap between the volume being stimulated and the volume recorded. The micromirrors also enable integration of external light sources with the Parylene waveguides.
    ECE alumna Maya Lassiter (MS, ’19), who was involved in the project, said, “Optical packaging is an interesting problem to solve because the best solutions need to be practical. We were able to package our Parylene photonic waveguides with discrete light sources using accessible packaging methods, to realize a compact device.”
    The applications for Parylene photonics range far beyond optical neural stimulation, and could one day replace current technologies in virtually every area of optical biointerfaces. These tiny flexible optical devices can be inserted into the tissue for short-term imaging or manipulation. They can also be used as permanent implantable devices for long-term monitoring and therapeutic interventions.
    Additionally, Chamanzar and his team are considering possible uses in wearables. Parylene photonic devices placed on the skin could be used to conform to difficult areas of the body and measure pulse rate, oxygen saturation, blood flow, cancer biomarkers, and other biometrics. As further options for optical therapeutics are explored, such as laser treatment for cancer cells, the applications for a more versatile optical biointerface will only continue to grow.
    “The high index contrast between Parylene C and PDMS enables a low bend loss,” said ECE Ph.D. candidate Jay Reddy, who has been working on this project. “These devices retain 90% efficiency as they are tightly bent down to a radius of almost half a millimeter, conforming tightly to anatomical features such as the cochlea and nerve bundles.”
    Another unconventional possibility for Parylene photonics is actually in communication links, bringing Chamanzar’s whole pursuit full circle. Current chip-to-chip interconnects usually use rather inflexible optical fibers, and any area in which flexibility is needed requires transferring the signals to the electrical domain, which significantly limits bandwidth. Flexible Parylene photonic cables, however, provide a promising high bandwidth solution that could replace both types of optical interconnects and enable advances in optical interconnect design.
    “So far, we have demonstrated low-loss, fully flexible Parylene photonic waveguides with embedded micromirrors that enable input/output light coupling over a broad range of optical wavelengths,” said Chamanzar. “In the future, other optical devices such as microresonators and interferometers can also be implemented in this platform to enable a whole gamut of new applications.”
    With Chamanzar’s recent publication marking the debut of Parylene photonics, it’s impossible to say just how far-reaching the effects of this technology could be. However, the implications of this work are more than likely to mark a new chapter in the development of optical biointerfaces, similar to what silicon photonics enabled in optical communications and processing.

  • Who's Tweeting about scientific research? And why?

    Although Twitter is best known for its role in political and cultural discourse, it has also become an increasingly vital tool for scientific communication. In a new study publishing in the open access journal PLOS Biology, researchers from the University of Washington School of Medicine, Seattle, show that Twitter users who engage with scientific research can be characterized in extremely fine detail by mining a relatively untapped source of information: how those users’ followers describe themselves. The study reveals some exciting — and, at times, disturbing — patterns in how research is received and disseminated through social media.
    Scientists candidly tweet about their unpublished research not only to one another but also to a broader audience of engaged laypeople. When consumers of cutting-edge science tweet or retweet about studies they find interesting, they leave behind a real-time record of the impact that taxpayer-funded research is having within academia and beyond.
    The lead author of the study, Jedidiah Carlson at the University of Washington, explains that each user in a social network will tend to connect with other users who share similar characteristics (such as occupation, age, race, hobbies, or geographic location), a sociological concept formally known as “network homophily.” By tapping into the information embedded in the broader networks of users who tweet about a paper, Carlson and his coauthor, Kelley Harris, are able to describe the total audience of each paper as a composite of multiple interest groups that might indicate the study’s potential to produce intellectual breakthroughs as well as social, cultural, economic, or environmental impacts.
    Rather than sorting people into coarse groups such as “scientists” and “non-scientists,” which relies on Twitter users describing themselves accurately in their platform biographies, Carlson was able to segment “scientists” into their specific research disciplines (such as evolutionary biology or bioinformatics), regardless of whether they mentioned these sub-disciplines in their Twitter bios.
    The broader category of “non-scientists” can be automatically segmented into a multitude of groups, such as mental health advocates, dog lovers, video game developers, vegans, bitcoin investors, journalists, religious groups, and political constituencies. However, Carlson cautions that these indicators of diverse public engagement may not always be in line with scientists’ intended goals.
    Hundreds of papers were found to have Twitter audiences that were dominated by conspiracy theorists, white nationalists, or science denialists. In extreme cases, these audience sectors comprised more than half of all tweets referencing a given study, starkly illustrating the adage that science does not exist in a cultural or political vacuum.
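    The gist of the audience-profiling idea can be sketched in a few lines of Python: score a paper’s audience by the words its tweeters’ followers use to describe themselves. The keyword groups and sample bios below are invented for illustration; the published study mined real follower networks with far richer topic models:
    ```python
    import re
    from collections import Counter

    # Invented interest-group keywords; the study learned such groupings
    # from real follower data rather than hand-picking them.
    GROUPS = {
        "evolutionary biology": {"evolution", "phylogenetics", "genomics"},
        "mental health advocates": {"mentalhealth", "wellness", "therapy"},
        "conspiracy-adjacent": {"truth", "wakeup", "sheeple"},
    }

    def audience_profile(follower_bios):
        """Estimate a paper's audience composition from which interest-group
        keywords appear in its tweeters' followers' self-descriptions."""
        counts = Counter()
        for bio in follower_bios:
            words = set(re.findall(r"[a-z]+", bio.lower()))
            for group, keywords in GROUPS.items():
                if words & keywords:
                    counts[group] += 1
        total = sum(counts.values()) or 1
        return {group: n / total for group, n in counts.items()}

    bios = [
        "PhD student. #genomics, coffee, and long walks",
        "Seeking truth. #wakeup",
        "Peer support worker | #mentalhealth advocate",
    ]
    print(audience_profile(bios))
    ```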
    Particularly in light of the rampant misappropriation and politicization of scientific research throughout the COVID-19 pandemic, Carlson hopes that the results of his study might motivate scientists to keep a closer watch on the social media pulse surrounding their publications and intervene accordingly to guide their audiences towards productive and well-informed engagement.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • When does a second COVID-19 surge end? Look at the data

    Mathematicians have developed a framework to determine when regions enter and exit COVID-19 infection surge periods, providing a useful tool for public health policymakers to help manage the coronavirus pandemic.
    The first published paper on second-surge COVID-19 infections from US states suggests that policymakers should look for demonstrable turning points in data rather than stable or insufficiently declining infection rates before lifting restrictions.
    Mathematicians Nick James and Max Menzies have published what they believe is the first analysis of COVID-19 infection rates in US states to identify turning points in data that indicate when surges have started or ended.
    The new study by the Australian mathematicians appears today in the journal Chaos, published by the American Institute of Physics.
    “In some of the worst performing states, it seems that policymakers have looked for plateauing or slightly declining infection rates. Instead, health officials should look for identifiable local maxima and minima, showing when surges reach their peak and when they are demonstrably over,” said Nick James, a PhD student in the School of Mathematics and Statistics at the University of Sydney.
    In the study, the two mathematicians report a method to analyse COVID-19 case numbers for evidence of a first or second wave. The authors studied data from all 50 US states plus the District of Columbia for the seven-month period from 21 January to 31 July 2020. They found 31 states and the District of Columbia were experiencing a second wave as of the end of July.
    The two mathematicians have also applied the method to analyse infection rates in eight Australian states and territories using data from covidlive.com.au. While the Australian analysis has not been peer-reviewed, it does apply the peer-reviewed methodology. The analysis clearly identified Victoria as an outlier, as expected.
    “What the Victorian data shows is that cases are still coming down and the turning point — the local minimum — has not occurred yet,” Dr Menzies said. He said from a mathematical perspective at least, Victoria should “stay the course.”
    Dr Menzies, from the Yau Mathematical Sciences Center at Tsinghua University in Beijing, said: “Our approach allows for careful identification of the most and least successful US states at managing COVID-19.”
    The results show New York and New Jersey completely flattened their infection curves by the end of July with just a single surge. Thirteen states, including Georgia, California and Texas, have a continuing and rising single infection surge. Thirty-one states had an initial surge followed by declining infections and then a second surge. These states include Florida and Ohio.
    Mr James said: “This is not a predictive model. It is an analytical tool that should assist policymakers in determining demonstrable turning points in COVID infections.”
    Methodology
    The method smooths raw daily case count data to eliminate artificially low counts over weekends, and even some negative numbers that occur when localities correct errors. After smoothing the data, a numerical technique is used to find peaks and troughs. From this, turning points can be identified.
    Dr Menzies said their analysis shows governments should try not to allow new cases to increase, nor reduce restrictions when case numbers have merely flattened.
    “A true turning point, where new cases are legitimately in downturn and not just exhibiting stable fluctuations, should be observed before relaxing any restrictions.”
    He said the analysis wasn’t just nice mathematics: using a new measure between sets of turning points, the study also deals with a very topical problem by looking at state-by-state data.
    Mr James said that aggressively pushing infection rates down to a minimum seemed the best way to defeat a second surge.
    Peaks and Troughs
    To determine the peaks and troughs, the algorithm developed by the mathematicians determines that a turning point occurs when a falling curve surges upward or a rising curve turns downward. Only those sequences where the peak and trough amplitudes differ by a definite minimum amount are counted. Fluctuations can occur when a curve flattens for a while but continues to increase without going through a true downturn, so the method eliminates these false counts.
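    A minimal sketch of that procedure might look like the following, with the smoothing window and amplitude threshold chosen for illustration rather than taken from the paper:
    ```python
    import numpy as np
    from scipy.signal import argrelextrema

    def turning_points(daily_cases, window=7, min_amplitude=50):
        """Sketch of the described procedure: smooth raw daily counts to
        remove weekend artifacts, locate local maxima and minima, and keep
        only turning points whose rise or fall exceeds a minimum amplitude,
        so a flat stretch is not mistaken for a true downturn. The window
        and threshold are illustrative, not the authors' published values."""
        smooth = np.convolve(daily_cases, np.ones(window) / window, mode="valid")
        peaks = argrelextrema(smooth, np.greater, order=window)[0]
        troughs = argrelextrema(smooth, np.less, order=window)[0]
        extrema = sorted([(i, "peak") for i in peaks]
                         + [(i, "trough") for i in troughs])
        kept = []
        for i, kind in extrema:
            if not kept or abs(smooth[i] - smooth[kept[-1][0]]) >= min_amplitude:
                kept.append((i, kind))
        return smooth, kept

    # Synthetic two-surge series: a first wave, a decline, then a larger
    # second wave. Indices are shifted slightly by the smoothing window.
    t = np.arange(180)
    cases = 200 * np.exp(-((t - 40) / 15) ** 2) + 350 * np.exp(-((t - 140) / 20) ** 2)
    _, tps = turning_points(cases)
    print(tps)   # expected: first peak, an intervening trough, a second peak
    ```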
    Both from Australia, the two mathematicians have been best friends for 25 years. “But this year is the first time we have worked on problems together,” Mr James said.
    Mr James has a background in statistics and has worked for start-ups and hedge funds in Texas, Sydney, San Francisco and New York City. Dr Menzies is a pure mathematician, completing his PhD at Harvard in 2019 and his undergraduate mathematics at the University of Cambridge.