More stories

  • Computer model seeks to explain the spread of misinformation and suggest countermeasures

    It starts with a superspreader, and winds its way through a network of interactions, eventually leaving no one untouched. Those who have been exposed previously may only experience mild effects.
    No, it’s not a virus. It’s the contagious spread of misinformation and disinformation — the latter being misinformation that is deliberately intended to deceive.
    Now Tufts University researchers have come up with a computer model that remarkably mirrors the way misinformation spreads in real life. The work might provide insight into how to protect people from the current contagion of misinformation that threatens public health and the health of democracy, the researchers say.
    “Our society has been grappling with widespread beliefs in conspiracies, increasing political polarization, and distrust in scientific findings,” said Nicholas Rabb, a Ph.D. student in computer science at Tufts School of Engineering and lead author of the study, which came out January 7 in the journal PLOS ONE. “This model could help us get a handle on how misinformation and conspiracy theories are spread, to help come up with strategies to counter them.”
    Scientists who study the dissemination of information often take a page from epidemiologists, modeling the spread of false beliefs on how a disease spreads through a social network. Most of those models, however, treat everyone in the network as equally likely to take in any new belief passed on to them by their contacts.
    The Tufts researchers instead based their model on the notion that our pre-existing beliefs can strongly influence whether we accept new information. Many people reject factual information supported by evidence if it takes them too far from what they already believe. Health-care workers have commented on the strength of this effect, observing that some patients dying from COVID cling to the belief that COVID does not exist.
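    The article does not reproduce the Tufts model itself, but its central mechanism — that whether an agent adopts a message depends on how far that message sits from the agent’s existing belief — can be sketched with a small agent-based simulation. Everything below (the networkx graph type, the bounded-confidence acceptance rule, and all parameter values) is an illustrative assumption, not the published model.

```python
import random

import networkx as nx  # assumed available; any graph library would work


def simulate_belief_spread(n_agents=200, k=6, p_rewire=0.1,
                           acceptance_width=0.3, steps=50, seed=42):
    """Toy sketch: agents hold a belief in [0, 1] and adopt an incoming
    claim only if it lies close enough to what they already believe."""
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(n_agents, k, p_rewire, seed=seed)

    belief = {node: rng.random() for node in g.nodes}  # random prior beliefs
    superspreader = 0
    belief[superspreader] = 1.0          # pushes an extreme claim
    informed = {superspreader}

    for _ in range(steps):
        newly_informed = set()
        for node in informed:
            for neighbor in g.neighbors(node):
                if neighbor in informed:
                    continue
                # Acceptance depends on distance from the prior belief:
                # claims too far from the current belief are rejected.
                if abs(belief[node] - belief[neighbor]) <= acceptance_width:
                    belief[neighbor] = (belief[neighbor] + belief[node]) / 2
                    newly_informed.add(neighbor)
        if not newly_informed:
            break
        informed |= newly_informed

    return len(informed) / n_agents


if __name__ == "__main__":
    print(f"Fraction of the network reached: {simulate_belief_spread():.2f}")
```

    Narrowing acceptance_width (i.e., more entrenched priors) typically shrinks the fraction of the network the claim reaches, which is the qualitative effect the researchers describe.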

  • New model examines the effects of toxicants on populations in polluted rivers

    When designing environmental policies to limit the damage of river pollution, it is paramount to assess the specific risks that particular pollutants pose to different species. However, rigorously testing the effects of toxicants — like insecticides, plastic debris, pathogens, and chemicals — on entire groups of organisms without severely damaging their whole ecosystems is simply not feasible. Mathematical modeling can provide a flexible way to assess toxicants’ impact on river populations without endangering the environment.
    In a paper published today in the SIAM Journal on Applied Mathematics, Peng Zhou (Shanghai Normal University) and Qihua Huang (Southwest University, Chongqing) develop a model that describes the interactions between a population and a toxicant in an advective environment — a setting in which a fluid tends to transport material in one direction, like a river. Such a model can help scientists study how the way in which a pollutant moves through a river affects the wellbeing and distribution of the river’s inhabitants.
    Much of the previous experimental research on the ecological risks of toxicants has been performed on individual organisms in controlled laboratory conditions over fairly short time frames. The design of environmental management strategies, however, requires an understanding of toxicants’ impact on the health of entire exposed natural populations in the long term. Fortunately, there is an intermediary. “Mathematical models play a crucial role in translating individual responses to population-level impacts,” Huang said.
    The existing models that describe the way in which toxicants affect population dynamics generally ignore many of the properties of water bodies. But in doing so, they are missing a big piece of the puzzle. “In reality, numerous hydrological and physical characteristics of water bodies can have a substantial impact on the concentration and distribution of a toxicant,” Huang said. “[For example], once a toxicant is released into a river, several dispersal mechanisms — such as diffusion and transport — are present that may aid in the spread of the toxicant.”
    Similarly, the models that mathematicians often use to portray the transport of pollutants through a river also do not include all of the necessary components for this study. These are reaction-advection-diffusion equation models, whose solutions can show how pollutants spread and how their concentrations vary under different influences, such as changes in the rate of water flow. While such models enable researchers to predict the evolution of toxicant concentrations and assess their impact on the environment, they do not consider the toxicant’s influence on the dynamics of affected populations. Zhou and Huang thus expanded upon this type of model, adding new elements that allowed them to explore the interaction between a toxicant and a population in a polluted river.
    The authors’ model consists of two reaction-diffusion-advection equations — one that governs the population’s dispersal and growth under the toxicant’s influence, and another that describes the processes that the toxicant experiences. “As far as we know, our model represents the first effort to model the population-toxicant interactions in an advective environment by using reaction-diffusion-advection equations,” Zhou said. “This new model could potentially open a [novel] line of research.”
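    The article does not reproduce the equations, but a generic population-toxicant system of the reaction-diffusion-advection type the authors describe might look as follows; the specific reaction terms and symbols here are illustrative assumptions, not the published model.

```latex
\begin{aligned}
\frac{\partial u}{\partial t} &=
  \underbrace{D_u \frac{\partial^2 u}{\partial x^2}}_{\text{diffusion}}
  \;-\; \underbrace{q_u \frac{\partial u}{\partial x}}_{\text{advection}}
  \;+\; \underbrace{r\,u\left(1 - \frac{u}{K}\right)}_{\text{growth}}
  \;-\; \underbrace{\alpha\,u\,c}_{\text{toxicant-induced mortality}}, \\[4pt]
\frac{\partial c}{\partial t} &=
  D_c \frac{\partial^2 c}{\partial x^2}
  \;-\; q_c \frac{\partial c}{\partial x}
  \;-\; \sigma\,u\,c \;-\; \mu\,c \;+\; S(x),
\end{aligned}
```

    where $u(x,t)$ is the population density, $c(x,t)$ the toxicant concentration, $D_u, D_c$ diffusion coefficients, $q_u, q_c$ advection (downstream transport) rates, $\sigma$ the uptake of the toxicant by organisms, $\mu$ its decay, and $S(x)$ an external toxicant source.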
    The model allows Zhou and Huang to tweak different factors and investigate the resulting changes to the ecosystem. They altered the river’s flow speed and the advection rate — i.e., the rate at which the toxicant or organisms are carried downstream — and observed how these parameters influence persistence and the distributions of both the population and the toxicant. These theoretical results can provide insights that could help inform ecological policies when taken in concert with other information.
    One scenario that the researchers studied involved a toxicant that had a much slower advection rate than the population and thus was not washed away as easily. The model showed that, intuitively, the population density decreases with increasing water flow because more individuals are carried downstream and out of the river area in question. However, the concentration of the toxicant increases with increasing flow speed because it can resist the downstream current and the organisms are often swept away before they can take it up.
    In the opposite case, the toxicant has a faster advection rate and is therefore much more sensitive to water flow speed than the population. Increasing the water flow then reduces the toxicant concentration by sweeping the pollutants away. For a medium flow speed, the highest population density occurs downstream because the water flow plays a trade-off role; it transports more toxicants away but also carries more individuals downstream.
    This demonstrates that a higher sensitivity of a pollutant to water flow is generally more advantageous to population persistence. “In the absence of toxicants, it is generally known that the higher the flow speed, the more individuals will be washed out of the river,” Zhou said. “However, our findings suggest that, for a given toxicant level, population abundance may increase as flow rate increases.”
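    A rough numerical illustration of this kind of parameter sweep is sketched below, using a simple explicit finite-difference discretization of a generic system like the one above. The scheme, boundary handling, and all parameter values are assumptions for illustration only, not the authors’ analysis; in the sketch, the two advection factors control whether the population or the toxicant is more sensitive to the flow.

```python
import numpy as np


def river_sweep(flow_speed, q_pop_factor=1.0, q_tox_factor=2.0,
                L=10.0, nx=200, dt=0.002, t_end=50.0):
    """Illustrative 1-D reaction-diffusion-advection sketch (not the authors'
    model): logistic growth minus toxicant-induced mortality, upwind advection,
    explicit Euler time stepping, and crude outflow boundaries."""
    dx = L / nx
    x = np.linspace(0.0, L, nx)
    u = np.full(nx, 0.5)                       # population density
    c = np.full(nx, 0.2)                       # toxicant concentration
    Du, Dc = 0.1, 0.1                          # diffusion coefficients
    r, K = 1.0, 1.0                            # growth rate, carrying capacity
    alpha, sigma, mu = 0.8, 0.3, 0.1           # mortality, uptake, decay
    source = 0.05 * np.exp(-((x - 2.0) ** 2))  # localized toxicant input
    qu = q_pop_factor * flow_speed             # population advection rate
    qc = q_tox_factor * flow_speed             # toxicant advection rate

    def step(f, D, q, reaction):
        lap = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
        adv = (f - np.roll(f, 1)) / dx         # upwind difference (flow to the right)
        fn = f + dt * (D * lap - q * adv + reaction)
        fn[0], fn[-1] = fn[1], fn[-2]          # crude no-flux / outflow boundaries
        return np.clip(fn, 0.0, None)

    for _ in range(int(t_end / dt)):
        u_next = step(u, Du, qu, r * u * (1 - u / K) - alpha * u * c)
        c_next = step(c, Dc, qc, source - sigma * u * c - mu * c)
        u, c = u_next, c_next
    return u.mean(), c.mean()


if __name__ == "__main__":
    for v in (0.1, 0.5, 1.0):
        pop, tox = river_sweep(v)
        print(f"flow speed {v:.1f}: mean population {pop:.3f}, mean toxicant {tox:.3f}")
```

    Varying flow_speed and the two advection factors mimics the kind of sweep described above; the paper itself analyzes the continuous model mathematically rather than through a discretization like this.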
    By providing this model with the parameters for certain species and pollutants, one may be able to determine criteria regarding the water quality that is necessary to maintain aquatic life. This outcome could ultimately aid in the development of policy guidelines surrounding the target species and toxicants. “The findings here offer the basis for effective decision-making tools for water and environment managers,” Huang said. Managers could connect the results from the model with other factors, such as what may happen to the pollutant after it washes downstream.
    Further extensions to Zhou and Huang’s new model could make it even more applicable to real river ecosystems — for example, by allowing the flow velocity and release of toxicants to vary over time, or accounting for the different ways in which separate species may respond to the same pollutant. This mathematical model’s capability to find the population-level effects of toxicants might play a critical part in the accurate assessment of pollutants’ risk to rivers and their inhabitants.

  • Climate change communication should focus less on specific numbers

    What’s in a number? The goals of the United Nations’ 2021 climate summit in Glasgow, Scotland, called for nations to keep a warming limit of 1.5 degrees Celsius “within reach.” But when it comes to communicating climate change to the public, some scientists worry that too much emphasis on a specific number is a poor strategy.

    Focusing on one number obscures a more important point, they say: Even if nations don’t meet this goal to curb global climate change, any progress is better than none at all. Maybe it’s time to stop talking so much about one number.

    On November 13, the United Nations’ 26th annual climate change meeting, or COP26, ended in a new climate deal, the Glasgow Climate Pact. In that pact, the 197 assembled nations reaffirmed a common “ideal” goal: limiting global warming to no more than 1.5 degrees C by 2100, relative to preindustrial times (SN: 12/17/18).

    Holding temperature increases to 1.5 degrees C, researchers have found, would be a significant improvement over limiting warming to 2 degrees C, as agreed upon in the 2015 Paris Agreement (SN: 12/12/15). The more stringent limit would mean fewer global hazards, from extreme weather to the speed of sea level rise to habitat loss for species (SN: 12/17/18).

    The trouble is that current national pledges to reduce greenhouse gas emissions are nowhere near enough to meet either of those goals. Even accounting for the most recent national pledges to cut emissions, the average global temperature by 2100 is likely to be between 2.2 and 2.7 degrees C warmer than it was roughly 150 years ago (SN: 10/26/21).

    And that glaring disparity is leading not just to fury and frustration for many, but also to despair and pervasive feelings of doom, says paleoclimatologist Jessica Tierney of the University of Arizona in Tucson.

    “It’s something I’ve been thinking about for a while, but I think it was definitely made sort of more front and center with COP,” Tierney says. She describes one news story in the wake of the conference that “mentioned 1.5 degrees C, and then said this is the threshold over which scientists have told us that catastrophic climate change will occur.”

    The article reveals a fundamental misunderstanding of what the agreed-upon limit really represents, Tierney explains. “A lot of my students, for example, are really worried about climate change, and they are really worried about passing some kind of boundary. People have this idea that if you pass that boundary, you sort of tip over a cliff.”

    The climate system certainly has tipping points — thresholds past which, for example, an ice sheet begins to collapse and it’s not possible to stop or reverse the process. But, Tierney says, “we really should start communicating more about the continuum of climate change. Obviously, less warming is better.” However, “if we do blow by 1.5, we don’t need to panic. It’s okay if we can stop at 1.6 or 1.7.”

    Tierney notes that climate communications expert Susan Hassol, director of the Colorado-based nonprofit Climate Communication, has likened the approach to missing an exit while driving on the highway. “If you miss the 1.5 exit, you just slow down and take the next one, or the next one,” Tierney says. “It’s still better than hitting the gas.”

    Target numbers do have some uses, notes climate scientist Joeri Rogelj of Imperial College London. After decades of international climate negotiations and wrangling over targets and strategies, the world has now agreed that 1.5 degrees C of warming is a desirable target for many countries, says Rogelj, who was one of the lead authors on the Intergovernmental Panel on Climate Change’s 2018 special report on global warming.

    A global temperature limit “is a good proxy for avoiding certain impacts,” he adds. “These numbers are basically how to say this.”

    But Rogelj agrees that focusing too much on a particular number may be counterproductive, even misleading. “There is a lot of layered meaning under those numbers,” he says. “The true interests, the true goals of countries are not those numbers, but avoiding the impacts that underlie them.”

    And framing goals as where we should be by the end of the century — such as staying below 1.5 degrees C by the year 2100 — can give too much leeway to stall on reducing emissions. For example, such framing implies the planet could blow past the temperature limit by mid-century and rely on still-unproven carbon dioxide removal strategies to bring warming back down in the next few decades, Rogelj and colleagues wrote in 2019 in Nature.

    Banking on future technologies that have yet to be developed is worrisome, Rogelj notes. After all, some warming-related extreme events, such as heat waves, are more reversible than others, such as sea level rise (SN: 8/9/21). Heat wave incidence may decrease once carbon is removed from the atmosphere, but the seas will stay high.

    Rogelj acknowledges that it’s a challenge to communicate the urgency of taking action to reduce emissions now without spinning off into climate catastrophe or cliff edge narratives. For his part, Rogelj says he’s trying to tackle this challenge by adding a hefty dose of reality in his scientific presentations, particularly those aimed at nonscientists.

    He starts with pictures of forest fires and floods in Europe from 2021. “I say, ‘Look, this is today, 1.1 degrees warmer than preindustrial times,’” Rogelj explains. “‘Do you think this is safe? Today is not safe. And so, 1.5 won’t be safer than today; it will be worse than today. But it will be better than 1.6. And 1.6 won’t be the end of the world.’ And that kind of makes people think about it a bit differently.”

  • Gauging the resilience of complex networks

    Whether a transformer catches fire in a power grid, a species disappears from an ecosystem, or water floods a city street, many systems can absorb a certain amount of disruption. But how badly does a single failure weaken the network? And how much damage can it take before it tips into collapse? Network scientist Jianxi Gao is building tools that can answer those questions, regardless of the nature of the system.
    “After a certain point, damage to a system is so great that it causes catastrophic failure. But the events leading to a loss of resilience in a system are rarely predictable and often irreversible. That makes it hard to prevent a collapse,” said Dr. Gao, an assistant professor of computer science at Rensselaer Polytechnic Institute, who was awarded a National Science Foundation CAREER award to tackle the problem. “The mathematical tools we are building will make it possible to evaluate the resilience of any system. And with that, we can predict and prevent failure.”
    Imagine the effects of climate change on an ecosystem, Dr. Gao said. A species that can’t adapt will dwindle to extinction, perhaps also driving toward extinction a cascade of other species that feed on it. As the climate changes and more species are stressed, Dr. Gao wants the ability to predict the impact of those dwindling populations on the rest of the ecosystem.
    Predicting resilience starts with mapping the system as a network: a graph in which the players (an animal, a neuron, a power station) are connected by the relationships between them, along with how each relationship affects the players and the network overall. In one visualization of a network, each player is a dot, a node, connected to other players by links that represent the relationships between them — think of who eats whom in a forest and how that affects the overall population of each species, or how information moving across a social media site influences opinions. Over time the system changes, with nodes appearing or disappearing and links growing stronger, growing weaker, or shifting in their relationships to one another as the system as a whole responds to that change.
    Mathematically, a changing network can be described by a series of coupled nonlinear equations. And while equations have been developed to map networks in many fields, predicting the resilience of complex networks, or of systems with missing information, overwhelms even the most powerful supercomputers.
    “We’re very limited in what we can do with the existing methods. Even if the network is not very large, we may be able to use the computer to solve the coupled equations, but we cannot simulate many different failure scenarios,” Dr. Gao said.
    Dr. Gao debuted a preliminary solution to the problem in a 2016 paper published in Nature. In that paper, he and his colleagues declared that existing analytical tools are insufficient because they were designed for smaller models with few interacting components, as opposed to the vast networks we want to understand. The authors proposed a new set of tools, designed for complex networks, able to first identify the natural state and control parameters of the network, and then collapse the behavior of different networks into a single, solvable, universal function.
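    A minimal sketch of that kind of dimension reduction — collapsing an N-node system onto a single effective equation — is shown below for a generic, illustrative dynamics on an undirected random network. The self-dynamics, the coupling function, and the degree-based effective coupling used here are simplifying assumptions chosen in the spirit of (but not copied from) the 2016 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative node dynamics:  dx_i/dt = F(x_i) + sum_j A_ij * G(x_j)
B = 0.1


def F(x):
    return B - x                 # self-dynamics: relax toward a baseline


def G(x):
    return x / (1.0 + x)         # saturating boost received from a neighbor


def simulate_full(A, x0, dt=0.01, steps=4000):
    """Explicit-Euler integration of the full N-dimensional system."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (F(x) + A @ G(x))
    return x


def simulate_reduced(beta_eff, x0, dt=0.01, steps=4000):
    """The collapsed one-dimensional equation with effective coupling beta_eff."""
    x = x0
    for _ in range(steps):
        x = x + dt * (F(x) + beta_eff * G(x))
    return x


# Random undirected network (Erdos-Renyi style).
N = 300
A = (rng.random((N, N)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T

s = A.sum(axis=1)                      # node degrees
beta_eff = (s**2).sum() / s.sum()      # one simple choice of effective coupling

x0 = rng.random(N)
x_full = simulate_full(A, x0)
x_obs = (s @ x_full) / s.sum()         # degree-weighted average of the full system
x_pred = simulate_reduced(beta_eff, float((s @ x0) / s.sum()))

print(f"beta_eff = {beta_eff:.2f}")
print(f"degree-weighted state (full {N}-node simulation): {x_obs:.3f}")
print(f"state predicted by the 1-D reduced equation:      {x_pred:.3f}")
```

    The point of the exercise is that a single reduced equation can track the macroscopic state of the full network at a tiny fraction of the computational cost, which is what makes testing many failure scenarios feasible.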
    The tools presented in the Nature paper worked with strict assumptions on a network where all information is known — all nodes, all links, and the interactions between those nodes and links. In the new work, Dr. Gao wants to extend the single universal equation to networks where some of the information is missing. The tools he is developing will estimate missing information — missing nodes and links, and the relationships between them — based on what is already known. The approach reduces accuracy somewhat, but enables a far greater reward than what is lost, Dr. Gao said.
    “For a network of millions or even billions of nodes, I will be able to use just one equation to estimate the macroscopic behavior of the network. Of course, I will lose some information, some accuracy, but I capture the most important dynamics or properties of the whole system,” Dr. Gao said. “Right now, people cannot do that. They cannot test the system, find where it gives way, and better still, improve it so that it will not fail.”
    “The ability to analyze and predict weaknesses across a variety of network types gives us a vast amount of power to safeguard vulnerable networks and ecosystems before they fail,” said Curt Breneman, dean of the Rensselaer School of Science. “This is the kind of work that changes the game, and this CAREER award is a recognition of that potential. We congratulate Jianxi and expect great things from his research.”

  • Measuring trust in AI

    Prompted by the increasing prominence of artificial intelligence (AI) in society, University of Tokyo researchers investigated public attitudes toward the ethics of AI. Their findings quantify how different demographics and ethical scenarios affect these attitudes. As part of this study, the team developed an octagonal visual metric, analogous to a rating system, which could be useful to AI researchers who wish to know how their work may be perceived by the public.
    Many people feel the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI in particular exemplifies this as it has become so pervasive in everyday life for so many, seemingly overnight. This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and mistrust of this key component of modern living. Knowing who distrusts AI, and in what ways, would be useful for developers and regulators of AI technology, but these kinds of questions are not easy to quantify.
    Researchers at the University of Tokyo, led by Professor Hiromi Yokoyama from the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes toward ethical issues around AI. Through an analysis of surveys, the team sought to answer two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how the demographics of the respondents themselves affect their attitudes.
    Ethics cannot really be quantified, so to measure attitudes toward the ethics of AI, the team employed eight themes common to many AI applications that raised ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group has termed “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.
    Survey respondents were given a series of four scenarios to judge according to these eight criteria. Each scenario looked at a different application of AI. They were: AI-generated art, customer service AI, autonomous weapons and crime prediction.
    The survey respondents also gave the researchers information about themselves such as age, gender, occupation and level of education, as well as a measure of their level of interest in science and technology by way of an additional set of questions. This information was essential for the researchers to see what characteristics of people would correspond to certain attitudes.
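    As a rough illustration of how individual responses could be aggregated into the eight-axis “octagon” metric, here is a small sketch; the eight themes come from the article, but the rating scale and data format below are assumptions.

```python
from statistics import mean

# The eight ethical themes used as the axes of the "octagon" metric.
THEMES = [
    "privacy", "accountability", "safety and security",
    "transparency and explainability", "fairness and non-discrimination",
    "human control of technology", "professional responsibility",
    "promotion of human values",
]


def octagon_scores(responses):
    """Average each theme's ratings for one AI scenario.

    `responses` is a list of dicts mapping theme -> rating (a hypothetical
    1-5 scale); the result is one mean score per axis of the octagon.
    """
    return {theme: mean(r[theme] for r in responses) for theme in THEMES}


# Hypothetical ratings from three respondents for one scenario.
example = [
    {t: 3 for t in THEMES},
    {t: 4 for t in THEMES},
    {**{t: 2 for t in THEMES}, "privacy": 5},
]
for theme, score in octagon_scores(example).items():
    print(f"{theme:35s} {score:.2f}")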
    “Prior studies have shown that risk is perceived more negatively by women, older people, and those with more subject knowledge. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”
    The team hopes the results could lead to the creation of a sort of universal scale to measure and compare ethical issues around AI. This survey was limited to Japan, but the team has already begun gathering data in several other countries.
    “With a universal scale, researchers, developers and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and questionnaire is that many topics within AI require significant explanation, more so than we realized. This goes to show there is a huge gap between perception and reality when it comes to AI.”
    Story Source:
    Materials provided by University of Tokyo.

  • Fully 3D-printed, flexible OLED display

    In a groundbreaking new study, researchers at the University of Minnesota Twin Cities used a customized printer to fully 3D print a flexible organic light-emitting diode (OLED) display. The discovery could lead to low-cost OLED displays that could one day be widely produced at home on 3D printers by anyone, instead of by technicians in expensive microfabrication facilities.
    OLED display technology is based on the conversion of electricity into light using an organic material layer. OLEDs function as high-quality digital displays, which can be made flexible and used both in large-scale devices such as television screens and monitors and in handheld electronics such as smartphones. OLED displays have gained popularity because they are lightweight, power-efficient, thin and flexible, and offer a wide viewing angle and high contrast ratio.
    “OLED displays are usually produced in big, expensive, ultra-clean fabrication facilities,” said Michael McAlpine, a University of Minnesota Kuhrmeyer Family Chair Professor in the Department of Mechanical Engineering and the senior author of the study. “We wanted to see if we could basically condense all of that down and print an OLED display on our table-top 3D printer, which was custom built and costs about the same as a Tesla Model S.”
    The group had previously tried 3D printing OLED displays, but they struggled with the uniformity of the light-emitting layers. Other groups partially printed displays but also relied on spin-coating or thermal evaporation to deposit certain components and create functional devices.
    In this new study, the University of Minnesota research team combined two different modes of printing to print the six device layers that resulted in a fully 3D-printed, flexible organic light-emitting diode display. The electrodes, interconnects, insulation, and encapsulation were all extrusion printed, while the active layers were spray printed using the same 3D printer at room temperature. The display prototype was about 1.5 inches on each side and had 64 pixels. Every pixel worked and displayed light.
    “I thought I would get something, but maybe not a fully working display,” said Ruitao Su, the first author of the study and a 2020 University of Minnesota mechanical engineering Ph.D. graduate who is now a postdoctoral researcher at MIT. “But then it turns out all the pixels were working, and I can display the text I designed. My first reaction was ‘It is real!’ I was not able to sleep the whole night.”
    Su said the 3D-printed display was also flexible and could be packaged in an encapsulating material, which could make it useful for a wide variety of applications.

  • Light-matter interactions simulated on the world’s fastest supercomputer

    Light-matter interactions form the basis of many important technologies, including lasers, light-emitting diodes (LEDs), and atomic clocks. However, the usual computational approaches for modeling such interactions have limited usefulness and capability. Now, researchers from Japan have developed a technique that overcomes these limitations.
    In a study published this month in The International Journal of High Performance Computing Applications, a research team led by the University of Tsukuba describes a highly efficient method for simulating light-matter interactions at the atomic scale.
    What makes these interactions so difficult to simulate? One reason is that phenomena associated with the interactions encompass many areas of physics, involving both the propagation of light waves and the dynamics of electrons and ions in matter. Another reason is that such phenomena can cover a wide range of length and time scales.
    Given the multiphysics and multiscale nature of the problem, light-matter interactions are typically modeled using two separate computational methods. The first is electromagnetic analysis, whereby the electromagnetic fields of the light are studied; the second is a quantum-mechanical calculation of the optical properties of the matter. But these methods assume that the electromagnetic fields are weak and that the length scales of the light and the matter are well separated.
    “Our approach provides a unified and improved way to simulate light-matter interactions,” says senior author of the study Professor Kazuhiro Yabana. “We achieve this feat by simultaneously solving three key physics equations: the Maxwell equation for the electromagnetic fields, the time-dependent Kohn-Sham equation for the electrons, and the Newton equation for the ions.”
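    In schematic form, the three coupled equations look as follows; the gauge, units, and exact coupling terms are simplified here for illustration and are not taken from the paper.

```latex
\text{Maxwell:}\qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

\text{Time-dependent Kohn--Sham:}\qquad
i\hbar\,\frac{\partial \psi_j}{\partial t} =
\left[\frac{\left(-i\hbar\nabla + e\mathbf{A}\right)^2}{2m}
      + v_{\mathrm{KS}}[\rho](\mathbf{r},t)\right]\psi_j

\text{Newton (ions):}\qquad
M_a \frac{d^{2} \mathbf{R}_a}{dt^{2}} = \mathbf{F}_a
```

    Here the electron density $\rho$ and current density $\mathbf{J}$ built from the Kohn-Sham orbitals $\psi_j$ feed back into Maxwell’s equations, and the forces $\mathbf{F}_a$ on the ions are computed from the same electronic state, which is what makes the three equations a single coupled system.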
    The researchers implemented the method in their in-house software SALMON (Scalable Ab initio Light-Matter simulator for Optics and Nanoscience), and they thoroughly optimized the simulation computer code to maximize its performance. They then tested the code by modeling light-matter interactions in a thin film of amorphous silicon dioxide, composed of more than 10,000 atoms. This simulation was carried out using almost 28,000 nodes of the fastest supercomputer in the world, Fugaku, at the RIKEN Center for Computational Science in Kobe, Japan.
    “We found that our code is extremely efficient, achieving the goal of one second per time step of the calculation that is needed for practical applications,” says Professor Yabana. “The performance is close to its maximum possible value, set by the bandwidth of the computer memory, and the code has the desirable property of excellent weak scalability.”
    Although the team simulated light-matter interactions in a thin film in this work, their approach could be used to explore many phenomena in nanoscale optics and photonics.
    Story Source:
    Materials provided by University of Tsukuba.

  • Integrated photonics for quantum technologies

    An international team of leading scientists, headed up by Paderborn physicist Professor Klaus Jöns, has compiled a comprehensive overview of the potential, global outlook, background and frontiers of integrated photonics. The paper — a roadmap for integrated photonic circuits for quantum technologies — has now been published in the journal Nature Reviews Physics. The review outlines the underlying technologies, presents the current state of research and describes possible future applications.
    “Photonic quantum technologies have reached a number of important milestones over the last 20 years. However, scalability remains a major challenge when it comes to translating results from the lab to everyday applications. Applications often require more than 1,000 optical components, all of which have to be individually optimised. Photonic quantum technologies can, though, benefit from the parallel developments in classical photonic integration,” explains Jöns. According to the scientists, more research is required. “The integrated photonic platforms, which require a variety of multiple materials, component designs and integration strategies, bring multiple challenges, in particular signal losses, which are not easily compensated for in the quantum world,” continues Jöns. In their paper, the authors state that the complex innovation cycle for integrated photonic quantum technologies (IPQT) requires investments, the resolution of specific technological challenges, the development of the necessary infrastructure and further structuring towards a mature ecosystem. They conclude that there is an increasing demand for scientists and engineers with substantial knowledge of quantum mechanics and its technological applications.
    Integrated quantum photonics uses classical integrated photonic technologies and devices for quantum applications, whereby chip-level integration is critical for scaling up and translating laboratory demonstrators to real-life technologies. Jöns explains: “Efforts in the field of integrated quantum photonics are broad-ranging and include the development of quantum photonic circuits, which can be monolithically, hybrid or heterogeneously integrated. In our paper, we discuss what applications may become possible in the future by overcoming the current roadblocks.” The scientists also provide an overview of the research landscape and discuss the innovation and market potential. The aim is to stimulate further research and research funding by outlining not only the scientific issues, but also the challenges related to the development of the necessary manufacturing infrastructure and supply chains for bringing the technologies to market.
    According to the scientists, there is an urgent need to invest heavily in education in order to train the next generation of IPQT engineers. Jöns says: “Regardless of the type of technology that will be used in commercial quantum devices, the underlying principles of quantum mechanics are the same. We predict an increasing demand for scientists and engineers with substantial knowledge of both quantum mechanics and its technological applications. Investing in educating the next generation will contribute to pushing the scientific and technological frontiers.”
    Story Source:
    Materials provided by Universität Paderborn.