More stories

  • AI accurately diagnoses prostate cancer, study shows

    Researchers at Karolinska Institutet in Sweden, together with international collaborators, have completed a comprehensive international validation of artificial intelligence (AI) for diagnosing and grading prostate cancer. The study, published in Nature Medicine, shows that AI systems can identify and grade prostate cancer in tissue samples from different countries as well as pathologists can. The results suggest that AI systems are ready to be responsibly introduced as a complementary tool in prostate cancer care, the researchers say.
    The international validation was performed via a competition called PANDA, which ran for three months and challenged more than 1,000 AI experts to develop systems for accurately grading prostate cancer.
    Rapid innovation
    “Only ten days into the competition, algorithms matching average pathologists were developed. Organising PANDA shows how competitions can accelerate rapid innovation for solving specific problems in healthcare with the help of AI,” says Kimmo Kartasalo, a researcher at the Department of Medical Epidemiology and Biostatistics at Karolinska Institutet and corresponding author of the study.
    A problem in today’s prostate cancer diagnostics is that different pathologists can arrive at different conclusions even for the same tissue samples, which means that treatment decisions are based on uncertain information. The researchers believe that AI technology holds great potential to improve reproducibility, that is, the consistency of tissue-sample assessments irrespective of which pathologist performs the evaluation, leading to more accurate treatment selection.
    Accurate diagnostics
    The KI researchers have shown in earlier studies that AI systems can indicate whether a tissue sample contains cancer, estimate the amount of tumour tissue in the biopsy, and grade the severity of prostate cancer comparably to international experts. However, the main challenge in implementing AI in healthcare is that AI systems are often highly sensitive to data that differ from the data used to train them, and may consequently fail to produce reliable and robust results when applied in other hospitals and other countries.
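    As a concrete footnote on how grading agreement is measured: PANDA ranked submissions by their agreement with expert reference grades using quadratically weighted Cohen’s kappa on ISUP grade groups (0–5). A minimal pure-Python sketch of that metric:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes=6):
    """Quadratically weighted Cohen's kappa, the agreement metric used in
    the PANDA challenge (ISUP grade groups 0-5). Disagreements are
    penalized by the squared distance between the two grades."""
    n = len(y_true)
    # observed confusion matrix
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    # expected matrix from the two marginal histograms
    hist_t = [sum(row) for row in O]
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic weight
            expected = hist_t[i] * hist_p[j] / n
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den
```

A value of 1.0 means perfect agreement with the reference grades, while 0 means chance-level agreement.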

  • When water is coming from all sides

    When Hurricanes Harvey (2017) and Florence (2018) hit, it was not solely the storm surge from the Gulf of Mexico and the Atlantic Ocean that led to flooding. Inland sources, like rain-swollen rivers, lakes, and suburban culverts, also contributed significantly. Many computer models at the time missed these factors and underestimated the flood risk.
    “People don’t care as much about whether flooding is coming from the river or the ocean, especially when both contribute to water levels; they want to know, ‘Is my house going to be flooded?'” said Edward Myers, branch chief of the Coastal Marine Modeling Branch, located in the Coast Survey Development Laboratory at the National Oceanic and Atmospheric Administration (NOAA).
    Myers and his colleagues at NOAA are collaborating with Y. Joseph Zhang from the Virginia Institute of Marine Science (VIMS) at William & Mary to develop and test the world’s first three-dimensional operational storm surge model.
    “We started with the right attitude and the right core algorithm,” joked Zhang, research professor at the Center for Coastal Resources Management. “Over the years, we’ve re-engineered the dynamic core multiple times and that led to the current modeling system.”
    Now in its third incarnation, the Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM) forecasts coastal flooding in Taiwan, at agencies across the European Union, and elsewhere. It is being considered for operational use by NOAA. (The researchers described the system in the November 2021 issue of Eos, the science news magazine of the American Geophysical Union.)
    SCHISM is designed to serve the needs of a wide range of potential users. “Compound surge and flooding is a world-wide hazard,” Zhang said. “It’s notoriously challenging, especially in the transition zone where the river meets the sea. Lots of factors come into play and interact non-linearly.”
    Surrounding the hydrodynamic core of SCHISM are numerous modules that simulate other phenomena important to flooding. These include air-sea exchange, vegetation, and sediment. Other modules adapt the system for specific events, like oil spills, or to predict conditions, like water quality.
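    The compound-flooding idea can be illustrated with a toy calculation (a 1D diffusive water-level model, nothing like SCHISM’s actual 3D unstructured-grid solver): force one end of a channel with ocean surge and the other with a raised river stage, and the middle “transition zone” floods more than either source alone would produce.

```python
import numpy as np

# Toy 1D diffusive water-level model (illustration only; SCHISM itself is a
# 3D semi-implicit unstructured-grid solver, far beyond this sketch).
# Left boundary: ocean surge height; right boundary: river stage. Raising
# both floods the middle "transition zone" more than either source alone.

def steady_levels(surge_m, river_m, n=51, d=1.0, dt=0.2, steps=10000):
    h = np.zeros(n)                     # water-level anomaly along the channel
    for _ in range(steps):
        h[0], h[-1] = surge_m, river_m  # boundary forcing
        h[1:-1] += dt * d * (h[2:] - 2.0 * h[1:-1] + h[:-2])  # diffusion
    return h

def midpoint(h):
    return h[len(h) // 2]

only_surge = midpoint(steady_levels(2.0, 0.0))   # surge alone
only_river = midpoint(steady_levels(0.0, 1.5))   # swollen river alone
compound   = midpoint(steady_levels(2.0, 1.5))   # both at once
```

In this linear toy the compound level is just the sum of the two single-source levels; real compound flooding interacts non-linearly, which is exactly why systems like SCHISM solve the full dynamics rather than superposing separate forecasts.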

  • Machine learning for morphable materials

    Flat materials that can morph into three-dimensional shapes have potential applications in architecture, medicine, robotics, space travel, and much more. But programming these shape changes requires complex and time-consuming computations.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a platform that uses machine learning to program the transformation of 2D stretchable surfaces into specific 3D shapes.
    “While machine learning methods have been classically employed for image recognition and language processing, they have also recently emerged as powerful tools to solve mechanics problems,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. “In this work we demonstrate that these tools can be extended to study the mechanics of transformable, inflatable systems.”
    The research is published in Advanced Functional Materials.
    The research team began by dividing an inflatable membrane into a 10×10 grid of 100 square pixels, each of which can be either soft or stiff. The soft and stiff pixels can be combined in 2^100 possible configurations, making manual programming extremely difficult. That’s where machine learning comes in.
    The researchers used finite element simulations to sample this vast design space. Then neural networks used that sample to learn how the location of soft and stiff pixels controls the deformation of the membrane when it is pressurized.
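    The surrogate-learning loop can be sketched as follows. The “simulation” below is a hypothetical stand-in for the paper’s finite element runs, and the network is deliberately tiny; the point is only the pipeline of sampling layouts, scoring them, and fitting a network to the layout-to-response map.

```python
import numpy as np

# Sketch of surrogate learning for a 10x10 soft/stiff pixel design:
# sample binary layouts, score each with a stand-in "simulation"
# (a placeholder for finite element analysis -- the real deformation
# response is far richer), then fit a small neural network to predict
# the response directly from the layout.

rng = np.random.default_rng(0)

def toy_simulation(design):
    # hypothetical scalar deformation response; NOT the paper's FEM model
    w = np.linspace(-1.0, 1.0, design.size)
    return float(design @ w)

X = rng.integers(0, 2, size=(400, 100)).astype(float)  # 400 sampled layouts
y = np.array([toy_simulation(x) for x in X])

# one-hidden-layer network, plain gradient descent on mean squared error
W1 = rng.normal(0.0, 0.1, (100, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, 16);        b2 = 0.0
losses, lr = [], 0.05
for _ in range(1500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # predicted response
    err = pred - y
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

Once trained, such a surrogate is cheap to query, so the design direction can also be inverted: search over layouts for one whose predicted deformation matches a target shape.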

  • New cloud-based platform opens genomics data to all

    Harnessing the power of genomics to find risk factors for major diseases or search for relatives requires analyzing huge numbers of genomes, which is costly and time-consuming. A team co-led by a Johns Hopkins University computer scientist has leveled the playing field by creating a cloud-based platform that grants genomics researchers easy access to one of the world’s largest genomics databases.
    Known as AnVIL (Genomic Data Science Analysis, Visualization, and Informatics Lab-space), the new platform gives any researcher with an Internet connection access to thousands of analysis tools, patient records, and more than 300,000 genomes. The work, a project of the National Human Genome Research Institute (NHGRI), appears today in Cell Genomics.
    “AnVIL is inverting the model of genomics data sharing, offering unprecedented new opportunities for science by connecting researchers and datasets in new ways and promising to enable exciting new discoveries,” said project co-leader Michael Schatz, Bloomberg Distinguished Professor of Computer Science and Biology at Johns Hopkins.
    Typically, genomic analysis starts with researchers downloading massive amounts of data from centralized warehouses to their own data centers, a process that is time-consuming, inefficient, and expensive, and that makes collaborating with researchers at other institutions difficult.
    “AnVIL will be transformative for institutions of all sizes, especially smaller institutions that don’t have the resources to build their own data centers. It is our hope that AnVIL levels the playing field, so that everyone has equal access to make discoveries,” Schatz said.
    Genetic risk factors for ailments such as cancer or cardiovascular disease are often very subtle, requiring researchers to analyze thousands of patients’ genomes to discover new associations. The raw data for a single human genome comprises about 40 GB, the equivalent of roughly 10 DVDs, so downloading thousands of genomes can take several days to several weeks and means moving “tens of thousands of DVDs worth of data,” Schatz said.
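    The arithmetic behind those transfer times is easy to reproduce (the 1 Gb/s link speed below is an illustrative assumption, not a figure from the article):

```python
# Back-of-the-envelope genome transfer time: 40 GB of raw data per genome
# (per the article), moved over an assumed 1 Gb/s research network link.

def download_days(n_genomes, gb_per_genome=40, link_gbit_per_s=1.0):
    seconds = n_genomes * gb_per_genome * 8 / link_gbit_per_s  # GB -> Gbit
    return seconds / 86_400                                    # s per day

# a few thousand genomes already means roughly a week and a half of
# continuous, saturated transfer -- before any analysis has even started
```

Moving the computation to where the data already live, as AnVIL does, removes this step entirely.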

  • Photon pairs are more sensitive to rotations than single photons

    In the field of quantum metrology, scientists are developing novel measurement schemes that exploit quantum features to achieve greater precision and sensitivity than conventional classical methods. A team of researchers from Tampere University, Finland, and the National Research Council of Canada has now shown how a simple but powerful technique based on two-photon N00N states can be used to create spatially structured quantum states of light that go beyond the classical limit in rotation estimation. The results are published in the journal Physical Review Letters.
    “Our experimental results demonstrate a simple but powerful way of custom-tailoring two-photon quantum states and hold promise for applications that require high measurement precision. The simplicity of our method opens a path to creating a measurement system that beats the classical estimation limit with current technologies,” explains Doctoral Researcher and lead author Markus Hiekkamäki.
    Measurement precisions at the absolute quantum limit
    The method utilizes a fundamental quantum feature: the interference between two photons, often termed photon bunching. In contrast to the more common bunching into the same physical path, the novel scheme leads to bunching into the same spatial structure.
    “In our case, the quantum interference results in an entangled state of two photons. Because of the quantum nature of the realized state, the entangled photon pair gives a better measurement precision when compared to the same spatial shape imprinted on a similar amount of single photons or laser light. Using a counter-intuitive quantum response, we were able to show that it will be possible to achieve measurement precisions at the absolute quantum limit,” says Associate Professor Robert Fickler, leader of the Experimental Quantum Optics group at Tampere University.
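    The scaling behind that claim is standard quantum metrology and can be checked in a few lines (an idealized N00N-state fringe, not a simulation of the spatial-mode experiment): an N-photon N00N state accumulates phase N times faster than a single photon, so its interference fringe carries N² Fisher information and the uncertainty bound improves from the classical 1/√N to the Heisenberg limit 1/N.

```python
import math

# Fisher information of an idealized N00N-state interference fringe,
# p(phi) = (1 + cos(N * phi)) / 2, for a binary detection outcome.
# Algebraically F = N^2 for any phase away from the fringe extrema, so the
# phase uncertainty bound is delta_phi >= 1/sqrt(F) = 1/N (Heisenberg limit).

def fringe_fisher_info(phi, n_photons):
    p = (1.0 + math.cos(n_photons * phi)) / 2.0        # detection probability
    dp = -n_photons * math.sin(n_photons * phi) / 2.0  # d p / d phi
    return dp ** 2 / (p * (1.0 - p))

# two-photon N00N state: F = 4, i.e. twice the phase precision of a
# single photon (F = 1) for the same number of detection events
```

For rotation sensing with spatially structured light, the rotation angle enters as such a phase, so the same N-fold enhancement applies to rotation estimation.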
    Besides rotational measurements, the method allows the generation of a large variety of different quantum states for transverse-spatial modes. Hence, it could also be utilized in measurements of many different types of systems as well as in fundamental tests of multi-photon quantum states of light.
    After demonstrating the advantage in rotational estimation, the researchers are now planning on using the method to shed new light on another fundamental property of waves called the Gouy phase. In addition, they study how it could be extended into quantum-enhanced measurement schemes in multiple degrees of freedom.
    Story Source:
    Materials provided by Tampere University.

  • Computer model seeks to explain the spread of misinformation and suggest countermeasures

    It starts with a superspreader, and winds its way through a network of interactions, eventually leaving no one untouched. Those who have been exposed previously may only experience mild effects.
    No, it’s not a virus. It’s the contagious spread of misinformation and disinformation, the latter being misinformation that is deliberately intended to deceive.
    Now Tufts University researchers have come up with a computer model that remarkably mirrors the way misinformation spreads in real life. The work might provide insight on how to protect people from the current contagion of misinformation that threatens public health and the health of democracy, the researchers say.
    “Our society has been grappling with widespread beliefs in conspiracies, increasing political polarization, and distrust in scientific findings,” said Nicholas Rabb, a Ph.D. computer science student at Tufts School of Engineering and lead author of the study, which came out January 7 in the journal PLOS ONE. “This model could help us get a handle on how misinformation and conspiracy theories are spread, to help come up with strategies to counter them.”
    Scientists who study the dissemination of information often take a page from epidemiologists, modeling the spread of false beliefs the way a disease spreads through a social network. Most of those models, however, treat everyone in the network as equally receptive to any new belief passed on by their contacts.
    The Tufts researchers instead based their model on the notion that our pre-existing beliefs can strongly influence whether we accept new information. Many people reject factual information supported by evidence if it takes them too far from what they already believe. Health-care workers have commented on the strength of this effect, observing that some patients dying from COVID cling to the belief that COVID does not exist.
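    That acceptance-depends-on-prior-belief mechanism can be sketched with a bounded-confidence toy model (an illustrative sketch, not the authors’ actual PLOS ONE model):

```python
import random
from statistics import pvariance

# Toy belief-contagion model in the spirit described above (a bounded-
# confidence sketch, NOT the authors' exact PLOS ONE model): agents hold
# a belief in [0, 1] and only move toward a contact's belief when it lies
# within a distance threshold of their own -- claims too far from what an
# agent already believes are simply rejected.

def spread(n=200, threshold=0.3, step=0.5, contacts=5000, seed=1):
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n)]
    beliefs[0] = 1.0                        # the initial "superspreader"
    for _ in range(contacts):
        i, j = rng.sample(range(n), 2)      # a random interaction
        if abs(beliefs[i] - beliefs[j]) < threshold:
            beliefs[j] += step * (beliefs[i] - beliefs[j])  # j moves toward i
    return beliefs
```

With a wide acceptance threshold, beliefs pull together over repeated contacts; with a threshold of zero, nothing spreads at all, which is the qualitative difference between this model and epidemic-style models where everyone is equally receptive.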

  • New model examines the effects of toxicants on populations in polluted rivers

    When designing environmental policies to limit the damage of river pollution, it is paramount to assess the specific risks that particular pollutants pose to different species. However, rigorously testing the effects of toxicants — like insecticides, plastic debris, pathogens, and chemicals — on entire groups of organisms without severely damaging their whole ecosystems is simply not feasible. Mathematical modeling can provide a flexible way to assess toxicants’ impact on river populations without endangering the environment.
    In a paper published today in the SIAM Journal on Applied Mathematics, Peng Zhou (Shanghai Normal University) and Qihua Huang (Southwest University, Chongqing) develop a model that describes the interactions between a population and a toxicant in an advective environment — a setting in which a fluid tends to transport material in one direction, like a river. Such a model can help scientists study how the way in which a pollutant moves through a river affects the wellbeing and distribution of the river’s inhabitants.
    Much of the previous experimental research on the ecological risks of toxicants has been performed on individual organisms in controlled laboratory conditions over a fairly short-term basis. The design of environmental management strategies, however, requires an understanding of toxicants’ impact on the health of entire exposed natural populations in the long term. Fortunately, there is an intermediary. “Mathematical models play a crucial role in translating individual responses to population-level impacts,” Huang said.
    The existing models that describe the way in which toxicants affect population dynamics generally ignore many of the properties of water bodies. But in doing so, they are missing a big piece of the puzzle. “In reality, numerous hydrological and physical characteristics of water bodies can have a substantial impact on the concentration and distribution of a toxicant,” Huang said. “[For example], once a toxicant is released into a river, several dispersal mechanisms — such as diffusion and transport — are present that may aid in the spread of the toxicant.”
    Similarly, the models that mathematicians often use to portray the transport of pollutants through a river also do not include all of the necessary components for this study. These are reaction-advection-diffusion equation models, whose solutions can show how pollutants distribute and vary under different influences like changes in the rate of water flow. While such models enable researchers to predict the evolution of toxicant concentrations and assess their impact on the environment, they do not consider toxicant influence on the dynamics of affected populations. Zhou and Huang thus expanded upon this type of model, adding new elements that allowed them to explore the interaction between a toxicant and a population in a polluted river.
    The authors’ model consists of two reaction-diffusion-advection equations — one that governs the population’s dispersal and growth under the toxicant’s influence, and another that describes the processes that the toxicant experiences. “As far as we know, our model represents the first effort to model the population-toxicant interactions in an advective environment by using reaction-diffusion-advection equations,” Zhou said. “This new model could potentially open a [novel] line of research.”
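    A minimal numerical sketch of such a system is below; the coefficients, reaction terms, and boundary handling are illustrative assumptions, not the equations from the paper.

```python
import numpy as np

# 1D reaction-diffusion-advection toy: u = population density (logistic
# growth, suppressed by the toxicant), c = toxicant concentration
# (released at the upstream end, decaying, carried downstream). Explicit
# finite differences with upwind advection; all parameters illustrative.

def simulate(n=80, dx=1.0, dt=0.02, steps=4000,
             d=1.0, q_u=0.2, q_c=0.2,          # diffusion, advection speeds
             r=0.5, gamma=1.0, k=0.1, c0=0.5): # growth, toxicity, decay
    u = np.full(n, 0.5)                        # initial population everywhere
    c = np.zeros(n)                            # river initially clean
    for _ in range(steps):
        un, cn = u.copy(), c.copy()
        lap = lambda f: f[2:] - 2.0 * f[1:-1] + f[:-2]   # diffusion stencil
        adv = lambda f: f[1:-1] - f[:-2]                 # upwind, flow in +x
        u[1:-1] += dt * (d * lap(un) / dx**2 - q_u * adv(un) / dx
                         + r * un[1:-1] * (1.0 - un[1:-1])
                         - gamma * un[1:-1] * cn[1:-1])
        c[1:-1] += dt * (d * lap(cn) / dx**2 - q_c * adv(cn) / dx
                         - k * cn[1:-1])
        c[0] = c0                                   # constant upstream release
        u[0], u[-1], c[-1] = u[1], u[-2], c[-2]     # open boundaries
    return u, c

u, c = simulate()
# the toxicant concentrates near the upstream source, and the population
# is depressed there while recovering toward carrying capacity downstream
```

Varying the two advection speeds `q_u` and `q_c` against each other reproduces, qualitatively, the trade-off scenarios described in the following paragraphs.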
    The model allows Zhou and Huang to tweak different factors and investigate the resulting changes to the ecosystem. They tried altering the river’s flow speed and the advection rate (the rate at which the toxicant or organisms are carried downstream) and observing these parameters’ influence on the persistence and distribution of both the population and the toxicant. These theoretical results can provide insights that could help inform ecological policies when taken in concert with other information.
    One scenario that the researchers studied involved a toxicant with a much slower advection rate than the population, meaning that it was not washed away as easily. The model showed that, intuitively, the population density decreases with increasing water flow because more individuals are carried downstream and out of the river area in question. The concentration of the toxicant, however, increases with flow speed, because the toxicant resists the downstream current while the organisms that would take it up are swept away.
    In the opposite case, the toxicant has a faster advection rate and is therefore much more sensitive to water flow speed than the population. Increasing the water flow then reduces the toxicant concentration by sweeping the pollutants away. For a medium flow speed, the highest population density occurs downstream because the water flow plays a trade-off role; it transports more toxicants away but also carries more individuals downstream.
    This demonstrates that a higher sensitivity of a pollutant to water flow is generally more advantageous to population persistence. “In the absence of toxicants, it is generally known that the higher the flow speed, the more individuals will be washed out of the river,” Zhou said. “However, our findings suggest that, for a given toxicant level, population abundance may increase as flow rate increases.”
    By providing this model with the parameters for certain species and pollutants, one may be able to determine criteria regarding the water quality that is necessary to maintain aquatic life. This outcome could ultimately aid in the development of policy guidelines surrounding the target species and toxicants. “The findings here offer the basis for effective decision-making tools for water and environment managers,” Huang said. Managers could connect the results from the model with other factors, such as what may happen to the pollutant after it washes downstream.
    Further extensions to Zhou and Huang’s new model could make it even more applicable to real river ecosystems — for example, by allowing the flow velocity and release of toxicants to vary over time, or accounting for the different ways in which separate species may respond to the same pollutant. This mathematical model’s capability to find the population-level effects of toxicants might play a critical part in the accurate assessment of pollutants’ risk to rivers and their inhabitants.

  • Gauging the resilience of complex networks

    Whether a transformer catches fire in a power grid, a species disappears from an ecosystem, or water floods a city street, many systems can absorb a certain amount of disruption. But how badly does a single failure weaken the network? And how much damage can it take before it tips into collapse? Network scientist Jianxi Gao is building tools that can answer those questions, regardless of the nature of the system.
    “After a certain point, damage to a system is so great that it causes catastrophic failure. But the events leading to a loss of resilience in a system are rarely predictable and often irreversible. That makes it hard to prevent a collapse,” said Dr. Gao, an assistant professor of computer science at Rensselaer Polytechnic Institute, who was awarded a National Science Foundation CAREER award to tackle the problem. “The mathematical tools we are building will make it possible to evaluate the resilience of any system. And with that, we can predict and prevent failure.”
    Imagine the effects of climate change on an ecosystem, Dr. Gao said. A species that can’t adapt will dwindle to extinction, perhaps driving a cascade of other species, which eat the first, to the brink of extinction also. As the climate changes, and more species are stressed, Dr. Gao wants the ability to predict the impact of those dwindling populations on the rest of the ecosystem.
    Predicting resilience starts by mapping the system as a network, a graph in which the players (an animal, neuron, power station) are connected by the relationships between them, and how that relationship affects each of the players and the network overall. In one visualization of a network, each of the players is a dot, a node, connected to other players by links that represent the relationship between them — think who eats whom in a forest and how that impacts the overall population of each species, or how information moving across a social media site influences opinions. Over time, the system changes, with some nodes appearing or disappearing, links growing stronger or weaker or changing relationship to one another as the system as a whole responds to that change.
    Mathematically, a changing network can be described by a series of coupled nonlinear equations. And while equations have been developed to map networks in many fields, predicting the resilience of complex networks, or of systems with missing information, overwhelms even the most powerful supercomputers.
    “We’re very limited in what we can do with the existing methods. Even if the network is not very large, we may be able to use the computer to solve the coupled equations, but we cannot simulate many different failure scenarios,” Dr. Gao said.
    Dr. Gao debuted a preliminary solution to the problem in a 2016 paper published in Nature. In that paper, he and his colleagues declared that existing analytical tools are insufficient because they were designed for smaller models with few interacting components, as opposed to the vast networks we want to understand. The authors proposed a new set of tools, designed for complex networks, able to first identify the natural state and control parameters of the network, and then collapse the behavior of different networks into a single, solvable, universal function.
    The tools presented in the Nature paper worked with strict assumptions on a network where all information is known — all nodes, all links, and the interactions between those nodes and links. In the new work, Dr. Gao wants to extend the single universal equation to networks where some of the information is missing. The tools he is developing will estimate missing information — missing nodes and links, and the relationships between them — based on what is already known. The approach reduces accuracy somewhat, but enables a far greater reward than what is lost, Dr. Gao said.
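    The 2016 reduction itself — collapsing the coupled system dx_i/dt = F(x_i) + Σ_j A_ij G(x_i, x_j) onto a single effective equation dx_eff/dt = F(x_eff) + β_eff G(x_eff, x_eff) — can be sketched numerically. The dynamics below (linear decay plus saturating coupling) are an illustrative choice, not a model of any particular real system; the point is that one scalar equation predicts the steady activity of the whole network.

```python
import numpy as np

# Dimension-reduction sketch in the spirit of the 2016 Nature paper:
# integrate a full N-node system with decay F(x) = -x and saturating
# coupling G(xi, xj) = xj / (1 + xj), then compare its degree-weighted
# mean activity against the fixed point of the one-dimensional reduction
# dx_eff/dt = -x_eff + beta_eff * x_eff / (1 + x_eff),
# whose nonzero fixed point is x_eff = beta_eff - 1.

rng = np.random.default_rng(3)
n = 60
A = rng.random((n, n)) * (rng.random((n, n)) < 0.2)  # random weighted links
np.fill_diagonal(A, 0.0)

g = lambda x: x / (1.0 + x)          # saturating coupling function
x = np.full(n, 2.0)
for _ in range(5000):
    x += 0.01 * (-x + A @ g(x))      # Euler integration of the full system

s_in, s_out = A.sum(axis=1), A.sum(axis=0)
x_eff = s_out @ x / s_out.sum()        # degree-weighted average activity
beta_eff = s_out @ s_in / s_out.sum()  # effective network connectivity
x_pred = beta_eff - 1.0                # fixed point of the 1-D reduction
```

The single scalar β_eff summarizes the whole topology, so weakening links or removing nodes shows up as a drop in β_eff, and the 1-D equation immediately says whether the network can still sustain activity — which is the sense in which one equation can gauge the resilience of a network of millions of nodes.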
    “For a network of millions or even billions of nodes, I will be able to use just one equation to estimate the macroscopic behavior of the network. Of course, I will lose some information, some accuracy, but I capture the most important dynamics or properties of the whole system,” Dr. Gao said. “Right now, people cannot do that. They cannot test the system, find where it gives way, and better still, improve it so that it will not fail.”
    “The ability to analyze and predict weaknesses across a variety of network types gives us a vast amount of power to safeguard vulnerable networks and ecosystems before they fail,” said Curt Breneman, dean of the Rensselaer School of Science. “This is the kind of work that changes the game, and this CAREER award is a recognition of that potential. We congratulate Jianxi and expect great things from his research.”