More stories

  • Creating a reference map to explore the electronic device mimicking brain activity

    Maps are essential for exploring trackless wilderness or vast expanses of ocean. The same is true for scientific studies that try to open up new fields and develop brand-new devices. A journey without maps and signposts tends to end in vain.
    In the world of “neuromorphic devices,” electronic devices that mimic neural cells like those in our brain, researchers have long been forced to travel without maps. Such devices promise a fresh field of brain-inspired computers with substantial benefits such as low energy consumption. But their operating mechanism has remained unclear, particularly with regard to controlling the response speed.
    A research group from Tohoku University and the University of Cambridge brought clarity in a recent study published in the journal Advanced Electronic Materials on January 13, 2022.
    They looked into organic electrochemical transistors (OECTs), which are often used in neuromorphic devices and operate by controlling the movement of ions in the active layer. The analysis revealed that the response timescale depends on the size of the ions in the electrolyte.
    Based on these experimental results, the group modeled the neuromorphic response of the devices. Comparisons with the data showed that the movement of ions in the OECT controlled the response. This indicates that tuning the timescale of ion movement can be an effective way to regulate the neuromorphic behavior of OECTs.
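    A minimal sketch of this kind of first-order picture, assuming the drain current relaxes exponentially toward its steady state with a time constant that grows with ion size; both the exponential form and the linear scaling are illustrative assumptions, not the paper's fitted model.
```python
import numpy as np

# Illustrative first-order model of an OECT's response to a gate-voltage step:
# the drain current relaxes toward a steady state with time constant tau.
# ASSUMPTION: tau grows linearly with hydrated ion radius (larger ions move
# more slowly through the active layer); the scaling factor is hypothetical.
def oect_step_response(t, ion_radius_nm, tau_per_nm=5e-3, i_steady=1.0):
    """Drain-current response (arbitrary units) to a gate step at t = 0."""
    tau = tau_per_nm * ion_radius_nm           # seconds, hypothetical scaling
    return i_steady * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 0.05, 500)                  # 0-50 ms
for radius_nm in (0.3, 0.6, 1.2):              # representative ion sizes, nm
    response = oect_step_response(t, radius_nm)
    t90 = t[np.argmax(response >= 0.9)]        # time to reach 90% of steady state
    print(f"ion radius {radius_nm} nm -> 90% response at {t90 * 1e3:.1f} ms")
```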
    “We obtained a map that provides rational design guidelines for neuromorphic devices through changing ion size and material composition in the active layer,” said Shunsuke Yamamoto, paper corresponding author and assistant professor at Tohoku University’s Graduate School of Engineering. “Further studies will pave the way for application to artificial neural networks and lead to better and more precise designs of the conducting polymer materials used in this field.”
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  • Mathematical model may help improve treatments and clinical trials of patients with COVID-19 and other illnesses

    Investigators who recently developed a mathematical model that indicated why treatment responses vary widely among individuals with COVID-19 have now used the model to identify biological markers related to these different responses. The team, which was led by scientists at Massachusetts General Hospital (MGH) and the University of Cyprus, notes that the model can be used to provide a better understanding of the complex interactions between illness and response and can help clinicians provide optimal care for diverse patients.
    The work, which is published in EBioMedicine, was initiated because COVID-19 is extremely heterogeneous, meaning that illness following SARS-CoV-2 infection ranges from asymptomatic to life-threatening conditions such as respiratory failure or acute respiratory distress syndrome (ARDS), in which fluid collects in the lungs. “Even within the subset of critically ill COVID-19 patients who develop ARDS, there exists substantial heterogeneity. Significant efforts have been made to identify subtypes of ARDS defined by clinical features or biomarkers,” explains co-senior author Rakesh K. Jain, PhD, director of the E.L. Steele Laboratories for Tumor Biology at MGH and the Andrew Werk Cook Professor of Radiation Oncology at Harvard Medical School (HMS). “To predict disease progression and personalize treatment, it is necessary to determine the associations among clinical features, biomarkers and underlying biology. Although this can be achieved over the course of numerous clinical trials, this process is time-consuming and extremely expensive.”
    As an alternative, Jain and his colleagues used their model to analyze the effects of different patient characteristics on outcomes following treatment with different therapies. This allowed the team to determine the optimal treatment for distinct categories of patients, reveal the biologic pathways responsible for different clinical responses, and identify markers of these pathways.
    The researchers simulated six patient types (defined by the presence or absence of different comorbidities) and three types of therapies that modulate the immune system. “Using a novel treatment efficacy scoring system, we found that older and hyperinflamed patients respond better to immunomodulation therapy than obese and diabetic patients,” says co-senior and corresponding author Lance Munn, PhD, who is the deputy director of the Steele Labs and an associate professor at HMS. “We also found that the optimal time to initiate immunomodulation therapy differs between patients and also depends on the drug itself.” Certain biological markers that differed based on patient characteristics determined optimal treatment initiation time, and these markers pointed to particular biologic programs or mechanisms that impacted a patient’s outcome. The markers also matched clinically identified markers of disease severity.
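    The published model couples viral, immune and tissue dynamics; the sketch below only mirrors the shape of the analysis described above, scanning hypothetical patient profiles, therapies and treatment start days against a placeholder scoring function and reporting the best start day for each combination. Every name and number in it is made up for illustration.
```python
import itertools

# Placeholder efficacy score: NOT the published scoring system. It simply
# peaks when treatment starts near a hypothetical, patient-dependent
# inflammatory peak, shifted slightly by a drug-specific offset.
def efficacy_score(patient, therapy, start_day):
    peak = {"older": 4, "hyperinflamed": 2, "obese": 6, "diabetic": 7}[patient]
    offset = {"therapy A": 0, "therapy B": 1, "therapy C": -1}[therapy]
    return 1.0 - 0.1 * abs(start_day - (peak + offset))

patients = ["older", "hyperinflamed", "obese", "diabetic"]
therapies = ["therapy A", "therapy B", "therapy C"]
start_days = range(10)

for patient, therapy in itertools.product(patients, therapies):
    best = max(start_days, key=lambda d: efficacy_score(patient, therapy, d))
    print(f"{patient:13s} + {therapy}: best start day {best}")
```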
    For COVID-19 as well as other conditions, the team’s approach could enable investigators to enrich a clinical trial with patients most likely to respond to a given drug. “Such enrichment based on prospectively predicted biomarkers is a potential strategy for increasing precision of clinical trials and accelerating therapy development,” says co-senior author Triantafyllos Stylianopoulos, PhD, an associate professor at the University of Cyprus.
    Other co-authors include Sonu Subudhi, Chrysovalantis Voutouri, C. Corey Hardin, Mohammad Reza Nikmaneshi, Melin J. Khandekar and Sayon Dutta from MGH; and Ankit B. Patel and Ashish Verma from Brigham and Women’s Hospital.
    Funding for the study was provided by the National Institutes of Health, Harvard Ludwig Cancer Center, Niles Albright Research Foundation and Jane’s Trust Foundation. Voutouri is a recipient of a Marie Skłodowska-Curie Actions Individual Fellowship.
    Story Source:
    Materials provided by Massachusetts General Hospital. Note: Content may be edited for style and length.

  • The first AI breast cancer sleuth that shows its work

    Computer engineers and radiologists at Duke University have developed an artificial intelligence platform to analyze potentially cancerous lesions in mammography scans to determine if a patient should receive an invasive biopsy. But unlike its many predecessors, this algorithm is interpretable, meaning it shows physicians exactly how it came to its conclusions.
    The researchers trained the AI to locate and evaluate lesions just as an actual radiologist would be trained, rather than allowing it to freely develop its own procedures, which gives it several advantages over its “black box” counterparts. It could serve as a useful training platform to teach students how to read mammography images. It could also help physicians in sparsely populated regions around the world who do not regularly read mammography scans make better health care decisions.
    The results appeared online December 15 in the journal Nature Machine Intelligence.
    “If a computer is going to help make important medical decisions, physicians need to trust that the AI is basing its conclusions on something that makes sense,” said Joseph Lo, professor of radiology at Duke. “We need algorithms that not only work, but explain themselves and show examples of what they’re basing their conclusions on. That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.”
    Engineering AI that reads medical images is a huge industry. Thousands of independent algorithms already exist, and the FDA has approved more than 100 of them for clinical use. Whether reading MRI, CT or mammogram scans, however, very few of them use validation datasets with more than 1000 images or contain demographic information. This dearth of information, coupled with the recent failures of several notable examples, has led many physicians to question the use of AI in high-stakes medical decisions.
    In one instance, an AI model failed even when researchers trained it with images taken from different facilities using different equipment. Rather than focusing exclusively on the lesions of interest, the AI learned to use subtle differences introduced by the equipment itself to recognize the images coming from the cancer ward and assign those lesions a higher probability of being cancerous. As one would expect, the AI did not transfer well to other hospitals using different equipment. But because nobody knew what the algorithm was looking at when making decisions, nobody knew it was destined to fail in real-world applications.
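    A toy illustration of that shortcut-learning failure (made-up features, not the study's data): a "scanner artifact" feature that happens to track the label at the training hospital dominates the model, and accuracy collapses at a hospital where the artifact is uninformative.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_site(n, artifact_tracks_label):
    """Toy data: a weak 'lesion' feature plus a 'scanner artifact' feature."""
    y = rng.integers(0, 2, n)
    lesion = y + rng.normal(0, 1.5, n)           # weakly predictive everywhere
    if artifact_tracks_label:
        artifact = y + rng.normal(0, 0.1, n)     # near-perfect proxy at this site
    else:
        artifact = rng.normal(0, 1.0, n)         # pure noise at the new hospital
    return np.column_stack([lesion, artifact]), y

X_train, y_train = make_site(2000, artifact_tracks_label=True)
X_new, y_new = make_site(2000, artifact_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy at training hospital:", round(model.score(X_train, y_train), 2))
print("accuracy at a new hospital:   ", round(model.score(X_new, y_new), 2))
```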

  • Quantum particles can feel the influence of gravitational fields they never touch

    If you’re superstitious, a black cat in your path is bad luck, even if you keep your distance. Likewise, in quantum physics, particles can feel the influence of magnetic fields that they never come into direct contact with. Now scientists have shown that this eerie quantum effect holds not just for magnetic fields, but for gravity too — and it’s no superstition.

    Usually, to feel the influence of a magnetic field, a particle would have to pass through it. But in 1959, physicists Yakir Aharonov and David Bohm predicted that, in a specific scenario, the conventional wisdom would fail. A magnetic field contained within a cylindrical region can affect particles — electrons, in their example — that never enter the cylinder. In this scenario, the electrons don’t have well-defined locations, but are in “superpositions,” quantum states described by the odds of a particle materializing in two different places. Each fractured particle simultaneously takes two different paths around the magnetic cylinder. Despite never touching the electrons, and hence exerting no force on them, the magnetic field shifts the pattern of where particles are found at the end of this journey, as various experiments have confirmed (SN: 3/1/86).
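
    For reference, the phase difference that the enclosed flux imprints between the two paths takes the standard textbook form

        \Delta\varphi \;=\; \frac{q}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell} \;=\; \frac{q\,\Phi_B}{\hbar},

    where q is the electron's charge, \mathbf{A} the vector potential and \Phi_B the magnetic flux threading the cylinder. Changing this phase shifts the interference pattern even though the field, and hence the force, vanishes along both paths.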

    In the new experiment, the same uncanny physics is at play for gravitational fields, physicists report in the Jan. 14 Science. “Every time I look at this experiment, I’m like, ‘It’s amazing that nature is that way,’” says physicist Mark Kasevich of Stanford University.

    Kasevich and colleagues launched rubidium atoms inside a 10-meter-tall vacuum chamber, hit them with lasers to put them in quantum superpositions tracing two different paths, and watched how the atoms fell. Notably, the particles weren’t in a gravitational field–free zone. Instead, the experiment was designed so that the researchers could filter out the effects of gravitational forces, laying bare the eerie Aharonov-Bohm influence.

    The study not only reveals a famed physics effect in a new context, but also showcases the potential to study subtle effects in gravitational systems. For example, researchers aim to use this type of technique to better measure Newton’s gravitational constant, G, which reveals the strength of gravity, and is currently known less precisely than other fundamental constants of nature (SN: 8/29/18).

    A phenomenon called interference is key to this experiment. In quantum physics, atoms and other particles behave like waves that can add and subtract, just as two swells merging in the ocean make a larger wave. At the end of the atoms’ flight, the scientists recombined the atoms’ two paths so their waves would interfere, then measured where the atoms arrived. The arrival locations are highly sensitive to tweaks that alter where the peaks and troughs of the waves land, known as phase shifts.

    At the top of the vacuum chamber, the researchers placed a hunk of tungsten with a mass of 1.25 kilograms. To isolate the Aharonov-Bohm effect, the scientists performed the same experiment with and without this mass, and for two different sets of launched atoms, one of which flew close to the mass while the other flew lower. Each of those two sets of atoms was split into superpositions, with one path traveling closer to the mass than the other, separated by about 25 centimeters. Other sets of atoms, with superpositions split across smaller distances, rounded out the crew. Comparing how the various sets of atoms interfered, both with and without the tungsten mass, teased out a phase shift that was not due to the gravitational force. Instead, that tweak was from time dilation, a feature of Einstein’s theory of gravity, general relativity, which causes time to pass more slowly close to a massive object.
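
    Schematically, and leaving aside the experiment's detailed geometry and propagation corrections, the proper-time difference between the two arms imprints a phase

        \Delta\varphi \;\approx\; \frac{m c^2}{\hbar}\,\Delta\tau \;\approx\; \frac{m}{\hbar}\int \big[\Phi(\mathbf{r}_1(t)) - \Phi(\mathbf{r}_2(t))\big]\,dt, \qquad \Phi(r) = -\frac{G M}{r},

    where m is the mass of a rubidium atom, M that of the tungsten source and \mathbf{r}_1, \mathbf{r}_2 the two arm trajectories: the arms sit at slightly different gravitational potentials, so their internal clocks tick at slightly different rates even after the direct force contributions are subtracted out. (The signs and prefactors here are schematic; the paper's analysis is more involved.)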

    The two theories that underlie this experiment, general relativity and quantum mechanics, don’t work well together. Scientists don’t know how to combine them to describe reality. So, for physicists, says Guglielmo Tino of the University of Florence, who was not involved with the new study, “probing gravity with a quantum sensor, I think it’s really one of … the most important challenges at the moment.”

  • AI accurately diagnoses prostate cancer, study shows

    Researchers at Karolinska Institutet in Sweden, together with international collaborators, have completed a comprehensive international validation of artificial intelligence (AI) for diagnosing and grading prostate cancer. The study, published in Nature Medicine, shows that AI systems can identify and grade prostate cancer in tissue samples from different countries as accurately as pathologists. The results suggest AI systems are ready to be responsibly introduced as a complementary tool in prostate cancer care, the researchers say.
    The international validation was performed via a competition called PANDA. The competition lasted for three months and challenged more than 1000 AI experts to develop systems for accurately grading prostate cancer.
    Rapid innovation
    “Only ten days into the competition, algorithms matching average pathologists were developed. Organising PANDA shows how competitions can accelerate rapid innovation for solving specific problems in healthcare with the help of AI,” says Kimmo Kartasalo, a researcher at the Department of Medical Epidemiology and Biostatistics at Karolinska Institutet and corresponding author of the study.
    A problem in today’s prostate cancer diagnostics is that different pathologists can arrive at different conclusions even for the same tissue samples, which means that treatment decisions are based on uncertain information. The researchers believe the use of AI technology holds great potential for improved reproducibility, that is, increased consistency of the assessments of tissue samples irrespective of which pathologist performs the evaluation, leading to more accurate treatment selection.
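    Grader-to-grader consistency of this kind is usually summarized with an agreement statistic such as Cohen's kappa; a quadratically weighted variant is a customary choice for ordered grades because it penalizes large disagreements more than near-misses. A minimal example with made-up grade assignments:
```python
from sklearn.metrics import cohen_kappa_score

# Made-up ISUP-style grade groups (0 = benign, 1-5 = increasing severity)
# assigned to the same 12 biopsies by two hypothetical raters.
rater_a = [0, 1, 1, 2, 3, 3, 4, 5, 2, 0, 1, 4]
rater_b = [0, 1, 2, 2, 3, 4, 4, 5, 3, 0, 1, 3]

# Quadratic weighting punishes a 1-vs-5 disagreement far more than a 3-vs-4 one.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"quadratically weighted kappa: {kappa:.2f}")
```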
    Accurate diagnostics
    The KI researchers have shown in earlier studies that AI systems can indicate whether a tissue sample contains cancer, estimate the amount of tumour tissue in the biopsy, and grade the severity of prostate cancer comparably to international experts. However, the main challenge in implementing AI in healthcare is that AI systems are often highly sensitive to data that differ from the data used to train them, and may consequently not produce reliable and robust results when applied in other hospitals and other countries.

  • When water is coming from all sides

    When Hurricanes Harvey (2017) and Florence (2018) hit, it was not solely the storm surge from the Gulf of Mexico and Atlantic Ocean that led to flooding. Inland sources, like rain-swollen rivers, lakes, and suburban culverts, also contributed significantly. These factors were missed by many computer models at the time, which underestimated the flood risk.
    “People don’t care as much as to whether flooding is coming from the river or the ocean, especially when both contribute to water levels, as they want to know, ‘Is my house going to be flooded?'” said Edward Myers, branch chief of the Coastal Marine Modeling Branch, located in the Coast Survey Development Laboratory at the National Oceanic and Atmospheric Administration (NOAA).
    Myers and his colleagues at NOAA are collaborating with Y. Joseph Zhang from the Virginia Institute of Marine Science (VIMS) at William & Mary to develop and test the world’s first three-dimensional operational storm surge model.
    “We started with the right attitude and the right core algorithm,” joked Zhang, research professor at the Center for Coastal Resources Management. “Over the years, we’ve re-engineered the dynamic core multiple times and that led to the current modeling system.”
    Now in its third incarnation, the Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM) forecasts coastal flooding in Taiwan, at agencies across the European Union, and elsewhere. It is being considered for operational use by NOAA. (The researchers described the system in the Nov. 2021 issue of EOS, the science news magazine of the American Geophysical Union.)
    SCHISM is designed to serve the needs of a wide range of potential users. “Compound surge and flooding is a world-wide hazard,” Zhang said. “It’s notoriously challenging, especially in the transition zone where the river meets the sea. Lots of factors come into play and interact non-linearly.”
    Surrounding the hydrodynamic core of SCHISM are numerous modules that simulate other phenomena important to flooding. These include air-sea exchange, vegetation, and sediment. Other modules adapt the system for specific events, like oil spills, or to predict conditions, like water quality.
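    Purely as a sketch of that plug-in architecture (hypothetical Python classes, not SCHISM's actual Fortran interfaces or physics): a central hydrodynamic core advances the state while optional modules each contribute their own forcing.
```python
# Hypothetical illustration of a core-plus-modules design; placeholder numbers.
class HydroCore:
    """Stand-in for the semi-implicit hydrodynamic solver at the center."""
    def __init__(self):
        self.water_level = 0.0

    def step(self, forcings):
        self.water_level += sum(forcings.values())   # toy update rule
        return self.water_level

class AirSeaExchange:
    def contribution(self, core):
        return 0.02            # placeholder surge forcing, metres per step

class Vegetation:
    def contribution(self, core):
        return -0.005          # placeholder drag/attenuation, metres per step

core = HydroCore()
modules = {"air_sea": AirSeaExchange(), "vegetation": Vegetation()}
for _ in range(3):
    forcings = {name: m.contribution(core) for name, m in modules.items()}
    print(f"water level: {core.step(forcings):.3f} m")
```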

  • Machine learning for morphable materials

    Flat materials that can morph into three-dimensional shapes have potential applications in architecture, medicine, robotics, space travel, and much more. But programming these shape changes requires complex and time-consuming computations.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a platform that uses machine learning to program the transformation of 2D stretchable surfaces into specific 3D shapes.
    “While machine learning methods have been classically employed for image recognition and language processing, they have also recently emerged as powerful tools to solve mechanics problems,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. “In this work we demonstrate that these tools can be extended to study the mechanics of transformable, inflatable systems.”
    The research is published in Advanced Functional Materials.
    The research team began by dividing an inflatable membrane into a 10×10 grid of 100 square pixels that can be either soft or stiff. The soft and stiff pixels can be combined in an almost infinite variety of configurations, making manual programming extremely difficult. That’s where machine learning comes in.
    The researchers used what’s known as finite element simulations to sample this infinite design space. Then neural networks used that sample to learn how the location of soft and stiff pixels controls the deformation of the membrane when it is pressurized.
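    The sampling step matters because 100 binary (soft or stiff) pixels admit 2^100, roughly 1.3 x 10^30, possible layouts, far too many to simulate exhaustively. The sketch below mimics the described workflow under stated stand-ins: random 10x10 designs are "simulated" by a placeholder function (in place of the finite element step), and a small neural network learns the design-to-deformation map. The placeholder physics, sample size and network are illustrative only.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def placeholder_fem(design):
    """Stand-in for the finite element simulation: a scalar 'deformation' that
    depends on how much stiff material sits near the centre of the membrane."""
    weights = np.fromfunction(
        lambda i, j: 1.0 / (1.0 + (i - 4.5) ** 2 + (j - 4.5) ** 2), (10, 10))
    return float((design * weights).sum())

# Sample the design space: 0 = soft pixel, 1 = stiff pixel.
designs = rng.integers(0, 2, size=(2000, 10, 10))
X = designs.reshape(2000, 100).astype(float)
y = np.array([placeholder_fem(d) for d in designs])

# Train a small surrogate that maps pixel layout -> deformation.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

test = rng.integers(0, 2, size=(10, 10))
print("surrogate prediction:", surrogate.predict(test.reshape(1, 100).astype(float))[0])
print("placeholder FEM value:", placeholder_fem(test))
```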

  • New cloud-based platform opens genomics data to all

    Harnessing the power of genomics to find risk factors for major diseases or search for relatives relies on the costly and time-consuming ability to analyze huge numbers of genomes. A team co-led by a Johns Hopkins University computer scientist has leveled the playing field by creating a cloud-based platform that grants genomics researchers easy access to one of the world’s largest genomics databases.
    Known as AnVIL (Genomic Data Science Analysis, Visualization, and Informatics Lab-space), the new platform gives any researcher with an Internet connection access to thousands of analysis tools, patient records, and more than 300,000 genomes. The work, a project of the National Human Genome Research Institute (NHGRI), appears today in Cell Genomics.
    “AnVIL is inverting the model of genomics data sharing, offering unprecedented new opportunities for science by connecting researchers and datasets in new ways and promising to enable exciting new discoveries,” said project co-leader Michael Schatz, Bloomberg Distinguished Professor of Computer Science and Biology at Johns Hopkins.
    Typically genomic analysis starts with researchers downloading massive amounts of data from centralized warehouses to their own data centers, a process that is not only time-consuming, inefficient, and expensive, but also makes collaborating with researchers at other institutions difficult.
    “AnVIL will be transformative for institutions of all sizes, especially smaller institutions that don’t have the resources to build their own data centers. It is our hope that AnVIL levels the playing field, so that everyone has equal access to make discoveries,” Schatz said.
    Genetic risk factors for ailments such as cancer or cardiovascular disease are often very subtle, requiring researchers to analyze thousands of patients’ genomes to discover new associations. The raw data for a single human genome comprises about 40 GB, so downloading thousands of genomes can take several days to several weeks: A single genome requires about 10 DVDs worth of data, so transferring thousands means moving “tens of thousands of DVDs worth of data,” Schatz said.
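    The arithmetic behind those comparisons, using the figures quoted above plus a hypothetical 1 Gbit/s link purely for scale:
```python
genome_gb = 40          # raw data per genome, from the article
dvd_gb = 4.7            # capacity of a single-layer DVD
genomes = 2_000         # a "thousands of genomes" download

print(f"DVDs per genome: {genome_gb / dvd_gb:.1f}")   # ~8.5, i.e. roughly 10

total_gb = genomes * genome_gb
seconds = total_gb * 8 / 1.0        # at a hypothetical 1 Gbit/s, no overhead
print(f"{genomes:,} genomes = {total_gb / 1000:,.0f} TB "
      f"= {total_gb / dvd_gb:,.0f} DVDs "
      f"= {seconds / 86400:.1f} days at 1 Gbit/s")
```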