More stories

  • Enhanced statistical models will aid conservation of killer whales and other species

    Ecologists need to understand wild animal behaviours in order to conserve species, but following animals around can be expensive, dangerous, or sometimes impossible in the case of animals that move underwater or into areas we can’t reach easily.
    Scientists turned to the next best thing: bio-logging devices that can be attached to animals and capture information about movement, breathing rate, heart rate, and more.
    However, retrieving an accurate picture of what a tagged animal does as it journeys through its environment requires statistical analysis, especially when it comes to animal movement. The methods statisticians use are always evolving to make full use of the large and complex data sets that are available.
    A recent study by researchers at the Institute for the Oceans and Fisheries (IOF) and the UBC department of statistics has taken us a step closer to understanding the behaviours of northern resident killer whales by improving statistical tools useful for identifying animal behaviours that can’t be observed directly.
    “The thing we really tackled with this paper was trying to get at some of those fine-scale behaviours that aren’t that easy to model,” said Evan Sidrow, a doctoral student in the department of statistics and the study’s lead author. “It’s a matter of finding behaviours on the order of seconds — maybe 10 to 15 seconds. Usually, it’s a matter of a whale looking around, and then actively swimming for a second to get over to a new location. We are trying to observe fleeting behaviours, like a whale catching a fish.”
    The research team improved a statistical tool that is based on what is called a hidden Markov model, which is helpful for unlocking the mysteries hidden inside animal movement datasets.
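    The core idea of a hidden Markov model can be made concrete in a few lines: the behaviour state at each time step is hidden, only a movement measurement is observed, and the Viterbi algorithm recovers the most likely state sequence. Below is a minimal, self-contained sketch; the states, transition probabilities, and observation categories are invented for illustration and are not the study's actual model.

```python
import math

# Toy hidden Markov model: the behaviour states are never observed
# directly; the tag only records a discretised movement level per second.
# All numbers below are illustrative, not fitted to real whale data.
states = ["resting", "foraging"]
start = {"resting": 0.6, "foraging": 0.4}
trans = {
    "resting": {"resting": 0.9, "foraging": 0.1},
    "foraging": {"resting": 0.2, "foraging": 0.8},
}
# Emission probabilities: how likely each acceleration level is per state.
emit = {
    "resting": {"low": 0.8, "medium": 0.15, "high": 0.05},
    "foraging": {"low": 0.1, "medium": 0.3, "high": 0.6},
}

def viterbi(obs):
    """Most likely hidden state sequence (log-space to avoid underflow)."""
    v = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prev, lp = max(
                ((p, v[-1][p] + math.log(trans[p][s])) for p in states),
                key=lambda x: x[1],
            )
            row[s] = lp + math.log(emit[s][o])
            ptr[s] = prev
        v.append(row)
        back.append(ptr)
    # Trace the best path backwards from the most likely final state.
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = ["low", "low", "high", "high", "medium", "low"]
print(viterbi(obs))
```

    Run on the toy observation sequence, the decoder labels the burst of high-acceleration readings as a short foraging bout bracketed by resting, which is exactly the kind of fleeting, seconds-scale behaviour the researchers describe trying to identify.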

  • Bone growth inspired 'microrobots' that can create their own bone

    Inspired by the growth of bones in the skeleton, researchers at the universities of Linköping in Sweden and Okayama in Japan have developed a combination of materials that can morph into various shapes before hardening. The material is initially soft, but later hardens through a bone development process that uses the same materials found in the skeleton.
    When we are born, we have gaps in our skulls that are covered by pieces of soft connective tissue called fontanelles. It is thanks to fontanelles that our skulls can be deformed during birth and pass successfully through the birth canal. Post-birth, the fontanelle tissue gradually changes to hard bone. Now, researchers have combined materials which together resemble this natural process.
    “We want to use this for applications where materials need to have different properties at different points in time. Firstly, the material is soft and flexible, and it is then locked into place when it hardens. This material could be used in, for example, complicated bone fractures. It could also be used in microrobots — these soft microrobots could be injected into the body through a thin syringe, and then they would unfold and develop their own rigid bones,” says Edwin Jager, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University.
    The idea was hatched during a research visit in Japan when materials scientist Edwin Jager met Hiroshi Kamioka and Emilio Hara, who conduct research into bones. The Japanese researchers had discovered a kind of biomolecule that could stimulate bone growth in a short period of time. Would it be possible to combine this biomolecule with Jager’s materials research to develop new materials with variable stiffness?
    In the study that followed, published in Advanced Materials, the researchers constructed a kind of simple “microrobot,” one which can assume different shapes and change stiffness. The researchers began with a gel material called alginate. On one side of the gel, a polymer material is grown. This material is electroactive, and it changes its volume when a low voltage is applied, causing the microrobot to bend in a specified direction. On the other side of the gel, the researchers attached biomolecules that allow the soft gel material to harden. These biomolecules are extracted from the cell membrane of a kind of cell that is important for bone development. When the material is immersed in a cell culture medium — an environment that resembles the body and contains calcium and phosphorus — the biomolecules make the gel mineralise and harden like bone.
    One potential application of interest to the researchers is bone healing. The idea is that the soft material, powered by the electroactive polymer, will be able to manoeuvre itself into spaces in complicated bone fractures and expand. When the material has then hardened, it can form the foundation for the construction of new bone. In their study, the researchers demonstrate that the material can wrap itself around chicken bones, and the artificial bone that subsequently develops grows together with the chicken bone.
    By making patterns in the gel, the researchers can determine how the simple microrobot will bend when voltage is applied. Perpendicular lines on the surface of the material make the robot bend in a semicircle, while diagonal lines make it bend like a corkscrew.
    “By controlling how the material turns, we can make the microrobot move in different ways, and also affect how the material unfurls in broken bones. We can embed these movements into the material’s structure, making complex programmes for steering these robots unnecessary,” says Edwin Jager.
    In order to learn more about the biocompatibility of this combination of materials, the researchers are now looking further into how its properties work together with living cells.
    The research was carried out with financial support from organisations including the Japanese Society for the Promotion of Science (JSPS) Bridge Fellowship program and KAKENHI, the Swedish Research Council, Promobilia and STINT (Swedish Foundation for International Cooperation in Research and Higher Education).
    Story Source:
    Materials provided by Linköping University. Original written by Karin Söderlund Leifler. Note: Content may be edited for style and length.

  • How to make sure digital technology works for the public good

    The Internet of Things (IoT) is completely enmeshed in our daily lives, a network of connected laptops, phones, cars, fitness trackers — even smart toasters and refrigerators — that are increasingly able to make decisions on their own. But how do we ensure that these devices benefit us, rather than exploit us or put us at risk? New work, led by Francine Berman at the University of Massachusetts Amherst, proposes a novel framework, the “impact universe,” that can help policymakers keep the public interest in focus amidst the rush to adopt ever-new digital technology.
    “How,” asks Berman, Stuart Rice Honorary Chair and Research Professor in UMass Amherst’s Manning College of Information and Computer Sciences (CICS), “can we ensure that technology works for us, rather than the other way around?” Berman, lead author of a new paper recently published in the journal Patterns, and her co-authors sketch out what they call the “impact universe” — a way for policymakers and others to think “holistically about the potential impacts of societal controls for systems and devices in the IoT.”
    One of the wonders of modern digital technology is that it increasingly makes decisions for us on its own. But, as Berman puts it, “technology needs adult supervision.”
    The impact universe is a way of holistically sketching out all the competing implications of a given technology, taking into consideration environmental, social, economic and other impacts to develop effective policy, law and other societal controls. Instead of focusing on a single desirable outcome, sustainability, say, or profit, the impact universe allows us to see that some outcomes will come at the cost of others.
    “The model reflects the messiness of real life and how we make decisions,” says Berman, but it brings clarity to that messiness so that decision makers can see and debate the tradeoffs and benefits of different social controls to regulate technology. The framework allows decision makers to be more deliberate in their policy-making and to better focus on the common good.
    Berman is at the forefront of an emerging field called public interest technology (PIT), and she’s building an initiative at UMass Amherst that unites campus students and scholars whose work is empowered by technology and focused on social responsibility. The ultimate goal of PIT is to develop the knowledge and critical thinking needed to create a society capable of effectively managing the digital ecosystem that powers our daily lives.
    Berman’s co-authors, Emilia Cabrera, Ali Jebari and Wassim Marrakchi, were Harvard undergraduates and worked with Berman on the paper during her Radcliffe Fellowship at Harvard. The fellowship gave Berman a chance to work broadly with a multidisciplinary group of scholars and thinkers, and to appreciate the importance of designing, developing, and framing societal controls so that technology promotes the public benefit.
    “The real world is complex and there are always competing priorities,” says Berman. “Tackling this complexity head-on by taking the universe of potential technology impacts into account is critical if we want digital technologies to serve society rather than overwhelm it.”
    Story Source:
    Materials provided by University of Massachusetts Amherst.

  • Creating a reference map to explore the electronic device mimicking brain activity

    Maps are essential for exploring trackless wilderness or vast expanses of ocean. The same is true for scientific studies that try to open up new fields and develop brand-new devices. A journey without maps and signposts tends to end in vain.
    In the world of “neuromorphic devices,” electronic devices that mimic neural cells such as those in our brain, researchers have long been forced to travel without maps. Such devices will lead to a fresh field of brain-inspired computers with substantial benefits such as low energy consumption. But their operating mechanism has remained unclear, particularly with regard to controlling the response speed.
    A research group from Tohoku University and the University of Cambridge brought clarity in a recent study published in the journal Advanced Electronic Materials on January 13, 2022.
    They looked into organic electrochemical transistors (OECTs), which are often used in neuromorphic devices and control the movement of ions in the active layer. The analysis revealed that the response timescale depends on the size of the ions in the electrolyte.
    Based on these experimental results, the group modeled the neuromorphic response of the devices. Comparisons of the data showed that movements of the ions in the OECT controlled the response. This indicates that tuning the timescale of ion movement can be an effective way to regulate the neuromorphic behavior of OECTs.
    “We obtained a map that provides rational design guidelines for neuromorphic devices through changing ion size and material composition in the active layer,” said Shunsuke Yamamoto, paper corresponding author and assistant professor at Tohoku University’s Graduate School of Engineering. “Further studies will pave the way for application to artificial neural networks and lead to better and more precise designs of the conducting polymer materials used in this field.”
    Story Source:
    Materials provided by Tohoku University.

  • Mathematical model may help improve treatments and clinical trials of patients with COVID-19 and other illnesses

    Investigators who recently developed a mathematical model that indicated why treatment responses vary widely among individuals with COVID-19 have now used the model to identify biological markers related to these different responses. The team, which was led by scientists at Massachusetts General Hospital (MGH) and the University of Cyprus, notes that the model can be used to provide a better understanding of the complex interactions between illness and response and can help clinicians provide optimal care for diverse patients.
    The work, which is published in EBioMedicine, was initiated because COVID-19 is extremely heterogeneous, meaning that illness following SARS-CoV-2 infection ranges from asymptomatic to life-threatening conditions such as respiratory failure or acute respiratory distress syndrome (ARDS), in which fluid collects in the lungs. “Even within the subset of critically ill COVID-19 patients who develop ARDS, there exists substantial heterogeneity. Significant efforts have been made to identify subtypes of ARDS defined by clinical features or biomarkers,” explains co-senior author Rakesh K. Jain, PhD, director of the E.L. Steele Laboratories for Tumor Biology at MGH and the Andrew Werk Cook Professor of Radiation Oncology at Harvard Medical School (HMS). “To predict disease progression and personalize treatment, it is necessary to determine the associations among clinical features, biomarkers and underlying biology. Although this can be achieved over the course of numerous clinical trials, this process is time-consuming and extremely expensive.”
    As an alternative, Jain and his colleagues used their model to analyze the effects that different patient characteristics yield on outcomes following treatment with different therapies. This allowed the team to determine the optimal treatment for distinct categories of patients, reveal biologic pathways responsible for different clinical responses, and identify markers of these pathways.
    The researchers simulated six patient types (defined by the presence or absence of different comorbidities) and three types of therapies that modulate the immune system. “Using a novel treatment efficacy scoring system, we found that older and hyperinflamed patients respond better to immunomodulation therapy than obese and diabetic patients,” says co-senior and corresponding author Lance Munn, PhD, who is the deputy director of the Steele Labs and an associate professor at HMS. “We also found that the optimal time to initiate immunomodulation therapy differs between patients and also depends on the drug itself.” Certain biological markers that differed based on patient characteristics determined optimal treatment initiation time, and these markers pointed to particular biologic programs or mechanisms that impacted a patient’s outcome. The markers also matched clinically identified markers of disease severity.
    For COVID-19 as well as other conditions, the team’s approach could enable investigators to enrich a clinical trial with patients most likely to respond to a given drug. “Such enrichment based on prospectively predicted biomarkers is a potential strategy for increasing precision of clinical trials and accelerating therapy development,” says co-senior author Triantafyllos Stylianopoulos, PhD, an associate professor at the University of Cyprus.
    Other co-authors include Sonu Subudhi, Chrysovalantis Voutouri, C. Corey Hardin, Mohammad Reza Nikmaneshi, Melin J. Khandekar and Sayon Dutta from MGH; and Ankit B. Patel and Ashish Verma from Brigham and Women’s Hospital.
    Funding for the study was provided by the National Institutes of Health, Harvard Ludwig Cancer Center, Niles Albright Research Foundation and Jane’s Trust Foundation. Voutouri is a recipient of a Marie Skłodowska-Curie Actions Individual Fellowship.
    Story Source:
    Materials provided by Massachusetts General Hospital.

  • The first AI breast cancer sleuth that shows its work

    Computer engineers and radiologists at Duke University have developed an artificial intelligence platform to analyze potentially cancerous lesions in mammography scans to determine if a patient should receive an invasive biopsy. But unlike its many predecessors, this algorithm is interpretable, meaning it shows physicians exactly how it came to its conclusions.
    The researchers trained the AI to locate and evaluate lesions just like an actual radiologist would be trained, rather than allowing it to freely develop its own procedures, giving it several advantages over its “black box” counterparts. It could make for a useful training platform to teach students how to read mammography images. It could also help physicians in sparsely populated regions around the world who do not regularly read mammography scans make better health care decisions.
    The results appeared online December 15 in the journal Nature Machine Intelligence.
    “If a computer is going to help make important medical decisions, physicians need to trust that the AI is basing its conclusions on something that makes sense,” said Joseph Lo, professor of radiology at Duke. “We need algorithms that not only work, but explain themselves and show examples of what they’re basing their conclusions on. That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.”
    Engineering AI that reads medical images is a huge industry. Thousands of independent algorithms already exist, and the FDA has approved more than 100 of them for clinical use. Whether reading MRI, CT or mammogram scans, however, very few of them use validation datasets with more than 1000 images or contain demographic information. This dearth of information, coupled with the recent failures of several notable examples, has led many physicians to question the use of AI in high-stakes medical decisions.
    In one instance, an AI model failed even when researchers trained it with images taken from different facilities using different equipment. Rather than focusing exclusively on the lesions of interest, the AI learned to use subtle differences introduced by the equipment itself to recognize images coming from the cancer ward, and it assigned those lesions a higher probability of being cancerous. As one would expect, the AI did not transfer well to other hospitals using different equipment. But because nobody knew what the algorithm was looking at when making decisions, nobody knew it was destined to fail in real-world applications.
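    This failure mode, often called shortcut learning, is easy to reproduce on synthetic data. The sketch below is a hypothetical illustration, not the model from the story: a trivially simple "model" (a single-feature threshold rule) is fit on data where a scanner identifier perfectly predicts the diagnosis at the training hospital, so it ignores the genuine signal (lesion size) and collapses to chance at a hospital where scanner assignment is unrelated to diagnosis.

```python
import random

random.seed(0)

def make_data(n, scanner_for_cancer):
    """Synthetic records: ((lesion_size, scanner_id), is_cancer)."""
    data = []
    for _ in range(n):
        is_cancer = random.random() < 0.5
        # True signal: cancerous lesions tend to be larger (with noise).
        size = random.gauss(14 if is_cancer else 10, 3)
        data.append(((size, scanner_for_cancer(is_cancer)), is_cancer))
    return data

# Training site: the cancer ward uses scanner 1, screening uses scanner 0,
# so scanner_id is a spurious shortcut that perfectly predicts the label.
train = make_data(2000, lambda c: 1 if c else 0)
# New hospital: scanner assignment is unrelated to the diagnosis.
test = make_data(2000, lambda c: random.randint(0, 1))

def stump_accuracy(data, feature, threshold):
    """Accuracy of the rule: predict cancer iff x[feature] > threshold."""
    return sum((x[feature] > threshold) == y for x, y in data) / len(data)

# "Black box" fitting: pick whichever single-feature rule fits training best.
candidates = [(f, t) for f in (0, 1) for t in (0.5, 8, 10, 12, 14)]
best = max(candidates, key=lambda ft: stump_accuracy(train, *ft))

print("chosen feature:", "scanner_id" if best[0] == 1 else "lesion_size")
print("train accuracy: %.2f" % stump_accuracy(train, *best))
print("test accuracy at new hospital: %.2f" % stump_accuracy(test, *best))
```

    The shortcut rule scores perfectly at the training hospital and near 50% at the new one, while an interpretable model would let a physician see immediately that the decision hinged on the scanner rather than the lesion.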

  • Quantum particles can feel the influence of gravitational fields they never touch

    If you’re superstitious, a black cat in your path is bad luck, even if you keep your distance. Likewise, in quantum physics, particles can feel the influence of magnetic fields that they never come into direct contact with. Now scientists have shown that this eerie quantum effect holds not just for magnetic fields, but for gravity too — and it’s no superstition.

    Usually, to feel the influence of a magnetic field, a particle would have to pass through it. But in 1959, physicists Yakir Aharonov and David Bohm predicted that, in a specific scenario, the conventional wisdom would fail. A magnetic field contained within a cylindrical region can affect particles — electrons, in their example — that never enter the cylinder. In this scenario, the electrons don’t have well-defined locations, but are in “superpositions,” quantum states described by the odds of a particle materializing in two different places. Each fractured particle simultaneously takes two different paths around the magnetic cylinder. Despite never touching the electrons, and hence exerting no force on them, the magnetic field shifts the pattern of where particles are found at the end of this journey, as various experiments have confirmed (SN: 3/1/86).
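    The size of this shift has a compact standard form. For a particle of charge q whose two superposed paths together enclose a magnetic flux Φ_B, the relative phase acquired between the paths is

```latex
\Delta\varphi \;=\; \frac{q}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell} \;=\; \frac{q\,\Phi_B}{\hbar}
```

    where A is the electromagnetic vector potential. The phase depends only on the enclosed flux, even though the magnetic field, and hence any force, vanishes everywhere along both paths, which is why the shifted interference pattern is so striking.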

    In the new experiment, the same uncanny physics is at play for gravitational fields, physicists report in the Jan. 14 Science. “Every time I look at this experiment, I’m like, ‘It’s amazing that nature is that way,’” says physicist Mark Kasevich of Stanford University.

    Kasevich and colleagues launched rubidium atoms inside a 10-meter-tall vacuum chamber, hit them with lasers to put them in quantum superpositions tracing two different paths, and watched how the atoms fell. Notably, the particles weren’t in a gravitational field–free zone. Instead, the experiment was designed so that the researchers could filter out the effects of gravitational forces, laying bare the eerie Aharonov-Bohm influence.

    The study not only reveals a famed physics effect in a new context, but also showcases the potential to study subtle effects in gravitational systems. For example, researchers aim to use this type of technique to better measure Newton’s gravitational constant, G, which reveals the strength of gravity, and is currently known less precisely than other fundamental constants of nature (SN: 8/29/18).

    A phenomenon called interference is key to this experiment. In quantum physics, atoms and other particles behave like waves that can add and subtract, just as two swells merging in the ocean make a larger wave. At the end of the atoms’ flight, the scientists recombined the atoms’ two paths so their waves would interfere, then measured where the atoms arrived. The arrival locations are highly sensitive to tweaks that alter where the peaks and troughs of the waves land, known as phase shifts.

    At the top of the vacuum chamber, the researchers placed a hunk of tungsten with a mass of 1.25 kilograms. To isolate the Aharonov-Bohm effect, the scientists performed the same experiment with and without this mass, and for two different sets of launched atoms, one which flew close to the mass, and the other lower. Each of those two sets of atoms was split into superpositions, with one path traveling closer to the mass than the other, separated by about 25 centimeters. Other sets of atoms, with superpositions split across smaller distances, rounded out the crew. Comparing how the various sets of atoms interfered, both with and without the tungsten mass, teased out a phase shift that was not due to the gravitational force. Instead, that tweak was from time dilation, a feature of Einstein’s theory of gravity, general relativity, which causes time to pass more slowly close to a massive object.
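    In a simplified textbook picture (not the paper's full analysis), the measured quantity is a phase shift set by the difference in gravitational potential, equivalently in elapsed proper time, between the two arms of the superposition:

```latex
\Delta\varphi \;=\; \frac{1}{\hbar}\int m\,\Delta U(t)\,dt \;\approx\; \frac{m\,g\,\Delta h\,T}{\hbar}
```

    Here m is the atomic mass, ΔU the potential difference between arms separated by height Δh (with local gravitational gradient g near the tungsten mass), and T the time spent in superposition. Because ΔU can be appreciable even where the differential force on the atoms is negligible, a phase shift survives when no net force acts, the gravitational counterpart of the Aharonov-Bohm effect.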

    The two theories that underlie this experiment, general relativity and quantum mechanics, don’t work well together. Scientists don’t know how to combine them to describe reality. So, for physicists, says Guglielmo Tino of the University of Florence, who was not involved with the new study, “probing gravity with a quantum sensor, I think it’s really one of … the most important challenges at the moment.”

  • AI accurately diagnoses prostate cancer, study shows

    Researchers at Karolinska Institutet in Sweden have, together with international collaborators, completed a comprehensive international validation of artificial intelligence (AI) for diagnosing and grading prostate cancer. The study, published in Nature Medicine, shows that AI systems can identify and grade prostate cancer in tissue samples from different countries as well as pathologists can. The results suggest AI systems are ready to be responsibly introduced as a complementary tool in prostate cancer care, the researchers say.
    The international validation was performed via a competition called PANDA. The competition lasted for three months and challenged more than 1000 AI experts to develop systems for accurately grading prostate cancer.
    Rapid innovation
    “Only ten days into the competition, algorithms matching average pathologists were developed. Organising PANDA shows how competitions can accelerate rapid innovation for solving specific problems in healthcare with the help of AI,” says Kimmo Kartasalo, a researcher at the Department of Medical Epidemiology and Biostatistics at Karolinska Institutet and corresponding author of the study.
    A problem in today’s prostate cancer diagnostics is that different pathologists can arrive at different conclusions even for the same tissue samples, which means that treatment decisions are based on uncertain information. The researchers believe the use of AI technology holds great potential for improved reproducibility, that is, increased consistency of the assessments of tissue samples irrespective of which pathologist performs the evaluation, leading to more accurate treatment selection.
    Accurate diagnostics
    The KI researchers have shown in earlier studies that AI systems can indicate whether a tissue sample contains cancer, estimate the amount of tumour tissue in a biopsy, and grade the severity of prostate cancer comparably to international experts. However, the main challenge in implementing AI in healthcare is that AI systems are often highly sensitive to data that differ from the data used to train them, and may consequently fail to produce reliable and robust results when applied in other hospitals and other countries.