More stories

  • Bone growth inspired 'microrobots' that can create their own bone

    Inspired by the growth of bones in the skeleton, researchers at Linköping University in Sweden and Okayama University in Japan have developed a combination of materials that can morph into various shapes before hardening. The material is initially soft, but then hardens through a bone-development process that uses the same materials found in the skeleton.
    When we are born, we have gaps in our skulls that are covered by pieces of soft connective tissue called fontanelles. It is thanks to fontanelles that our skulls can deform during birth and pass successfully through the birth canal. After birth, the fontanelle tissue gradually changes into hard bone. Now, researchers have combined materials that together mimic this natural process.
    “We want to use this for applications where materials need to have different properties at different points in time. Firstly, the material is soft and flexible, and it is then locked into place when it hardens. This material could be used in, for example, complicated bone fractures. It could also be used in microrobots — these soft microrobots could be injected into the body through a thin syringe, and then they would unfold and develop their own rigid bones,” says Edwin Jager, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University.
    The idea was hatched during a research visit in Japan, when materials scientist Edwin Jager met Hiroshi Kamioka and Emilio Hara, who conduct research into bones. The Japanese researchers had discovered a kind of biomolecule that could stimulate bone growth in a short period of time. Would it be possible to combine this biomolecule with Jager’s materials research to develop new materials with variable stiffness?
    In the study that followed, published in Advanced Materials, the researchers constructed a simple kind of “microrobot,” one that can assume different shapes and change stiffness. The researchers began with a gel material called alginate. On one side of the gel, a polymer material is grown. This material is electroactive, and it changes its volume when a low voltage is applied, causing the microrobot to bend in a specified direction. On the other side of the gel, the researchers attached biomolecules that allow the soft gel material to harden. These biomolecules are extracted from the cell membrane of a kind of cell that is important for bone development. When the material is immersed in a cell culture medium — an environment that resembles the body and contains calcium and phosphorus — the biomolecules make the gel mineralise and harden like bone.
    One potential application of interest to the researchers is bone healing. The idea is that the soft material, powered by the electroactive polymer, will be able to manoeuvre itself into spaces in complicated bone fractures and expand. When the material has then hardened, it can form the foundation for the construction of new bone. In their study, the researchers demonstrate that the material can wrap itself around chicken bones, and the artificial bone that subsequently develops grows together with the chicken bone.
    By making patterns in the gel, the researchers can determine how the simple microrobot will bend when voltage is applied. Perpendicular lines on the surface of the material make the robot bend in a semicircle, while diagonal lines make it bend like a corkscrew.
    “By controlling how the material turns, we can make the microrobot move in different ways, and also affect how the material unfurls in broken bones. We can embed these movements into the material’s structure, making complex programmes for steering these robots unnecessary,” says Edwin Jager.
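    The pattern-to-motion idea described above can be summarized as a toy lookup table. The sketch below is purely illustrative: the pattern names and modes come from the article's description, but the function itself is a hypothetical sketch, not the researchers' code.

    ```python
    # Toy illustration of how a surface pattern etched into the gel might map
    # to a bending mode when a low voltage actuates the electroactive layer.
    # This is a hypothetical sketch, not code from the study.

    PATTERN_TO_MODE = {
        "perpendicular lines": "semicircle",  # lines across the strip
        "diagonal lines": "corkscrew",        # angled lines cause twisting
    }

    def bending_mode(pattern: str, voltage_on: bool) -> str:
        """Expected shape of the microrobot for a given surface pattern."""
        if not voltage_on:
            return "flat"  # no actuation without an applied voltage
        return PATTERN_TO_MODE.get(pattern, "unknown pattern")

    print(bending_mode("perpendicular lines", True))  # semicircle
    print(bending_mode("diagonal lines", True))       # corkscrew
    ```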
    In order to learn more about the biocompatibility of this combination of materials, the researchers are now looking further into how its properties work together with living cells.
    The research was carried out with financial support from organisations including the Japan Society for the Promotion of Science (JSPS) Bridge Fellowship program and KAKENHI, the Swedish Research Council, Promobilia and STINT (the Swedish Foundation for International Cooperation in Research and Higher Education).
    Story Source:
    Materials provided by Linköping University. Original written by Karin Söderlund Leifler. Note: Content may be edited for style and length.

  • How to make sure digital technology works for the public good

    The Internet of Things (IoT) is completely enmeshed in our daily lives, a network of connected laptops, phones, cars, fitness trackers — even smart toasters and refrigerators — that are increasingly able to make decisions on their own. But how can we ensure that these devices benefit us, rather than exploit us or put us at risk? New work, led by Francine Berman at the University of Massachusetts Amherst, proposes a novel framework, the “impact universe,” that can help policymakers keep the public interest in focus amidst the rush to adopt ever-new digital technology.
    “How,” asks Berman, Stuart Rice Honorary Chair and Research Professor in UMass Amherst’s Manning College of Information and Computer Sciences (CICS), “can we ensure that technology works for us, rather than the other way around?” Berman, lead author of a new paper recently published in the journal Patterns, and her co-authors sketch out what they call the “impact universe” — a way for policymakers and others to think “holistically about the potential impacts of societal controls for systems and devices in the IoT.”
    One of the wonders of modern digital technology is that it increasingly makes decisions for us on its own. But, as Berman puts it, “technology needs adult supervision.”
    The impact universe is a way of holistically sketching out all the competing implications of a given technology, taking into consideration environmental, social, economic and other impacts to develop effective policy, law and other societal controls. Instead of focusing on a single desirable outcome, sustainability, say, or profit, the impact universe allows us to see that some outcomes will come at the cost of others.
    “The model reflects the messiness of real life and how we make decisions,” says Berman, but it brings clarity to that messiness so that decision makers can see and debate the tradeoffs and benefits of different social controls to regulate technology. The framework allows decision makers to be more deliberate in their policy-making and to better focus on the common good.
    Berman is at the forefront of an emerging field called public interest technology (PIT), and she’s building an initiative at UMass Amherst that unites campus students and scholars whose work is empowered by technology and focused on social responsibility. The ultimate goal of PIT is to develop the knowledge and critical thinking needed to create a society capable of effectively managing the digital ecosystem that powers our daily lives.
    Berman’s co-authors, Emilia Cabrera, Ali Jebari and Wassim Marrakchi, were Harvard undergraduates and worked with Berman on the paper during her Radcliffe Fellowship at Harvard. The fellowship gave Berman a chance to work broadly with a multidisciplinary group of scholars and thinkers, and to appreciate the importance of designing, developing, and framing societal controls so that technology promotes the public benefit.
    “The real world is complex and there are always competing priorities,” says Berman. “Tackling this complexity head-on by taking the universe of potential technology impacts into account is critical if we want digital technologies to serve society rather than overwhelm it.”
    Story Source:
    Materials provided by University of Massachusetts Amherst. Note: Content may be edited for style and length.

  • Creating a reference map to explore electronic devices that mimic brain activity

    Maps are essential for exploring trackless wilderness or vast expanses of ocean. The same is true for scientific studies that try to open up new fields and develop brand-new devices. A journey without maps and signposts is likely to be in vain.
    In the world of “neuromorphic devices,” electronic devices that mimic the neural cells of the brain, researchers have long been forced to travel without maps. Such devices will lead to a fresh field of brain-inspired computers with substantial benefits such as low energy consumption. But their operating mechanism has remained unclear, particularly with regard to controlling the response speed.
    A research group from Tohoku University and the University of Cambridge brought clarity in a recent study published in the journal Advanced Electronic Materials on January 13, 2022.
    They looked into organic electrochemical transistors (OECTs), which are often used in neuromorphic devices and control the movement of ions in the active layer. The analysis revealed that the response timescale depends on the size of the ions in the electrolyte.
    Based on these experimental results, the group modeled the neuromorphic response of the devices. Comparisons with the data showed that the movement of ions in the OECT controlled the response. This indicates that tuning the timescale of ion movement can be an effective way to regulate the neuromorphic behavior of OECTs.
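    As a rough illustration of the finding that ion size sets the response timescale, here is a minimal first-order (exponential) response sketch. The ion radii, the scaling constant, and the linear size-to-timescale rule are all assumptions for illustration, not values from the paper.

    ```python
    # A minimal sketch: treat the OECT's neuromorphic response as a
    # first-order relaxation whose time constant grows with ion size.
    # Ion radii and the scaling constant K are hypothetical placeholders.

    import math

    def response_fraction(t: float, tau: float) -> float:
        """Fraction of the full conductance change reached after t seconds."""
        return 1.0 - math.exp(-t / tau)

    K = 2.0  # assumed seconds of time constant per nm of ion radius
    ion_radii_nm = {"small ion": 0.10, "medium ion": 0.25, "large ion": 0.45}

    for name, radius in ion_radii_nm.items():
        tau = K * radius  # larger ions -> slower response
        print(f"{name}: tau = {tau:.2f} s, "
              f"response at 0.5 s = {response_fraction(0.5, tau):.2f}")
    ```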
    “We obtained a map that provides rational design guidelines for neuromorphic devices through changing ion size and material composition in the active layer,” said Shunsuke Yamamoto, paper corresponding author and assistant professor at Tohoku University’s Graduate School of Engineering. “Further studies will pave the way for application to artificial neural networks and lead to better and more precise designs of the conducting polymer materials used in this field.”
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  • Mathematical model may help improve treatments and clinical trials of patients with COVID-19 and other illnesses

    Investigators who recently developed a mathematical model that indicated why treatment responses vary widely among individuals with COVID-19 have now used the model to identify biological markers related to these different responses. The team, which was led by scientists at Massachusetts General Hospital (MGH) and the University of Cyprus, notes that the model can be used to provide a better understanding of the complex interactions between illness and response and can help clinicians provide optimal care for diverse patients.
    The work, which is published in EBioMedicine, was initiated because COVID-19 is extremely heterogeneous, meaning that illness following SARS-CoV-2 infection ranges from asymptomatic to life-threatening conditions such as respiratory failure or acute respiratory distress syndrome (ARDS), in which fluid collects in the lungs. “Even within the subset of critically ill COVID-19 patients who develop ARDS, there exists substantial heterogeneity. Significant efforts have been made to identify subtypes of ARDS defined by clinical features or biomarkers,” explains co-senior author Rakesh K. Jain, PhD, director of the E.L. Steele Laboratories for Tumor Biology at MGH and the Andrew Werk Cook Professor of Radiation Oncology at Harvard Medical School (HMS). “To predict disease progression and personalize treatment, it is necessary to determine the associations among clinical features, biomarkers and underlying biology. Although this can be achieved over the course of numerous clinical trials, this process is time-consuming and extremely expensive.”
    As an alternative, Jain and his colleagues used their model to analyze how different patient characteristics affect outcomes following treatment with different therapies. This allowed the team to determine the optimal treatment for distinct categories of patients, reveal the biological pathways responsible for different clinical responses, and identify markers of these pathways.
    The researchers simulated six patient types (defined by the presence or absence of different comorbidities) and three types of therapies that modulate the immune system. “Using a novel treatment efficacy scoring system, we found that older and hyperinflamed patients respond better to immunomodulation therapy than obese and diabetic patients,” says co-senior and corresponding author Lance Munn, PhD, who is the deputy director of the Steele Labs and an associate professor at HMS. “We also found that the optimal time to initiate immunomodulation therapy differs between patients and also depends on the drug itself.” Certain biological markers that differed based on patient characteristics determined optimal treatment initiation time, and these markers pointed to particular biologic programs or mechanisms that impacted a patient’s outcome. The markers also matched clinically identified markers of disease severity.
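    The paper's actual model and efficacy scoring system are not reproduced in the article, but the shape of the experiment (six simulated patient types crossed with three immunomodulation therapies, each pair receiving an efficacy score) can be sketched as follows. All names and scores below are placeholders, not results from the study.

    ```python
    # Illustrative only: placeholder patient types, therapies, and scores
    # standing in for the study's simulated cohorts and scoring system.

    import itertools
    import random

    patient_types = ["healthy", "older", "hyperinflamed", "obese",
                     "diabetic", "obese+diabetic"]   # hypothetical labels
    therapies = ["therapy A", "therapy B", "therapy C"]

    random.seed(0)
    scores = {(p, t): random.random()  # placeholder efficacy scores
              for p, t in itertools.product(patient_types, therapies)}

    # For each patient type, pick the therapy with the highest score.
    for p in patient_types:
        best = max(therapies, key=lambda t: scores[(p, t)])
        print(f"{p}: best therapy = {best} (score {scores[(p, best)]:.2f})")
    ```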
    For COVID-19 as well as other conditions, the team’s approach could enable investigators to enrich a clinical trial with patients most likely to respond to a given drug. “Such enrichment based on prospectively predicted biomarkers is a potential strategy for increasing precision of clinical trials and accelerating therapy development,” says co-senior author Triantafyllos Stylianopoulos, PhD, an associate professor at the University of Cyprus.
    Other co-authors include Sonu Subudhi, Chrysovalantis Voutouri, C. Corey Hardin, Mohammad Reza Nikmaneshi, Melin J. Khandekar and Sayon Dutta from MGH; and Ankit B. Patel and Ashish Verma from Brigham and Women’s Hospital.
    Funding for the study was provided by the National Institutes of Health, Harvard Ludwig Cancer Center, Niles Albright Research Foundation and Jane’s Trust Foundation. Voutouri is a recipient of a Marie Skłodowska-Curie Actions Individual Fellowship.
    Story Source:
    Materials provided by Massachusetts General Hospital. Note: Content may be edited for style and length.

  • The first AI breast cancer sleuth that shows its work

    Computer engineers and radiologists at Duke University have developed an artificial intelligence platform to analyze potentially cancerous lesions in mammography scans to determine if a patient should receive an invasive biopsy. But unlike its many predecessors, this algorithm is interpretable, meaning it shows physicians exactly how it came to its conclusions.
    The researchers trained the AI to locate and evaluate lesions just like an actual radiologist would be trained, rather than allowing it to freely develop its own procedures, giving it several advantages over its “black box” counterparts. It could make for a useful training platform to teach students how to read mammography images. It could also help physicians who do not regularly read mammography scans, such as those in sparsely populated regions around the world, make better health care decisions.
    The results appeared online December 15 in the journal Nature Machine Intelligence.
    “If a computer is going to help make important medical decisions, physicians need to trust that the AI is basing its conclusions on something that makes sense,” said Joseph Lo, professor of radiology at Duke. “We need algorithms that not only work, but explain themselves and show examples of what they’re basing their conclusions on. That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.”
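    The article does not spell out the algorithm, but one common way for a model to “show examples of what it’s basing its conclusions on” is case-based reasoning against labelled prototypes. The sketch below is a hypothetical illustration in that spirit; the feature vectors, prototype set, and distance rule are all invented, not the Duke system's architecture.

    ```python
    # Minimal sketch of prototype-based (case-based) classification that
    # reports which labelled example drove each decision. All data here is
    # random and hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 8-dimensional feature vectors for prototype lesions
    # whose labels a radiologist has confirmed.
    prototypes = rng.normal(size=(6, 8))
    labels = ["benign", "benign", "benign",
              "malignant", "malignant", "malignant"]

    def classify_with_explanation(lesion: np.ndarray):
        """Label a lesion by its nearest prototype and report which
        prototype drove the decision, so a physician can inspect it."""
        dists = np.linalg.norm(prototypes - lesion, axis=1)
        nearest = int(np.argmin(dists))
        return labels[nearest], nearest, dists[nearest]

    label, proto_id, dist = classify_with_explanation(rng.normal(size=8))
    print(f"prediction: {label} "
          f"(most similar to prototype #{proto_id}, distance {dist:.2f})")
    ```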
    Engineering AI that reads medical images is a huge industry. Thousands of independent algorithms already exist, and the FDA has approved more than 100 of them for clinical use. Whether reading MRI, CT or mammogram scans, however, very few of them use validation datasets with more than 1000 images or contain demographic information. This dearth of information, coupled with the recent failures of several notable examples, has led many physicians to question the use of AI in high-stakes medical decisions.
    In one instance, an AI model failed even when researchers trained it with images taken from different facilities using different equipment. Rather than focusing exclusively on the lesions of interest, the AI learned to use subtle differences introduced by the equipment itself to recognize images coming from the cancer ward, assigning those lesions a higher probability of being cancerous. As one would expect, the AI did not transfer well to other hospitals using different equipment. But because nobody knew what the algorithm was looking at when making decisions, nobody knew it was destined to fail in real-world applications.

  • AI accurately diagnoses prostate cancer, study shows

    Researchers at Karolinska Institutet in Sweden, together with international collaborators, have completed a comprehensive international validation of artificial intelligence (AI) for diagnosing and grading prostate cancer. The study, published in Nature Medicine, shows that AI systems can identify and grade prostate cancer in tissue samples from different countries as well as pathologists can. The results suggest AI systems are ready to be responsibly introduced as a complementary tool in prostate cancer care, the researchers say.
    The international validation was performed via a competition called PANDA. The competition lasted for three months and challenged more than 1000 AI experts to develop systems for accurately grading prostate cancer.
    Rapid innovation
    “Only ten days into the competition, algorithms matching average pathologists were developed. Organising PANDA shows how competitions can accelerate rapid innovation for solving specific problems in healthcare with the help of AI,” says Kimmo Kartasalo, a researcher at the Department of Medical Epidemiology and Biostatistics at Karolinska Institutet and corresponding author of the study.
    A problem in today’s prostate cancer diagnostics is that different pathologists can arrive at different conclusions even for the same tissue samples, which means that treatment decisions are based on uncertain information. The researchers believe the use of AI technology holds great potential for improved reproducibility, that is, increased consistency of the assessments of tissue samples irrespective of which pathologist performs the evaluation, leading to more accurate treatment selection.
    Accurate diagnostics
    The KI researchers have shown in earlier studies that AI systems can indicate whether a tissue sample contains cancer or not, estimate the amount of tumour tissue in the biopsy, and grade the severity of prostate cancer comparably to international experts. However, the main challenge associated with implementing AI in healthcare is that AI systems are often highly sensitive to data that differ from the data used for training, and may consequently not produce reliable and robust results when applied in other hospitals and other countries.

  • When water is coming from all sides

    When Hurricanes Harvey (2017) and Florence (2018) hit, it was not solely the storm surge from the Gulf of Mexico and Atlantic Ocean that led to flooding. Inland sources, like rain-swollen rivers, lakes, and suburban culverts, also contributed significantly. These factors were missed by many computer models at the time, which underestimated the flood risk.
    “People don’t care as much about whether flooding is coming from the river or the ocean, especially when both contribute to water levels; they want to know, ‘Is my house going to be flooded?’” said Edward Myers, branch chief of the Coastal Marine Modeling Branch, located in the Coast Survey Development Laboratory at the National Oceanic and Atmospheric Administration (NOAA).
    Myers and his colleagues at NOAA are collaborating with Y. Joseph Zhang from the Virginia Institute of Marine Science (VIMS) at William & Mary to develop and test the world’s first three-dimensional operational storm surge model.
    “We started with the right attitude and the right core algorithm,” joked Zhang, research professor at the Center for Coastal Resources Management. “Over the years, we’ve re-engineered the dynamic core multiple times and that led to the current modeling system.”
    Now in its third incarnation, the Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM) forecasts coastal flooding in Taiwan, at agencies across the European Union, and elsewhere. It is being considered for operational use by NOAA. (The researchers described the system in the Nov. 2021 issue of EOS, the science news magazine of the American Geophysical Union.)
    SCHISM is designed to serve the needs of a wide range of potential users. “Compound surge and flooding is a world-wide hazard,” Zhang said. “It’s notoriously challenging, especially in the transition zone where the river meets the sea. Lots of factors come into play and interact non-linearly.”
    Surrounding the hydrodynamic core of SCHISM are numerous modules that simulate other phenomena important to flooding. These include air-sea exchange, vegetation, and sediment. Other modules adapt the system for specific events, like oil spills, or to predict conditions, like water quality.
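    As a purely illustrative sketch of the modular architecture just described (a hydrodynamic core surrounded by pluggable process modules), consider the following. The class names and toy dynamics are assumptions; SCHISM itself is a full-scale modeling system whose real code is not shown here.

    ```python
    # Hypothetical sketch of a core-plus-modules design like the one the
    # article describes. None of these classes come from SCHISM itself.

    class ProcessModule:
        """Base class for phenomena coupled to the hydrodynamic core."""
        name = "base"
        def step(self, state: dict) -> dict:
            return state  # default: no effect on the state

    class AirSeaExchange(ProcessModule):
        name = "air-sea exchange"

    class Sediment(ProcessModule):
        name = "sediment"

    class HydroCore:
        """Stand-in for the hydrodynamic core; modules run each time step."""
        def __init__(self, modules):
            self.modules = modules
        def advance(self, state: dict, steps: int) -> dict:
            for _ in range(steps):
                state["surge_m"] += 0.01   # toy surge dynamics
                for module in self.modules:
                    state = module.step(state)
            return state

    model = HydroCore([AirSeaExchange(), Sediment()])
    print(model.advance({"surge_m": 0.0}, steps=10))
    ```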

  • Machine learning for morphable materials

    Flat materials that can morph into three-dimensional shapes have potential applications in architecture, medicine, robotics, space travel, and much more. But programming these shape changes requires complex and time-consuming computations.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a platform that uses machine learning to program the transformation of 2D stretchable surfaces into specific 3D shapes.
    “While machine learning methods have been classically employed for image recognition and language processing, they have also recently emerged as powerful tools to solve mechanics problems,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. “In this work we demonstrate that these tools can be extended to study the mechanics of transformable, inflatable systems.”
    The research is published in Advanced Functional Materials.
    The research team began by dividing an inflatable membrane into a 10×10 grid of 100 square pixels that can be either soft or stiff. The soft or stiff pixels can be combined in an almost infinite variety of configurations, making manual programming extremely difficult. That’s where machine learning comes in.
    The researchers used what are known as finite element simulations to sample this infinite design space. Then neural networks used that sample to learn how the location of soft and stiff pixels controls the deformation of the membrane when it is pressurized.
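    A minimal sketch of the learning setup described above, assuming the task is to map a 10×10 soft/stiff design to a deformation summary. The mock simulator below is a stand-in for the finite element model, and a linear least-squares fit stands in for the paper's neural networks; everything here is illustrative.

    ```python
    # Illustrative sketch: learn a surrogate from sampled designs to mock
    # "deformations." fake_fem stands in for the finite element simulator,
    # and a least-squares fit stands in for the neural network.

    import numpy as np

    rng = np.random.default_rng(0)
    W_TRUE = rng.normal(size=(3, 100))  # fixed stand-in "physics"

    def fake_fem(design: np.ndarray) -> np.ndarray:
        """Mock FEM: map a 100-pixel soft/stiff design (0/1 values) to a
        3-number deformation summary."""
        return np.tanh(W_TRUE @ design)

    # Sample the design space, as the article says the team did with FEM.
    X = rng.integers(0, 2, size=(500, 100)).astype(float)
    Y = np.array([fake_fem(x) for x in X])

    # Fit the surrogate (an approximation, since fake_fem is nonlinear).
    W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

    x_new = rng.integers(0, 2, size=100).astype(float)
    print("simulated :", fake_fem(x_new))
    print("predicted :", x_new @ W_fit)
    ```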