More stories

  • Human Lung Chip leveraged to faithfully model radiation-induced lung injury

    The lung is one of the tissues most sensitive to radiation in the human body. People exposed to high radiation doses following nuclear incidents develop radiation-induced lung injury (RILI), which affects the function of many cell types in the lung, causing acute and sustained inflammation, and in the longer term, the thickening and scarring of lung tissue known as fibrosis. RILI also is a common side effect of radiation therapy administered to cancer patients to kill malignant cells in their bodies, and can limit the maximum radiation dose doctors can use to control their tumors, as well as dramatically impair patients’ quality of life.
    Anti-inflammatory drugs given to patients during radiation therapy can dampen the inflammation in the lungs, called pneumonitis, but not all patients respond equally well. This is because RILI is a complex disorder that varies between patients and is influenced by risk factors such as age, lung cancer state, and other pre-existing lung diseases, and likely by the patient’s genetic makeup. In the event of nuclear accidents, which usually involve a one-time exposure to much higher doses of radiation, no medical countermeasures are yet available to prevent or protect against damage to the lungs and other organs, making their development a key priority of the US Food and Drug Administration (FDA).
    A major obstacle to developing a much deeper understanding of the pathological processes triggered by radiation in the lung and other organs, which is the basis for discovering medical countermeasures, is the lack of experimental model systems that recapitulate how exactly the damage occurs in people. Small animal preclinical models fail to produce key hallmarks of the human pathophysiology and do not mimic the dose sensitivities observed in humans. And although non-human primate models are considered the gold-standard for radiation injury, they are in short supply, costly, and raise serious ethical concerns; they also are not human and sometimes fail to predict responses observed when drugs move into the clinic.
    Now, a multi-disciplinary research team at the Wyss Institute for Biologically Inspired Engineering at Harvard University and Boston Children’s Hospital led by Wyss Founding Director Donald Ingber, M.D., Ph.D., in an FDA-funded project, has developed a human in vitro model that closely mimics the complexities of RILI and radiation dose sensitivity of the human lung. Lung alveoli are the small air sacs where oxygen and CO2 exchange between the lung and blood takes place, and the major site of radiation pneumonitis. Using a previously developed microfluidic human Lung Alveolus Chip lined by human lung alveolar epithelial cells interfaced with lung capillary cells to recreate the alveolar-capillary interface in vitro, the researchers recapitulated many of the hallmarks of RILI, including radiation-induced DNA damage in lung tissue, cell-specific changes in gene expression, inflammation, and injury to both the lung epithelial cells and blood vessel-lining endothelial cells. By also evaluating the potential of two drugs to suppress the effects of acute RILI, the researchers demonstrated their model’s capabilities as an advanced, human-relevant, preclinical, drug discovery platform. The findings are published in Nature Communications.
    “Forming a better understanding of how radiation injury occurs and finding new strategies to treat and prevent it poses a multifaceted challenge that, in the face of nuclear threats and the realities of current cancer therapies, needs entirely new solutions,” said Ingber. “The Lung Chip model that we developed to recapitulate the development of RILI leverages our extensive microfluidic Organ Chip culture expertise and, in combination with new analytical and computational drug and biomarker discovery tools, gives us powerful new inroads into this problem.” Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and the Hansjörg Wyss Professor of Bioinspired Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.
    Advanced human in vitro model of RILI
    The human Lung Alveolus Chip is a 2-channel microfluidic culture system in which primary human lung alveolar epithelial cells are cultured in one channel, where they are exposed to air as they would be in the lung. Across a porous membrane, they are interfaced with primary human lung capillary endothelial cells in the parallel channel, which is constantly perfused with a blood-like nutrient medium containing circulating human immune cells that can also contribute to radiation responses. This carefully engineered, immunologically active alveolar-capillary interface also experiences cyclic mechanical movements mimicking actual breathing motions. Importantly, this living, breathing Lung Chip can be transiently exposed to clinically relevant doses of radiation and then monitored for the resulting effects over an extended period of time. More

  • AI models identify biodiversity from animal sounds in tropical rainforests

    Tropical forests are among the most important habitats on our planet. They are characterised by extremely high species diversity and play a prominent role in the global carbon cycle and the world’s climate. However, many tropical forest areas have been deforested, and overexploitation continues day by day.
    Reforested areas in the tropics are therefore becoming increasingly important for climate and biodiversity. How well biodiversity recovers in such areas can be monitored effectively through automated analysis of animal sounds, researchers report in the journal Nature Communications.
    Recordings on Former Cocoa Plantations and Pastures
    As part of the DFG research group Reassembly, the team worked in northern Ecuador on abandoned pastures and former cacao plantations where forest is gradually reestablishing itself. There, they investigated whether autonomous sound recorders and artificial intelligence (AI) can be used to automatically recognise how the species communities of birds, amphibians and mammals are composed.
    “The research results show that the sound data reflect the return of biodiversity in abandoned agricultural areas extremely well,” says Professor Jörg Müller, head of the Ecological Station Fabrikschleichach at Julius-Maximilians-Universität (JMU) Würzburg, who led the study together with his colleague Oliver Mitesser.
    Overall, it is above all the communities of vocalising species that mirror the recolonisation, closely tracking the recovery gradients. A preliminary set of 70 AI bird models was able to describe the entire species communities of birds, amphibians and some calling mammals. Even changes in nocturnal insects could be meaningfully correlated with them.
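    The paper’s actual pipeline is not described in this summary, but the general idea can be sketched as follows. The Python snippet below is a hypothetical illustration only: it simulates classifier detections (there is no real audio or model here), aggregates them into per-site species counts, and correlates species richness with a made-up forest-recovery gradient.

    ```python
    # Hypothetical illustration: turning per-recording species detections into
    # site-level community data and relating them to a recovery gradient.
    # The detections below are simulated; a real pipeline would run an audio
    # classifier over field recordings instead.
    from collections import Counter

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented sites with species pools that grow along the recovery gradient.
    sites = {"pasture": 5, "regrowth_10yr": 15, "regrowth_25yr": 30, "old_growth": 45}
    detections = []
    for site, pool_size in sites.items():
        for _ in range(200):  # 200 simulated detections per site
            detections.append((site, f"species_{rng.integers(pool_size):03d}"))

    # Build per-site species counts (a simple community matrix).
    counts = {site: Counter() for site in sites}
    for site, sp in detections:
        counts[site][sp] += 1
    richness = {site: len(counts[site]) for site in sites}

    # Correlate species richness with forest age (a crude recovery gradient).
    forest_age = {"pasture": 0, "regrowth_10yr": 10, "regrowth_25yr": 25, "old_growth": 60}
    x = np.array([forest_age[s] for s in sites])
    y = np.array([richness[s] for s in sites])
    r = np.corrcoef(x, y)[0, 1]
    print("Species richness per site:", richness)
    print(f"Correlation between recovery gradient and richness: r = {r:.2f}")
    ```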
    AI Models are Being Further Refined
    The team is currently working on further improving the AI models used and expanding the set of models. The goal is to be able to automatically record even more species. The models are also to be deployed in other protected areas in Ecuador, in the Sailershausen JMU Forest, and in Germany’s oldest national park, the Bavarian Forest National Park.
    “Our AI models can be the basis for a very universal tool for monitoring biodiversity in reforested areas,” says Jörg Müller. The Würzburg professor sees possible applications, for example, in the context of certifications or biodiversity credits. Biodiversity credits function similarly to carbon dioxide emissions trading. They are issued by projects that protect or improve biodiversity. They are purchased by companies or organisations that want to compensate for negative impacts of their activities. More

  • Virtual reality helps people with hoarding disorder practice decluttering

    Many people who dream of an organized, uncluttered home à la Marie Kondo find it hard to decide what to keep and what to let go. But for those with hoarding disorder — a mental condition estimated to affect 2.5% of the U.S. population — the reluctance to let go can reach dangerous and debilitating levels.
    Now, a pilot study by Stanford Medicine researchers suggests that a virtual reality therapy that allows those with hoarding disorder to rehearse relinquishing possessions in a simulation of their own home could help them declutter in real life. The simulations can help patients practice organizational and decision-making skills learned in cognitive behavioral therapy — currently the standard treatment — and desensitize them to the distress they feel when discarding.
    The study was published in the October issue of the Journal of Psychiatric Research.
    A hidden problem
    Hoarding disorder is an under-recognized and under-treated condition that has been included in the Diagnostic and Statistical Manual of Mental Disorders — referred to as the DSM-5 — as a formal diagnosis only since 2013. People with the disorder, who tend to be older, have persistent difficulty parting with possessions, resulting in an accumulation of clutter that impairs their relationships, their work and even their safety.
    “Unfortunately, stigma and shame prevent people from seeking help for hoarding disorder,” said Carolyn Rodriguez, MD, PhD, professor of psychiatry and behavioral sciences and senior author of the study. “They may also be unwilling to have anyone else enter the home to help.”
    Sometimes the condition is discovered through cluttered backgrounds on Zoom calls or, tragically, when firefighters respond to a fire, Rodriguez said. Precarious piles of stuff not only prevent people from sleeping in their beds and cooking in their kitchens, but they can also attract pests; block fire exits; and collapse on occupants, first responders and clinicians offering treatment. More

  • New polymer membranes, AI predictions could dramatically reduce energy, water use in oil refining

    A new kind of polymer membrane created by researchers at Georgia Tech could reshape how refineries process crude oil, dramatically reducing the energy and water required while extracting even more useful materials.
    The so-called DUCKY polymers — more on the unusual name in a minute — are reported Oct. 16 in Nature Materials. And they’re just the beginning for the team of Georgia Tech chemists, chemical engineers, and materials scientists. They also have created artificial intelligence tools to predict the performance of these kinds of polymer membranes, which could accelerate development of new ones.
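    The article does not describe how the team’s AI tools work internally; the snippet below is only a generic sketch of the idea of predicting membrane performance from polymer descriptors. The features, the data, and the model choice (a random forest trained on synthetic data) are all invented for illustration.

    ```python
    # Hypothetical sketch: predicting a membrane property from polymer descriptors.
    # Descriptors, data, and target are invented; this is not the Georgia Tech
    # team's actual model or dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Toy dataset: each row is a candidate polymer described by three made-up
    # features (e.g., spirocyclic monomer fraction, free volume, chain rigidity).
    X = rng.uniform(0.0, 1.0, size=(200, 3))
    # Synthetic "permeability" target with some nonlinearity and noise.
    y = 5 * X[:, 0] + 2 * X[:, 1] ** 2 - 3 * X[:, 0] * X[:, 2] + rng.normal(0, 0.1, 200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")

    # Screen a new hypothetical candidate before trying to synthesize it.
    candidate = np.array([[0.7, 0.4, 0.2]])
    print(f"Predicted permeability (arbitrary units): {model.predict(candidate)[0]:.2f}")
    ```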
    The implications are stark: the initial separation of crude oil components is responsible for roughly 1% of energy used across the globe. What’s more, the membrane separation technology the researchers are developing could have several uses, from biofuels and biodegradable plastics to pulp and paper products.
    “We’re establishing concepts here that we can then use with different molecules or polymers, but we apply them to crude oil because that’s the most challenging target right now,” said M.G. Finn, professor and James A. Carlos Family Chair in the School of Chemistry and Biochemistry.
    Crude oil in its raw state includes thousands of compounds that have to be processed and refined to produce useful materials — gas and other fuels, as well as plastics, textiles, food additives, medical products, and more. Squeezing out the valuable stuff involves dozens of steps, but it starts with distillation, a water- and energy-intensive process.
    Researchers have been trying to develop membranes to do that work instead, filtering out the desirable molecules and skipping all the boiling and cooling.
    “Crude oil is an enormously important feedstock for almost all aspects of life, and most people don’t think about how it’s processed,” said Ryan Lively, Thomas C. DeLoach Jr. Professor in the School of Chemical and Biomolecular Engineering. “These distillation systems are massive water consumers, and the membranes simply are not. They’re not using heat or combustion. They just use electricity. You could ostensibly run it off of a wind turbine, if you wanted. It’s just a fundamentally different way of doing a separation.”
    What makes the team’s new membrane formula so powerful is a new family of polymers. The researchers used building blocks called spirocyclic monomers that assemble together in chains with lots of 90-degree turns, forming a kinky material that doesn’t compress easily and forms pores that selectively bind and permit desirable molecules to pass through. The polymers are not rigid, which means they’re easier to make in large quantities. They also have a well-controlled flexibility or mobility that allows pores of the right filtering structure to come and go over time. More

  • Deep neural networks don’t see the world the way we do

    Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.
    Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.
    When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.
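    The study’s exact procedure is not spelled out in this summary; as a rough sketch of the general idea, the hypothetical PyTorch snippet below optimizes a random input until a tiny stand-in network responds to it the same way it responds to a reference input. With a real trained vision or audio model, the stimuli produced this way are the kind that can look or sound unrecognizable to humans.

    ```python
    # Hypothetical sketch of response matching: synthesize an input that a model
    # responds to in the same way it responds to a reference input.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Stand-in model: a tiny untrained MLP. A real experiment would use a trained
    # vision or audio network and match responses at a chosen layer.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
    model.eval()

    reference = torch.randn(1, 64)                      # stands in for a natural stimulus
    with torch.no_grad():
        target_response = model(reference)              # response to be matched

    synthetic = torch.randn(1, 64, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([synthetic], lr=0.05)

    for step in range(500):
        optimizer.zero_grad()
        loss = F.mse_loss(model(synthetic), target_response)
        loss.backward()
        optimizer.step()

    print(f"Final response mismatch: {loss.item():.6f}")
    print(f"Input-space distance from reference: {(synthetic - reference).norm().item():.2f}")
    ```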
    The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.
    “This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”
    Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.
    Different perceptions
    In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems. More

  • Virtual driving assessment predicts risk of crashing for newly licensed teen drivers

    New research published today in the journal Pediatrics found that driving skills measured at the time of licensure on a virtual driving assessment (VDA), which exposes drivers to common serious crash scenarios, help predict crash risk in newly licensed young drivers.
    This study, conducted by the Center for Injury Research and Prevention (CIRP) at Children’s Hospital of Philadelphia (CHOP) with colleagues at the University of Pennsylvania and the University of Michigan, brings the research community one step closer to identifying which skill deficits put young new drivers at higher risk for crashes. With this cutting-edge information, more personalized interventions can be developed to improve the driving skills that prevent crashes.
    While drivers between the ages of 15 and 20 only make up about 5% of all drivers on the road, they are involved in approximately 12% of all vehicle crashes and 8.5% of fatal crashes. The time of greatest crash risk is in the months right after these young drivers receive their license, largely due to deficits in driving skills.
    However, many of these newly licensed drivers do avoid crashes. The challenge for policymakers, clinicians, and families has been identifying which drivers are at increased risk of crashing during the learning phase before they drive on their own. Early identification of at-risk drivers offers the opportunity to intervene with training and other resources known to help prevent crashes, making the roads safer for everyone.
    Over the past two decades, CIRP researchers have systematically determined the primary reason for novice driver crashes — inadequate driving skills, such as speed management — and conducted studies that informed the development and validation of a self-guided VDA that measures performance of these driving skills in common serious crash scenarios that cannot be evaluated with on-road testing. The VDA utilizes the Ready-Assess™ platform developed by Diagnostic Driving, Inc., an AI-driven virtual driving assessment that provides the driver with the insights and tools to improve.
    In this study, researchers examined the ability of the VDA, delivered at the time of the licensing road test, to predict crash risk in the first year after obtaining licensure in the state of Ohio. Using a unique study design, the results of the VDA were linked to police-reported crash records for the first year after obtaining a license.
    “Our previous research showed that performance on the VDA predicted actual on-road driving performance, as measured by failure on the licensing road test. This new study went further to determine whether VDA performance could identify unsafe driving performance predictive of future crash risk,” said lead study author Elizabeth Walshe, PhD, a cognitive neuroscientist and clinical researcher who directs the Neuroscience of Driving team at CIRP. “We found that drivers categorized by their performance as having major issues with dangerous behavior were at higher risk of crashing than average new drivers.”
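    The study’s statistical models are not reproduced here; the snippet below is only a simplified, hypothetical illustration of comparing first-year crash rates across VDA performance categories against the average new driver. The category names and counts are invented and are not the study’s results.

    ```python
    # Hypothetical illustration: crash rate and relative risk by VDA performance
    # category, from linked crash records. All numbers below are invented.
    linked_records = {
        # category: (newly licensed drivers, police-reported crashes in first year)
        "no_issues":              (6000, 240),
        "minor_issues":           (7000, 315),
        "major_issues_dangerous": (4000, 240),
    }

    total_drivers = sum(n for n, _ in linked_records.values())
    total_crashes = sum(c for _, c in linked_records.values())
    baseline_rate = total_crashes / total_drivers  # "average new driver" crash rate

    print(f"Overall first-year crash rate: {baseline_rate:.1%}")
    for category, (drivers, crashes) in linked_records.items():
        rate = crashes / drivers
        relative_risk = rate / baseline_rate
        print(f"{category:<24} rate = {rate:.1%}  relative risk vs. average = {relative_risk:.2f}")
    ```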
    The researchers analyzed a unique integrated dataset of individual results of VDA performance, collected in the Ohio Bureau of Motor Vehicles before the licensing road test, linked to licensing and police-reported crash records in 16,914 first-time newly licensed drivers under the age of 25. Data were collected from applicants who completed the VDA between July 2017 and December 2019 on the day they passed the on-road licensing examination in Ohio. Researchers examined crash records up to mid-March 2020. More

  • Photonic crystals bend light as though it were under the influence of gravity

    A collaborative group of researchers has manipulated the behavior of light as if it were under the influence of gravity. The findings, which were published in the journal Physical Review A on September 28, 2023, have far-reaching implications for the world of optics and materials science, and bear significance for the development of 6G communications.
    Albert Einstein’s theory of relativity has long established that the trajectory of electromagnetic waves — including light and terahertz electromagnetic waves — can be deflected by gravitational fields.
    Scientists have recently theoretically predicted that replicating the effects of gravity — i.e., pseudogravity — is possible by deforming crystals in the lower normalized energy (or frequency) region.
    “We set out to explore whether lattice distortion in photonic crystals can produce pseudogravity effects,” said Professor Kyoko Kitamura from Tohoku University’s Graduate School of Engineering.
    Photonic crystals possess unique properties that enable scientists to manipulate and control the behavior of light, serving as ‘traffic controllers’ for light within crystals. They are constructed by periodically arranging two or more different materials with varying abilities to interact with and slow down light in a regular, repeating pattern. Furthermore, pseudogravity effects due to adiabatic changes have been observed in photonic crystals.
    Kitamura and her colleagues modified photonic crystals by introducing lattice distortion: gradual deformation of the regular spacing of elements, which disrupted the grid-like pattern of the photonic crystals. This manipulated the photonic band structure of the crystals, resulting in a curved beam trajectory in-medium, just like a light ray passing by a massive celestial body such as a black hole.
    Specifically, they employed a distorted silicon photonic crystal with a nominal lattice constant of 200 micrometers and terahertz waves. Experiments successfully demonstrated the deflection of these waves.
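    The actual device geometry is not detailed in this summary; the snippet below is a purely schematic, hypothetical sketch of what “lattice distortion” means here, generating a square array of lattice sites whose nominal 200-micrometer spacing is gradually stretched along one direction.

    ```python
    # Toy illustration of a distorted lattice: a square array of sites whose
    # spacing drifts gradually along x. Schematic only, not the study's design.
    import numpy as np

    a0 = 200e-6          # nominal lattice constant: 200 micrometers
    distortion = 0.002   # fractional increase in spacing per column (made-up value)
    n_cols, n_rows = 40, 20

    positions = []
    x = 0.0
    for i in range(n_cols):
        spacing = a0 * (1 + distortion * i)   # spacing grows slowly with column index
        for j in range(n_rows):
            positions.append((x, j * a0))
        x += spacing

    positions = np.array(positions)
    print(f"{len(positions)} lattice sites")
    print(f"first-column spacing: {a0 * 1e6:.1f} um, last-column spacing: {spacing * 1e6:.1f} um")
    ```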
    “Much like gravity bends the trajectory of objects, we came up with a means to bend light within certain materials,” adds Kitamura. “Such in-plane beam steering within the terahertz range could be harnessed in 6G communication. Academically, the findings show that photonic crystals could harness gravitational effects, opening new pathways within the field of graviton physics,” said Associate Professor Masayuki Fujita from Osaka University. More

  • Researchers measure global consensus over the ethical use of AI

    To examine the global state of AI ethics, a team of researchers from Brazil performed a systematic review and meta-analysis of global guidelines for AI use. Publishing October 13 in the journal Patterns, the researchers found that, while most of the guidelines valued privacy, transparency, and accountability, very few valued truthfulness, intellectual property, or children’s rights. Additionally, most of the guidelines described ethical principles and values without proposing practical methods for implementing them and without pushing for legally binding regulation.
    “Establishing clear ethical guidelines and governance structures for the deployment of AI around the world is the first step to promoting trust and confidence, mitigating its risks, and ensuring that its benefits are fairly distributed,” says social scientist and co-author James William Santos of the Pontifical Catholic University of Rio Grande do Sul.
    “Previous work predominantly centered around North American and European documents, which prompted us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond,” says lead author Nicholas Kluge Corrêa of the Pontifical Catholic University of Rio Grande do Sul and the University of Bonn.
    To determine whether a global consensus exists regarding the ethical development and use of AI, and to help guide such a consensus, the researchers conducted a systematic review of policy and ethical guidelines published between 2014 and 2022. From this, they identified 200 documents related to AI ethics and governance from 37 countries and six continents and written or translated into five different languages (English, Portuguese, French, German, and Spanish). These documents included recommendations, practical guides, policy frameworks, legal landmarks, and codes of conduct.
    Then, the team conducted a meta-analysis of these documents to identify the most common ethical principles, examine their global distribution, and assess biases in terms of the type of organizations or people producing these documents.
    The researchers found that the most common principles were transparency, security, justice, privacy, and accountability, which appeared in 82.5%, 78%, 75.5%, 68.5%, and 67% of the documents, respectively. The least common principles were labor rights, truthfulness, intellectual property, and children/adolescent rights, which appeared in 19.5%, 8.5%, 7%, and 6% of the documents, and the authors emphasize that these principles deserve more attention. For example, truthfulness — the idea that AI should provide truthful information — is becoming increasingly relevant with the release of generative AI technologies like ChatGPT. And since AI has the potential to displace workers and change the way we work, practical measures are needed to avoid mass unemployment or monopolies.
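    As a minimal sketch of the kind of tally behind these percentages (using invented documents and principle tags rather than the authors’ data), counting how many guideline documents mention each principle might look like this:

    ```python
    # Minimal sketch of the meta-analysis tally: count how often each ethical
    # principle appears across a set of guideline documents. The documents and
    # tags below are invented for illustration.
    from collections import Counter

    documents = [
        {"transparency", "privacy", "accountability"},
        {"transparency", "security", "justice"},
        {"privacy", "security", "transparency", "children_rights"},
        {"justice", "accountability", "truthfulness"},
    ]

    counts = Counter(principle for doc in documents for principle in doc)
    n_docs = len(documents)
    for principle, count in counts.most_common():
        print(f"{principle:<16} {count}/{n_docs} documents ({100 * count / n_docs:.1f}%)")
    ```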
    Most (96%) of the guidelines were “normative” — describing ethical values that should be considered during AI development and use — while only 2% recommended practical methods of implementing AI ethics, and only 4.5% proposed legally binding forms of AI regulation. More