More stories

  • New AI tool can revolutionize microscopy

    An AI tool developed at the University of Gothenburg offers new opportunities for analysing images taken with microscopes. A study shows that the tool, which has already received international recognition, can fundamentally change microscopy and pave the way for new discoveries and areas of use within both research and industry.
    The focus of the study is deep learning, a type of artificial intelligence (AI) and machine learning that we all interact with daily, often without thinking about it. For example, when a new song pops up on Spotify that is similar to songs we have previously listened to, or when our mobile phone camera automatically finds the best settings and corrects colours in a photo.
    “Deep learning has taken the world by storm and has had a huge impact on many industries, sectors and scientific fields. We have now developed a tool that makes it possible to utilise the incredible potential of deep learning, with focus on images taken with microscopes,” says Benjamin Midtvedt, a doctoral student in physics and the main author of the study.
    Deep learning can be described as a mathematical model used to solve problems that are difficult to tackle using traditional algorithmic methods. In microscopy, the great challenge is to retrieve as much information as possible from the data-packed images, and this is where deep learning has proven to be very effective.
    The tool that Midtvedt and his research colleagues have developed involves neural networks learning to retrieve exactly the information that a researcher wants from an image by looking through a huge number of images, known as training data. The tool simplifies the process of producing training data compared with having to do so manually, so that tens of thousands of images can be generated in an hour instead of a hundred in a month.
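    The study itself does not include code, but the idea of simulated training data can be illustrated with a minimal sketch: render synthetic images of particles whose positions are known by construction, so that labelled examples come essentially for free. Everything below (image size, Gaussian spot model, noise level) is an illustrative assumption, not the Gothenburg tool.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def synthetic_frame(size=64, n_particles=5, sigma=2.0, noise=0.05):
        """Return one noisy synthetic 'microscope' image and the true particle positions."""
        yy, xx = np.mgrid[0:size, 0:size]
        positions = rng.uniform(0, size, size=(n_particles, 2))
        image = np.zeros((size, size))
        for y0, x0 in positions:
            image += np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma ** 2))
        image += rng.normal(0, noise, image.shape)   # camera noise
        return image, positions                      # image is the input, positions are the labels

    # Tens of thousands of labelled examples can be generated in minutes,
    # then used to train a neural network to locate particles in real images.
    dataset = [synthetic_frame() for _ in range(10_000)]
    ```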
    “This makes it possible to quickly extract more details from microscope images without needing to create a complicated analysis with traditional methods. In addition, the results are reproducible, and customised, specific information can be retrieved for a specific purpose.”
    For example, the tool allows the user to determine the size and material characteristics of very small particles and to easily count and classify cells. The researchers have already demonstrated that the tool can be used by industries that need to purify their emissions, since they can see in real time whether all unwanted particles have been filtered out.
    The researchers are hopeful that in the future the tool can be used to follow infections in a cell and map cellular defence mechanisms, which would open up huge possibilities for new medicines and treatments.
    “We have already seen major international interest in the tool. Regardless of the microscopic challenges, researchers can now more easily conduct analyses, make new discoveries, implement ideas and break new ground within their fields.”

    Story Source:
    Materials provided by University of Gothenburg. Note: Content may be edited for style and length.

  • Faster drug discovery through machine learning

    Drugs can only work if they stick to their target proteins in the body. Assessing that stickiness is a key hurdle in the drug discovery and screening process. New research combining chemistry and machine learning could lower that hurdle.
    The new technique, dubbed DeepBAR, quickly calculates the binding affinities between drug candidates and their targets. The approach yields precise calculations in a fraction of the time compared to previous state-of-the-art methods. The researchers say DeepBAR could one day quicken the pace of drug discovery and protein engineering.
    “Our method is orders of magnitude faster than before, meaning we can have drug discovery that is both efficient and reliable,” says Bin Zhang, the Pfizer-Laubach Career Development Professor in Chemistry at MIT, an associate member of the Broad Institute of MIT and Harvard, and a co-author of a new paper describing the technique.
    The research appears today in the Journal of Physical Chemistry Letters. The study’s lead author is Xinqiang Ding, a postdoc in MIT’s Department of Chemistry.
    The affinity between a drug molecule and a target protein is measured by a quantity called the binding free energy — the smaller the number, the stickier the bind. “A lower binding free energy means the drug can better compete against other molecules,” says Zhang, “meaning it can more effectively disrupt the protein’s normal function.” Calculating the binding free energy of a drug candidate provides an indicator of a drug’s potential effectiveness. But it’s a difficult quantity to nail down.
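    For a sense of scale (a standard thermodynamic relation, not a figure from the paper), the ratio of binding strengths of two candidates grows exponentially with the gap in their binding free energies, so even a 2 kcal/mol difference matters a great deal.

    ```python
    import math

    RT = 0.0019872 * 298.15        # gas constant (kcal/mol/K) times room temperature (K), about 0.59 kcal/mol
    delta_delta_G = 2.0            # hypothetical difference in binding free energy, kcal/mol
    ratio = math.exp(delta_delta_G / RT)
    print(f"~{ratio:.0f}-fold difference in binding strength")   # roughly 29-fold
    ```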
    Methods for computing binding free energy fall into two broad categories, each with its own drawbacks. One category calculates the quantity exactly, eating up significant time and computer resources. The second category is less computationally expensive, but it yields only an approximation of the binding free energy. Zhang and Ding devised an approach to get the best of both worlds.

    Exact and efficient
    DeepBAR computes binding free energy exactly, but it requires just a fraction of the calculations demanded by previous methods. The new technique combines traditional chemistry calculations with recent advances in machine learning.
    The “BAR” in DeepBAR stands for “Bennett acceptance ratio,” a decades-old algorithm used in exact calculations of binding free energy. Using the Bennett acceptance ratio typically requires knowledge of two “endpoint” states (e.g., a drug molecule bound to a protein and a drug molecule completely dissociated from a protein), plus knowledge of many intermediate states (e.g., varying levels of partial binding), all of which bog down calculation speed.
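    As a rough illustration of what the Bennett acceptance ratio computes (a textbook form of the estimator, not the DeepBAR code): given samples of the forward and reverse work between two states, BAR solves a self-consistency equation for the free energy difference between them.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def bar_delta_f(w_forward, w_reverse, beta=1.0):
        """Bennett acceptance ratio: solve for the free energy difference dF given
        forward (A->B) and reverse (B->A) work samples, in units where beta = 1/kT."""
        n_f, n_r = len(w_forward), len(w_reverse)
        m = np.log(n_f / n_r)

        def imbalance(df):
            fwd = 1.0 / (1.0 + np.exp(m + beta * (w_forward - df)))
            rev = 1.0 / (1.0 + np.exp(-m + beta * (w_reverse + df)))
            return fwd.sum() - rev.sum()

        # assumes the true dF lies inside this bracket (in kT units)
        return brentq(imbalance, -100.0, 100.0)

    # toy check: Gaussian work distributions consistent with a known dF of 3 kT
    rng = np.random.default_rng(0)
    true_df, sigma = 3.0, 1.0
    w_f = rng.normal(true_df + sigma**2 / 2, sigma, 5000)    # forward work samples
    w_r = rng.normal(-true_df + sigma**2 / 2, sigma, 5000)   # reverse work samples
    print(bar_delta_f(w_f, w_r))                             # should be close to 3.0
    ```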
    DeepBAR slashes those in-between states by deploying the Bennett acceptance ratio in machine-learning frameworks called deep generative models. “These models create a reference state for each endpoint, the bound state and the unbound state,” says Zhang. These two reference states are similar enough that the Bennett acceptance ratio can be used directly, without all the costly intermediate steps.
    In using deep generative models, the researchers were borrowing from the field of computer vision. “It’s basically the same model that people use to do computer image synthesis,” says Zhang. “We’re sort of treating each molecular structure as an image, which the model can learn. So, this project is building on the effort of the machine learning community.”
    While adapting a computer vision approach to chemistry was DeepBAR’s key innovation, the crossover also raised some challenges. “These models were originally developed for 2D images,” says Ding. “But here we have proteins and molecules — it’s really a 3D structure. So, adapting those methods in our case was the biggest technical challenge we had to overcome.”

    A faster future for drug screening
    In tests using small protein-like molecules, DeepBAR calculated binding free energy nearly 50 times faster than previous methods. Zhang says that efficiency means “we can really start to think about using this to do drug screening, in particular in the context of Covid. DeepBAR has the exact same accuracy as the gold standard, but it’s much faster.” The researchers add that, in addition to drug screening, DeepBAR could aid protein design and engineering, since the method could be used to model interactions between multiple proteins.
    DeepBAR is “a really nice computational work” with a few hurdles to clear before it can be used in real-world drug discovery, says Michael Gilson, a professor of pharmaceutical sciences at the University of California at San Diego, who was not involved in the research. He says DeepBAR would need to be validated against complex experimental data. “That will certainly pose added challenges, and it may require adding in further approximations.”
    In the future, the researchers plan to improve DeepBAR’s ability to run calculations for large proteins, a task made feasible by recent advances in computer science. “This research is an example of combining traditional computational chemistry methods, developed over decades, with the latest developments in machine learning,” says Ding. “So, we achieved something that would have been impossible before now.”

  • Researchers enhance Alzheimer's disease classification through artificial intelligence

    Warning signs for Alzheimer’s disease (AD) can begin in the brain years before the first symptoms appear. Spotting these clues may allow for lifestyle changes that could possibly delay the disease’s destruction of the brain.
    “Improving the diagnostic accuracy of Alzheimer’s disease is an important clinical goal. If we are able to increase the diagnostic accuracy of the models in ways that can leverage existing data such as MRI scans, then that can be hugely beneficial,” explained corresponding author Vijaya B. Kolachalama, PhD, assistant professor of medicine at Boston University School of Medicine (BUSM).
    Using an advanced AI (artificial intelligence) framework based on game theory (known as a generative adversarial network, or GAN), Kolachalama and his team processed brain images of both low and high quality to generate a model that was able to classify Alzheimer’s disease with improved accuracy.
    The quality of an MRI scan depends on the scanner used. For example, a 1.5 Tesla magnet scanner produces a slightly lower quality image than a 3 Tesla magnet scanner; the magnetic strength is a key parameter of a specific scanner. The researchers obtained brain MR images of the same subjects, taken at the same time on both 1.5 Tesla and 3 Tesla scanners, and developed a GAN model that learned from both sets of images.
    As the model was “learning” from the 1.5 Tesla and 3 Tesla images, it generated images of better quality than those from the 1.5 Tesla scanner, and these generated images also better predicted the Alzheimer’s disease status of these individuals than could be achieved using models based on 1.5 Tesla images alone. “Our model essentially can take 1.5 Tesla scanner derived images and generate images that are of better quality and we can also use the derived images to better predict Alzheimer’s disease than what we could possibly do using just 1.5 Tesla-based images alone,” he added.
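    The paired-training idea can be sketched with a generic image-to-image GAN (a minimal pix2pix-style illustration assuming PyTorch and paired 1.5T/3T slices as tensors; the authors' actual architecture and data handling are not shown here): a generator learns to map a 1.5 Tesla-like slice toward its paired 3 Tesla slice, while a discriminator learns to tell generated slices from real 3 Tesla slices.

    ```python
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a 1.5T-like slice toward a 3T-like slice by predicting a residual."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )
        def forward(self, x):            # x: (batch, 1, H, W)
            return x + self.net(x)

    class Discriminator(nn.Module):
        """Outputs a real/generated logit for a 3T-like slice."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
            )
        def forward(self, x):
            return self.net(x)

    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

    def train_step(low_field, high_field):
        """One update on a paired batch of 1.5T-like and 3T-like slices."""
        # Discriminator: real 3T slices -> 1, generated slices -> 0
        fake = gen(low_field).detach()
        d_loss = adv_loss(disc(high_field), torch.ones(len(high_field), 1)) + \
                 adv_loss(disc(fake), torch.zeros(len(fake), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: fool the discriminator while staying close to the paired 3T slice
        fake = gen(low_field)
        g_loss = adv_loss(disc(fake), torch.ones(len(fake), 1)) + 100.0 * l1_loss(fake, high_field)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()
    ```

    Calling train_step over many paired batches trains the generator; in the study it is the enhanced images, fed to a separate classification model, that yield the improved Alzheimer’s prediction.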
    Globally, the population aged 65 and over is growing faster than all other age groups. By 2050, one in six people in the world will be over age 65. The total healthcare cost for the treatment of AD in 2020 was estimated at $305 billion and is expected to increase to more than $1 trillion as the population ages. The disease also places a severe burden on patients and their caregivers; family caregivers of AD patients in particular face extreme hardship and distress that represents a major but often hidden burden.
    According to the researchers, it may be possible to generate images of enhanced quality for disease cohorts that have previously used 1.5T scanners, and for those centers that continue to rely on 1.5T scanners. “This would allow us to reconstruct the earliest phases of AD, and build a more accurate model of predicting Alzheimer’s disease status than would otherwise be possible using data from 1.5T scanners alone,” said Kolachalama.
    He hopes that such advanced AI methods can be put to good use so that the medical imaging community can get the best out of the advances in AI. Such frameworks, he believes, can be used to harmonize imaging data across multiple studies so that models can be developed and compared across different populations. This could lead to the development of better approaches to diagnosing AD.
    These findings appear online in the journal Alzheimer’s Research & Therapy.
    Story Source:
    Materials provided by Boston University School of Medicine. Note: Content may be edited for style and length.

  • Standard digital camera and AI to monitor soil moisture for affordable smart irrigation

    Researchers at UniSA have developed a cost-effective new technique to monitor soil moisture using a standard digital camera and machine learning technology.
    The United Nations predicts that by 2050 many areas of the planet may not have enough fresh water to meet the demands of agriculture if we continue our current patterns of use.
    One solution to this global dilemma is the development of more efficient irrigation, central to which is precision monitoring of soil moisture, allowing sensors to guide ‘smart’ irrigation systems to ensure water is applied at the optimum time and rate.
    Current methods for sensing soil moisture are problematic — buried sensors are susceptible to salts in the substrate and require specialised hardware for connections, while thermal imaging cameras are expensive and can be compromised by climatic conditions such as sunlight intensity, fog, and clouds.
    Researchers from The University of South Australia and Baghdad’s Middle Technical University have developed a cost-effective alternative that may make precision soil monitoring simple and affordable in almost any circumstance.
    A team including UniSA engineers Dr Ali Al-Naji and Professor Javaan Chahl has successfully tested a system that uses a standard RGB digital camera to accurately monitor soil moisture under a wide range of conditions.

    “The system we trialled is simple, robust and affordable, making it promising technology to support precision agriculture,” Dr Al-Naji says.
    “It is based on a standard video camera which analyses the differences in soil colour to determine moisture content. We tested it at different distances, times and illumination levels, and the system was very accurate.”
    The camera was connected to an artificial neural network (ANN), a form of machine learning software that the researchers trained to recognise different soil moisture levels under different sky conditions.
    Using this ANN, the monitoring system could potentially be trained to recognise the specific soil conditions of any location, allowing it to be customised for each user and updated for changing climatic circumstances, ensuring maximum accuracy.
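    A minimal sketch of the colour-to-moisture idea (assuming scikit-learn and Pillow; the features, network size, and the synthetic data used to make the sketch run end to end are illustrative assumptions, not the UniSA system): extract simple colour statistics from each photo and train a small neural network to map them to a measured moisture value.

    ```python
    import numpy as np
    from PIL import Image
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    def colour_features(path):
        """Mean and standard deviation of each RGB channel of a soil photo."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        return np.concatenate([rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))])

    # In practice X would come from colour_features() on labelled soil photos;
    # synthetic features are used here so the sketch runs without image files.
    rng = np.random.default_rng(0)
    moisture = rng.uniform(0.05, 0.40, 1000)                       # measured volumetric water content
    base = np.stack([0.45 - 0.6 * moisture,                        # soil darkens as it gets wetter
                     0.35 - 0.5 * moisture,
                     0.28 - 0.4 * moisture], axis=1)
    X = np.hstack([base, np.full((1000, 3), 0.05)]) + rng.normal(0, 0.01, (1000, 6))

    X_train, X_test, y_train, y_test = train_test_split(X, moisture, test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out samples:", model.score(X_test, y_test))
    ```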
    “Once the network has been trained it should be possible to achieve controlled irrigation by maintaining the appearance of the soil at the desired state,” Prof Chahl says.
    “Now that we know the monitoring method is accurate, we are planning to design a cost-effective smart-irrigation system based on our algorithm using a microcontroller, USB camera and water pump that can work with different types of soils.
    “This system holds promise as a tool for improved irrigation technologies in agriculture in terms of cost, availability and accuracy under changing climatic conditions.”

    Story Source:
    Materials provided by University of South Australia. Note: Content may be edited for style and length.

  • Calls to poison centers about high-powered magnets increased by 444% after ban lifted

    High-powered magnets are small, shiny magnets made from powerful rare earth metals. Since they started showing up in children’s toys in the early 2000s and then later in desk sets in 2009, high-powered magnets have caused thousands of injuries and are considered to be among the most dangerous ingestion hazards in children.
    When more than one is swallowed, these high-powered magnets attract to each other across tissue, cutting off blood supply to the bowel and causing obstructions, tissue necrosis, sepsis and even death. The U.S. Consumer Product Safety Commission (CPSC) found them dangerous enough that in 2012 it halted the sale of high-powered magnet sets and instituted a recall, followed by a federal rule that effectively eliminated the sale of these products. This rule was overturned by the U.S. Court of Appeals in December 2016.
    A recent study led by researchers at the Center for Injury Research and Policy, Emergency Medicine, and the Central Ohio Poison Center at Nationwide Children’s Hospital along with the Children’s Hospital at Montefiore (CHAM) analyzed calls to U.S. poison centers for magnet exposures in children age 19 years and younger from 2008 through October 2019 to determine the impact of the CPSC rule and the subsequent lift of the ban.
    The study, recently published in The Journal of Pediatrics, found that the average number of cases per year decreased 33% from 2012 to 2017 after high-powered magnet sets were removed from the market. When the ban was lifted and high-powered magnet sets re-entered the market, the average number of cases per year increased 444%. There was also a 355% increase in the number of cases that were serious enough to require treatment in a hospital. Cases from 2018 and 2019 increased across all age groups and accounted for 39% of magnet cases since 2008.
    “Regulations on these products were effective, and the dramatic increase in the number of high-powered magnet related injuries since the ban was lifted — even compared to pre-ban numbers — is alarming,” said Leah Middelberg, MD, lead author of the study and emergency medicine physician at Nationwide Children’s. “Parents don’t always know if their child swallowed something or what they swallowed — they just know their child is uncomfortable — so when children are brought in, an exam and sometimes x-rays are needed to determine what’s happening. Because damage caused by magnets can be serious, it’s so important to keep these kinds of magnets out of reach of children, and ideally out of the home.”
    The study found a total of 5,738 magnet exposures during the nearly 12-year study period. Most calls were for children who were male (55%), younger than six years (62%), with an unintentional injury (84%). Approximately one-half (48.4%) of patients were treated at a hospital or other healthcare facility while 48.7% were managed at a non-healthcare site such as a home, workplace, or school. Children in older age groups were more likely than younger children to be admitted to the hospital.
    “While many cases occur among young children, parents need to be aware that high-powered magnets are a risk for teenagers as well,” said Bryan Rudolph, MD, MPH, co-senior author of this study and gastroenterologist at CHAM. “Serious injuries can happen when teens use these products to mimic tongue or lip piercings. If there are children or teens who live in or frequently visit your home, don’t buy these products. If you have high-powered magnets in your home, throw them away. The risk of serious injury is too great.”
    “Significant increases in magnet injuries correspond to time periods in which high-powered magnet sets were sold, including a 444% increase since 2018,” said Middelberg. “These data reflect the urgent need to protect children by preventive measures and government action,” Rudolph emphasized. Both Middelberg and Rudolph support the federal legislation, “Magnet Injury Prevention Act,” which would limit the strength and/or size of magnets sold as part of a set, as well as reinstatement of a CPSC federal safety standard that would effectively restrict the sale of these magnet products in the U.S.

    Story Source:
    Materials provided by Nationwide Children’s Hospital. Note: Content may be edited for style and length.

  • Engineers combine AI and wearable cameras in self-walking robotic exoskeletons

    Robotics researchers are developing exoskeletons and prosthetic legs capable of thinking and making control decisions on their own using sophisticated artificial intelligence (AI) technology.
    The system combines computer vision and deep-learning AI to mimic how able-bodied people walk by seeing their surroundings and adjusting their movements.
    “We’re giving robotic exoskeletons vision so they can control themselves,” said Brokoslaw Laschowski, a PhD candidate in systems design engineering who leads a University of Waterloo research project called ExoNet.
    Exoskeleton legs operated by motors already exist, but users must manually control them via smartphone applications or joysticks.
    “That can be inconvenient and cognitively demanding,” said Laschowski, also a student member of the Waterloo Artificial Intelligence Institute (Waterloo.ai). “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”
    To address that limitation, the researchers fitted exoskeleton users with wearable cameras and are now optimizing AI computer software to process the video feed to accurately recognize stairs, doors and other features of the surrounding environment.
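    The environment-recognition step can be illustrated with a generic frame classifier (a sketch assuming PyTorch/torchvision and a hypothetical folder of wearable-camera frames sorted into class directories such as stairs/, door/ and level_ground/; it is not the ExoNet code): fine-tune a pretrained CNN to label the upcoming terrain in each frame.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # Hypothetical layout: frames/<class_name>/<frame>.jpg
    dataset = datasets.ImageFolder("frames", transform=transform)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))   # one logit per terrain class

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)       # fine-tune only the new head
    criterion = nn.CrossEntropyLoss()

    model.train()
    for frames, labels in loader:                                      # one pass over the labelled frames
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
    ```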
    The next phase of the ExoNet research project will involve sending instructions to motors so that robotic exoskeletons can climb stairs, avoid obstacles or take other appropriate actions based on analysis of the user’s current movement and the upcoming terrain.
    “Our control approach wouldn’t necessarily require human thought,” said Laschowski, who is supervised by engineering professor John McPhee, the Canada Research Chair in Biomechatronic System Dynamics. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons and prosthetic legs that walk for themselves.”
    The researchers are also working to improve the energy efficiency of motors for robotic exoskeletons and prostheses by using human motion to self-charge the batteries.

    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • Computing clean water

    Water is perhaps Earth’s most critical natural resource. Given increasing demand and increasingly stretched water resources, scientists are pursuing more innovative ways to use and reuse existing water, as well as to design new materials to improve water purification methods. Synthetically created semi-permeable polymer membranes used for contaminant solute removal can provide a level of advanced treatment and improve the energy efficiency of treating water; however, existing knowledge gaps are limiting transformative advances in membrane technology. One basic problem is learning how the affinity, or the attraction, between solutes and membrane surfaces impacts many aspects of the water purification process.
    “Fouling — where solutes stick to and gunk up membranes — significantly reduces performance and is a major obstacle in designing membranes to treat produced water,” said M. Scott Shell, a chemical engineering professor at UC Santa Barbara, who conducts computational simulations of soft materials and biomaterials. “If we can fundamentally understand how solute stickiness is affected by the chemical composition of membrane surfaces, including possible patterning of functional groups on these surfaces, then we can begin to design next-generation, fouling-resistant membranes to repel a wide range of solute types.”
    Now, in a paper published in the Proceedings of the National Academy of Sciences (PNAS), Shell and lead author Jacob Monroe, a recent Ph.D. graduate of the department and a former member of Shell’s research group, explain the relevance of macroscopic characterizations of solute-to-surface affinity.
    “Solute-surface interactions in water determine the behavior of a huge range of physical phenomena and technologies, but are particularly important in water separation and purification, where often many distinct types of solutes need to be removed or captured,” said Monroe, now a postdoctoral researcher at the National Institute of Standards and Technology (NIST). “This work tackles the grand challenge of understanding how to design next-generation membranes that can handle huge yearly volumes of highly contaminated water sources, like those produced in oilfield operations, where the concentration of solutes is high and their chemistries quite diverse.”
    Solutes are frequently characterized as spanning a range from hydrophilic, which can be thought of as water-liking and dissolving easily in water, to hydrophobic, or water-disliking and preferring to separate from water, like oil. Surfaces span the same range; for example, water beads up on hydrophobic surfaces and spreads out on hydrophilic surfaces. Hydrophilic solutes like to stick to hydrophilic surfaces, and hydrophobic solutes stick to hydrophobic surfaces. Here, the researchers corroborated the expectation that “like sticks to like,” but also discovered, surprisingly, that the complete picture is more complex.
    “Among the wide range of chemistries that we considered, we found that hydrophilic solutes also like hydrophobic surfaces, and that hydrophobic solutes also like hydrophilic surfaces, though these attractions are weaker than those of like to like,” explained Monroe, referencing the eight solutes the group tested, ranging from ammonia and boric acid, to isopropanol and methane. The group selected small-molecule solutes typically found in produced waters to provide a fundamental perspective on solute-surface affinity.

    The computational research group developed an algorithm to repattern surfaces by rearranging surface chemical groups in order to minimize or maximize the affinity of a given solute to the surface, or alternatively, to maximize the surface affinity of one solute relative to that of another. The approach relied on a genetic algorithm that “evolved” surface patterns in a way similar to natural selection, optimizing them toward a particular function goal.
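    The repatterning idea can be sketched with a toy genetic algorithm (all specifics below, including the stand-in affinity score, are illustrative assumptions; in the study the score comes from molecular simulations): encode a surface as a grid of two chemical group types, then repeatedly select the best-scoring patterns and generate new ones by rearranging groups, keeping the overall composition fixed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 8   # the surface is an N x N grid of two chemical group types (0 and 1)

    def affinity(pattern):
        """Hypothetical stand-in score that rewards clustering of type-1 groups;
        the real objective would be a solute-surface affinity from simulation."""
        neighbours = np.roll(pattern, 1, axis=0) + np.roll(pattern, 1, axis=1)
        return float((pattern * neighbours).sum())

    def mutate(pattern, n_swaps=2):
        """Rearrange by swapping random sites, so the number of each group is unchanged."""
        new = pattern.copy().ravel()
        for _ in range(n_swaps):
            i, j = rng.choice(new.size, 2, replace=False)
            new[i], new[j] = new[j], new[i]
        return new.reshape(pattern.shape)

    def evolve(pop_size=40, generations=300):
        base = (np.arange(N * N) < N * N // 2).astype(int)              # 50/50 mix of the two groups
        population = [rng.permutation(base).reshape(N, N) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=affinity, reverse=True)
            survivors = population[: pop_size // 2]                     # selection
            offspring = [mutate(survivors[rng.integers(len(survivors))])
                         for _ in range(pop_size - len(survivors))]
            population = survivors + offspring
        return max(population, key=affinity)

    best = evolve()
    print("best stand-in affinity score:", affinity(best))
    ```

    To maximize the affinity of one solute relative to another, as the researchers also did, the score would simply be replaced by the difference between two such affinity terms.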
    Through simulations, the team discovered that surface affinity was poorly correlated to conventional methods of solute hydrophobicity, such as how soluble a solute is in water. Instead, they found a stronger connection between surface affinity and the way that water molecules near a surface or near a solute change their structures in response. In some cases, these neighboring waters were forced to adopt structures that were unfavorable; by moving closer to hydrophobic surfaces, solutes could then reduce the number of such unfavorable water molecules, providing an overall driving force for affinity.
    “The missing ingredient was understanding how the water molecules near a surface are structured and move around it,” said Monroe. “In particular, water structural fluctuations are enhanced near hydrophobic surfaces, compared to bulk water, or the water far away from the surface. We found that fluctuations drove the stickiness of every small solute type that we tested.”
    The finding is significant because it shows that in designing new surfaces, researchers should focus on the response of water molecules around them and avoid being guided by conventional hydrophobicity metrics.
    Based on their findings, Monroe and Shell say that surfaces composed of different types of molecular chemistries may be the key to achieving multiple performance goals, such as preventing an assortment of solutes from fouling a membrane.
    “Surfaces with multiple types of chemical groups offer great potential. We showed that not only the presence of different surface groups, but their arrangement or pattern, influence solute-surface affinity,” Monroe said. “Just by rearranging the spatial pattern, it becomes possible to significantly increase or decrease the surface affinity of a given solute, without changing how many surface groups are present.”
    According to the team, their findings show that computational methods can contribute in significant ways to next-generation membrane systems for sustainable water treatment.
    “This work provided detailed insight into the molecular-scale interactions that control solute-surface affinity,” said Shell, the John E. Myers Founder’s Chair in Chemical Engineering. “Moreover, it shows that surface patterning offers a powerful design strategy in engineering membranes that are resistant to fouling by a variety of contaminants and that can precisely control how each solute type is separated out. As a result, it offers molecular design rules and targets for next-generation membrane systems capable of purifying highly contaminated waters in an energy-efficient manner.”
    Most of the surfaces examined were model systems, simplified to facilitate analysis and understanding. The researchers say that the natural next step will be to examine increasingly complex and realistic surfaces that more closely mimic actual membranes used in water treatment. Another important step to bring the modeling closer to membrane design will be to move beyond understanding merely how sticky a membrane is for a solute and toward computing the rates at which solutes move through membranes.

  • A computational guide to lead cells down desired differentiation paths

    There is a great need to generate various types of cells for use in new therapies to replace tissues that are lost due to disease or injuries, or for studies outside the human body to improve our understanding of how organs and tissues function in health and disease. Many of these efforts start with human induced pluripotent stem cells (iPSCs) that, in theory, have the capacity to differentiate into virtually any cell type in the right culture conditions. The 2012 Nobel Prize awarded to Shinya Yamanaka recognized his discovery of a strategy that can reprogram adult cells to become iPSCs by providing them with a defined set of gene-regulatory transcription factors (TFs). However, progressing from there to efficiently generating a wide range of cell types with tissue-specific differentiated functions for biomedical applications has remained a challenge.
    While the expression of cell type-specific TFs in iPSCs is the most often used cellular conversion technology, the efficiencies of guiding iPSC through different “lineage stages” to the fully functional differentiated state of, for example, a specific heart, brain, or immune cell currently are low, mainly because the most effective TF combinations cannot be easily pinpointed. TFs that instruct cells to pass through a specific cell differentiation process bind to regulatory regions of genes to control their expression in the genome. However, multiple TFs must function in the context of larger gene regulatory networks (GRNs) to drive the progression of cells through their lineages until the final differentiated state is reached.
    Now, a collaborative effort led by George Church, Ph.D. at Harvard’s Wyss Institute for Biologically Inspired Engineering and Harvard Medical School (HMS), and Antonio del Sol, Ph.D., who leads Computational Biology groups at CIC bioGUNE, a member of the Basque Research and Technology Alliance, in Spain, and at the Luxembourg Centre for Systems Biomedicine (LCSB, University of Luxembourg), has developed a computer-guided design tool called IRENE, which significantly helps increase the efficiency of cell conversions by predicting highly effective combinations of cell type-specific TFs. By combining IRENE with a genomic integration system that allows robust expression of selected TFs in iPSCs, the team demonstrated that their approach can generate higher numbers of natural killer cells, used in immune therapies, and melanocytes, used in skin grafts, than other methods. In a scientific first, they also generated breast mammary epithelial cells, whose availability would be highly desirable for the repopulation of surgically removed mammary tissue. The study is published in Nature Communications.
    “In our group, the study naturally built on the ‘TFome’ project, which assembled a comprehensive library containing 1,564 human TFs as a powerful resource for the identification of TF combinations with enhanced abilities to reprogram human iPSCs to different target cell types,” said Wyss Core Faculty member Church. “The efficacy of this computational algorithm will boost a number of our tissue engineering efforts at the Wyss Institute and HMS, and as an open resource can do the same for many researchers in this burgeoning field.” Church is the lead of the Wyss Institute’s Synthetic Biology platform, and Professor of Genetics at HMS and of Health Sciences and Technology at Harvard and MIT.
    Tooling up
    Several computational tools have been developed to predict combinations of TFs for specific cell conversions, but almost exclusively these are based on the analysis of gene expression patterns in many cell types. Missing in these approaches was a view of the epigenetic landscape, the organization of the genome itself around genes and on the scale of entire chromosome sections which goes far beyond the sequence of the naked genomic DNA.

    “The changing epigenetic landscape in differentiating cells predicts areas in the genome undergoing physical changes that are critical for key TFs to gain access to their target genes. Analyzing these changes can inform more accurately about GRNs and their participating TFs that drive specific cell conversions,” said co-first author Evan Appleton, Ph.D. Appleton is a Postdoctoral Fellow in Church’s group who joined forces with Sascha Jung, Ph.D., from del Sol’s group in the new study. “Our collaborators in Spain had developed a computational approach that integrated those epigenetic changes with changes in gene expression to produce critical TF combinations as an output, which we were in an ideal position to test.”
    The team used their computational “Integrative gene Regulatory Network model” (IRENE) approach to reconstruct the GRN controlling iPSCs, and then focused on three target cell types with clinical relevance to experimentally validate TF combinations prioritized by IRENE. To deliver TF combinations into iPSCs, they deployed a transposon-based genomic integration system that can integrate multiple copies of a gene encoding a TF into the genome, which allows all factors of a combination to be stably expressed. Transposons are DNA elements that can jump from one position of the genome to another, or in this case from an exogenously provided piece of DNA into the genome.
    “Our research team composed of scientists from the LCSB and CIC bioGUNE has a long-standing expertise in developing computational methods to facilitate cell conversion. IRENE is an additional resource in our toolbox and one for which experimental validation has demonstrated it substantially increased efficiency in most tested cases,” said corresponding author del Sol, who is a Professor at LCSB and CIC bioGUNE. “Our fundamental research should ultimately benefit patients, and we are thrilled that IRENE could enhance the production of cell sources readily usable in therapeutic applications, such as cell transplantation and gene therapies.”
    Validating the computer-guided design tool in cells
    The researchers chose human mammary epithelial cells (HMECs) as a first cell type. Thus far HMECs are obtained from one tissue environment, dissociated, and transplanted to one where breast tissue has been resected. HMECs generated from patients’ cells, via an intermediate iPSC stage, could provide a means for less invasive and more effective breast tissue regeneration. One of the combinations that was generated by IRENE enabled the team to convert 14% of iPSCs into differentiated HMECs in iPSC-specific culture media, showing that the provided TFs were sufficient to drive the conversion without help from additional factors.
    The team then turned their attention to melanocytes, which can provide a source of cells in cellular grafts to replace damaged skin. This time they performed the cell conversion in melanocyte destination medium to show that the selected TFs work under culture conditions optimized for the desired cell type. Two out of four combinations were able to increase the efficiency of melanocyte conversion by 900% compared to iPSCs grown in destination medium without the TFs. Finally, the researchers compared combinations of TFs prioritized by IRENE to generate natural killer (NK) cells with a state-of-the-art differentiation method based on cell culture conditions alone. Immune NK cells have been found to improve the treatment of leukemia. The researchers’ approach outperformed the standard with five out of eight combinations increasing the differentiation of NK cells with critical markers by up to 250%.
    “This novel computational approach could greatly facilitate a range of cell and tissue engineering efforts at the Wyss Institute and many other sites around the world. This advance should greatly expand our toolbox as we strive to develop new approaches in regenerative medicine to improve patients’ lives,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and Boston Children’s Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.