More stories

  • Machine learning antibiotic prescriptions can help minimize resistance spread

    Antibiotics are a double-edged sword: on the one hand, antibiotics are essential to curing bacterial infections. On the other, their use promotes the appearance and proliferation of antibiotic-resistant bacteria. Using genomic sequencing techniques and machine learning analysis of patient records, researchers have developed an antibiotic prescribing algorithm that cuts the risk of antibiotic resistance emerging by half.
    The paper, published today in Science, is a collaboration between the research group of Professor Roy Kishony of the Technion-Israel Institute of Technology Faculty of Biology and the Henry and Marilyn Taub Faculty of Computer Science, and Professors Varda Shalev, Gabriel Chodick, and Jacob Kuint at the Maccabi KSM Research and Innovation Center, headed by Dr. Tal Patalon. Focusing on two very common bacterial infections, urinary tract infections and wound infections, the paper describes how each patient’s past infection history can be used to choose the antibiotic that best reduces the chances of antibiotic resistance emerging.
    Clinical treatment of infections focuses on correctly matching an antibiotic to the resistance profile of the pathogen, but even correctly matched treatments can fail, as resistance can emerge during treatment itself. “We wanted to understand how antibiotic resistance emerges during treatment and find ways to better tailor antibiotic treatment for each patient to not only correctly match the patient’s current infection susceptibility, but also to minimize their risk of infection recurrence and gain of resistance to treatment,” said Prof. Kishony.
    The key to the success of the approach was understanding that the emergence of antibiotic resistance could be predicted in individual patients’ infections. Bacteria can evolve by randomly acquiring mutations that make them resistant, but the randomness of the process makes it hard to predict and to avoid. However, the researchers discovered that in most patients’ infections, resistance was not acquired by random mutations. Instead, resistance emerged due to reinfection by existing resistant bacteria from the patient’s own microbiome. The researchers turned these findings into an advantage: they proposed matching an antibiotic not only to the susceptibility of the bacteria causing the patient’s current infection, but also to the bacteria in their microbiome that could replace it.
    “We found that the antibiotic susceptibility of the patient’s past infections could be used to predict their risk of returning with a resistant infection following antibiotic treatment,” explained Dr. Mathew Stracy, the first author of the paper. “Using this data, together with the patient’s demographics like age and gender, allowed us to develop the algorithm.”
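    The published algorithm itself is not reproduced here, but the general idea, scoring each susceptibility-matched antibiotic by the patient’s infection history and demographics and choosing the lowest-risk option, can be sketched in toy form. Every field name, weight, and formula below is invented for illustration; the real model was fit to clinical records.

    ```python
    # Toy sketch of a history-aware prescribing rule (NOT the authors' algorithm).
    # All weights and the risk formula are hypothetical, for illustration only.

    def recurrence_risk(antibiotic, past_infections, age, gender):
        """Estimate the risk of a resistant recurrence after treatment."""
        # Fraction of the patient's past isolates already resistant to this drug.
        if past_infections:
            past_resistant = sum(
                1 for isolate in past_infections if antibiotic in isolate["resistant_to"]
            ) / len(past_infections)
        else:
            past_resistant = 0.0
        baseline = 0.05                          # hypothetical population-level risk
        demographic = 0.01 if age > 65 else 0.0  # hypothetical age adjustment
        return baseline + 0.6 * past_resistant + demographic

    def recommend(candidates, past_infections, age, gender):
        """Among susceptibility-matched drugs, pick the lowest predicted risk."""
        return min(candidates, key=lambda ab: recurrence_risk(ab, past_infections, age, gender))

    history = [
        {"resistant_to": {"ciprofloxacin"}},  # a past isolate resistant to cipro
        {"resistant_to": set()},
    ]
    print(recommend(["ciprofloxacin", "nitrofurantoin"], history, age=70, gender="F"))
    # picks the drug the patient's past isolates were least resistant to
    ```

    The point of the sketch is the ranking step: a drug that matches the current isolate can still score badly if the patient’s own microbiome has a history of resistance to it.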
    The study was supported by the National Institutes of Health (NIH), the Israel Science Foundation within the Israel Precision Medicine Partnership program, the Ernest and Bonnie Beutler Research Program of Excellence in Genomic Medicine, the European Research Council (ERC), the Wellcome Trust, and the D. Dan & Betty Kahn Foundation.
    “I hope to see the algorithm applied at the point of care, providing doctors with better tools to personalize antibiotic treatments to improve treatment and minimize the spread of resistance,” said Dr. Tal Patalon.
    Story Source:
    Materials provided by Technion-Israel Institute of Technology. Note: Content may be edited for style and length.

  • Faster, more efficient living cell separation achieved with new microfluidic chip

    A Japanese research team has created a new way to sort living cells suspended in fluid, using an all-in-one operation in a lab-on-chip that requires only 30 minutes for the entire separation process. The device eliminates the need for labor-intensive sample pretreatment and chemical tagging techniques while preserving the original structure of the cells. The team constructed a prototype microfluidic chip that uses electric fields to gently coax cells in one direction or another via dielectrophoresis, the movement of neutral particles subjected to an external non-uniform electric field.
    The Hiroshima University team, led by Professor Fumito Maruyama of the Office of Academic Research and Industry-Academia-Government and Community Collaboration, published its findings on January 14 in iScience.
    Dielectrophoresis induces the motion of suspended particles, such as cells, by applying a non-uniform electric field. Since the strength of dielectrophoretic force depends on the size of the cell and its dielectric properties, this technique can be used to selectively separate cells based on these differences. In this paper, Maruyama and his team introduced the separation of two types of eukaryotic cells with the developed microfluidic chip that used dielectrophoresis.
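    The size dependence mentioned above follows from the standard point-dipole expression for the time-averaged dielectrophoretic force on a spherical particle, which scales with the cube of the radius. The sketch below uses that textbook formula with illustrative values; none of the numbers come from the paper.

    ```python
    import math

    # Standard point-dipole DEP force on a sphere (lossless approximation).
    # Illustrative parameter values only, not device specs from the study.

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def clausius_mossotti(eps_p, eps_m):
        """Real part of the Clausius-Mossotti factor for a lossless sphere."""
        return (eps_p - eps_m) / (eps_p + 2 * eps_m)

    def dep_force(radius_m, eps_m_rel, cm_factor, grad_E2):
        """F = 2*pi*r^3 * eps_m * Re[K] * grad(|E|^2), in newtons."""
        return 2 * math.pi * radius_m**3 * eps_m_rel * EPS0 * cm_factor * grad_E2

    # A cell (low effective permittivity) in water: K < 0, so the cell is pushed
    # away from field maxima (negative DEP).
    K = clausius_mossotti(eps_p=2.5, eps_m=78.0)
    F = dep_force(radius_m=5e-6, eps_m_rel=78.0, cm_factor=K, grad_E2=1e13)
    ```

    Because the force grows as r³, two cell types of different size experience markedly different deflections in the same field gradient, which is what makes label-free sorting in a microchannel possible.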
    Dielectrophoresis could be particularly useful in separating living cells for medical research applications and the medical industry. Its most significant advantage over other methods is its simplicity.
    “In conventional cell separation methods such as commercially available cell sorters, cells are generally labeled with markers such as fluorescent substances or antibodies, and cells cannot be maintained in their original physical state,” Maruyama said. “Therefore, separating differently sized cells using microfluidic channels and dielectrophoresis has been studied as a potentially great method for separating cells without labeling.”
    Maruyama noted, “Dielectrophoresis cannot entirely replace existing separation methods such as centrifuges and polyester mesh filters. However, it opens the door to faster cell separation that may be useful in certain research and industrial areas, such as the preparation of cells for therapeutics; platelets and cancer-fighting T-cells come to mind.”
    Other common medical industry uses of cell separation include removing unwanted bacteria cells from donated blood and separating stem cells and their derivatives, which are crucial for developing stem cell therapies.
    “If enrichment of a certain cell type from a solution of two or more cell types is needed, our dielectrophoresis-based system is an excellent option as it can simply enable a continuous pass-through of a large number of cells. The enriched cells are then easily collected from an outlet port,” Maruyama added.
    The process outlined by Maruyama and his colleagues was all-in-one.
    “The device eliminated sample pretreatment and established cell separation by all-in-one operation in a lab-on-chip, requiring only a small volume (0.5-1 mL) to enumerate the target cells and completing the entire separation process within 30 minutes. Such a rapid cell separation technique is in high demand by many researchers to promptly characterize the target cells,” he said.
    “Future research may examine refinements, allowing us to use dielectrophoresis to target certain cell types with greater specificity.”
    Story Source:
    Materials provided by Hiroshima University. Note: Content may be edited for style and length.

  • New simulations refine axion mass, refocusing dark matter search

    Physicists searching — unsuccessfully — for today’s most favored candidate for dark matter, the axion, have been looking in the wrong place, according to a new supercomputer simulation of how axions were produced shortly after the Big Bang 13.8 billion years ago.
    Using new calculational techniques and one of the world’s largest computers, Benjamin Safdi, assistant professor of physics at the University of California, Berkeley; Malte Buschmann, a postdoctoral research associate at Princeton University; and colleagues at MIT and Lawrence Berkeley National Laboratory simulated the era when axions would have been produced, approximately a billionth of a billionth of a billionth of a second after the universe came into existence and after the epoch of cosmic inflation.
    The simulation at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) found the axion’s mass to be more than twice as big as theorists and experimenters have thought: between 40 and 180 microelectron volts (micro-eV, or μeV), or about one 10-billionth the mass of the electron. There are indications, Safdi said, that the mass is close to 65 μeV. Since physicists began looking for the axion 40 years ago, estimates of the mass have ranged widely, from a few μeV to 500 μeV.
    “We provide over a thousandfold improvement in the dynamic range of our axion simulations relative to prior work and clear up a 40-year-old question regarding the axion mass and axion cosmology,” Safdi said.
    The more definitive mass means that the most common type of experiment to detect these elusive particles — a microwave resonance chamber containing a strong magnetic field, in which scientists hope to snag the conversion of an axion into a faint electromagnetic wave — won’t be able to detect them, no matter how much the experiment is tweaked. The chamber would have to be smaller than a few centimeters on a side to detect the higher-frequency wave from a higher-mass axion, Safdi said, and that volume would be too small to capture enough axions for the signal to rise above the noise.
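    The scale argument can be checked with standard conversions: an axion of mass m converts to a photon of frequency f = mc²/h, and a resonant cavity is roughly half that photon’s wavelength across. The numbers below are back-of-envelope physics, not figures from the paper.

    ```python
    # Back-of-envelope check: axion mass -> photon frequency -> cavity scale.
    # Standard constants; illustrative, not values quoted by the researchers.

    H_EV_S = 4.135667696e-15  # Planck constant in eV*s
    C = 2.998e8               # speed of light, m/s

    def axion_photon_freq_hz(mass_ev):
        """Frequency of the photon an axion of this mass converts into: f = E/h."""
        return mass_ev / H_EV_S

    def cavity_scale_m(mass_ev):
        """Rough cavity size: half the photon wavelength, lambda/2 = c/(2f)."""
        return C / axion_photon_freq_hz(mass_ev) / 2

    f = axion_photon_freq_hz(65e-6)  # ~1.6e10 Hz, i.e. about 16 GHz
    d = cavity_scale_m(65e-6)        # ~1 cm
    ```

    A 65 μeV axion corresponds to a roughly 16 GHz microwave, whose half-wavelength is about a centimeter, which is why a resonant chamber for this mass range ends up too small to collect a detectable signal.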
    “Our work provides the most precise estimate to date of the axion mass and points to a specific range of masses that is not currently being explored in the laboratory,” he said. “I really do think it makes sense to focus experimental efforts on 40 to 180 μeV axion masses, and there’s a lot of work gearing up to go after that mass range.”
    One newer type of experiment, a plasma haloscope, which looks for axion excitations in a metamaterial — a solid-state plasma — should be sensitive to an axion particle of this mass, and could potentially detect one.

  • Researchers develop design scheme for fiber reinforced composites

    Fiber reinforced composites (FRCs), which are engineering materials comprising stiff fibers embedded in a soft matrix, typically have a constant fiber radius that limits their performance. Now, researchers from the Gwangju Institute of Science and Technology in Korea have developed a scheme for AI-assisted design of FRC structures with spatially varying optimal fiber sizes, making FRCs lighter without compromising their mechanical strength and stiffness, which will reduce the energy consumption of cars, aircraft, and other vehicles.
    Fiber reinforced composites (FRCs) are a class of sophisticated engineering materials composed of stiff fibers embedded in a soft matrix. When properly designed, FRCs provide outstanding structural strength and stiffness for their weight, making them an attractive option for aircraft, spacecraft, and other vehicles where a lightweight structure is essential.
    Despite their usefulness, however, FRCs are limited by the fact that they are designed using fibers with a constant radius and a spatially-fixed fiber density, which compromises the trade-off between weight and mechanical strength. Simply put, currently available FRCs are, in fact, heavier than necessary to meet the application standards.
    To tackle this issue, an international research team led by Professor Jaewook Lee of the Gwangju Institute of Science and Technology in Korea recently developed a new approach for the inverse design of FRCs with spatially varying fiber size and orientation, also known as “functionally graded composites.” The proposed method is based on “multiscale topology optimization,” which automatically finds the best functionally graded composite structure for a given set of design parameters and constraints.
    “Topology optimization is an AI-based design technique that relies on computer simulation to generate an optimal structural shape instead of on the designer’s intuition and experience,” explains Prof. Lee. “On the other hand, a multiscale approach is a numerical method that combines the results of analyses conducted at different scales to derive structural characteristics.” Unlike similar existing approaches that are limited to two-dimensional functionally graded composites, the proposed methodology can simultaneously determine the optimal three-dimensional composite structure alongside its microscale fiber densities and fiber orientations.
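    The microscale side of such an analysis can be illustrated with the classic rule-of-mixtures bounds, which show how the local fiber volume fraction that an optimizer tunes point by point changes the local stiffness. The material values below are generic textbook numbers for carbon fiber and epoxy, not figures from the paper.

    ```python
    # Rule-of-mixtures sketch of microscale homogenization for a fiber composite.
    # Generic textbook material values; not parameters from the study.

    def longitudinal_modulus(vf, e_fiber, e_matrix):
        """Voigt bound: effective stiffness along the fiber direction."""
        return vf * e_fiber + (1 - vf) * e_matrix

    def transverse_modulus(vf, e_fiber, e_matrix):
        """Reuss bound: effective stiffness across the fiber direction."""
        return 1.0 / (vf / e_fiber + (1 - vf) / e_matrix)

    E_f, E_m = 230e9, 3.5e9      # carbon fiber vs. epoxy, in Pa (typical values)
    for vf in (0.2, 0.4, 0.6):   # an optimizer varies vf (and orientation) pointwise
        print(vf,
              longitudinal_modulus(vf, E_f, E_m),
              transverse_modulus(vf, E_f, E_m))
    ```

    The strong anisotropy these two bounds reveal (stiff along the fibers, compliant across them) is exactly why jointly optimizing fiber density and orientation at every point pays off in weight savings.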
    The team demonstrated the potential of their method through several computer-assisted experiments where various functionally graded composite designs with constant or varying fiber sizes were compared. The experiments included designs for a bell crank, a displacement inverter mechanism, and a simple support beam. As expected, the results showed improved performances in the designs with locally tailored fiber sizes. This paper was made available online on October 9, 2021, and published in Volume 279 of Composite Structures on January 1, 2022.
    Many applications in vehicles, aircraft, and robotics benefit from lightweight structures, and the proposed approach can now assist engineers to this end. However, the benefits can extend well beyond the target applications themselves. As Prof. Lee explains: “Our methodology could help develop more energy-efficient vehicles and machinery by weight reduction, which would reduce their energy consumption and, in turn, contribute towards achieving carbon neutrality.”
    Story Source:
    Materials provided by GIST (Gwangju Institute of Science and Technology). Note: Content may be edited for style and length.

  • Transparent ultrasound chip improves cell stimulation and imaging

    Ultrasound scans — best known for monitoring pregnancies or imaging organs — can also be used to stimulate cells and direct cell function. A team of Penn State researchers has developed an easier, more effective way to harness the technology for biomedical applications.
    The team created a transparent, biocompatible ultrasound transducer chip that resembles a microscope glass slide and can be inserted into any optical microscope for easy viewing. Cells can be cultured and stimulated directly on top of the transducer chip and the cells’ resulting changes can be imaged with optical microscopy techniques.
    Published in the Royal Society of Chemistry’s journal Lab on a Chip, the paper was selected as the cover article for the December 2021 issue. Future applications of the technology could impact stem cell, cancer and neuroscience research.
    “In the conventional ultrasound stimulation experiments, a cell culture dish is placed in a water bath, and a bulky ultrasound transducer directs the ultrasound waves to the cells through the water medium,” said Sri-Rajasekhar “Raj” Kothapalli, principal investigator and assistant professor of biomedical engineering at Penn State. “This was a complex setup that didn’t provide reproducible results: The results that one group saw another did not, even while using the same parameters, because there are several things that could affect the cells’ survival and stimulation while they are in water, as well as how we visualize them.”
    Kothapalli and his collaborators miniaturized the ultrasound stimulation setup by creating a transparent transducer platform made of a piezoelectric lithium niobate material. Piezoelectric materials generate mechanical energy when electric voltage is applied. The chip’s biocompatible surface allows the cells to be cultured directly on the transducer and used for repeated stimulation experiments over several weeks.
    When connected to a power supply, the transducer emits ultrasound waves, which pulse the cells and trigger ion influx and outflux.
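    As a rough illustration of how a plate transducer’s operating frequency is set, a piezoelectric plate driven in thickness mode resonates near f = v/(2t), where v is the sound speed in the material and t its thickness. The sound speed below is an assumed textbook value for lithium niobate, and the thickness is hypothetical; the article does not give the device’s actual dimensions.

    ```python
    # Thickness-mode resonance sketch for a piezoelectric plate: f ~ v / (2 * t).
    # V_LN is an assumed textbook longitudinal sound speed for lithium niobate;
    # the thickness used below is hypothetical, not the chip's real spec.

    V_LN = 7340.0  # m/s (assumed value for LiNbO3)

    def resonance_hz(thickness_m, v=V_LN):
        """Fundamental thickness-mode resonance: half a wavelength fits the plate."""
        return v / (2 * thickness_m)

    f = resonance_hz(500e-6)  # a 0.5 mm plate resonates in the low-MHz range
    ```

    The relation also shows why thinning the plate raises the operating frequency, a design knob for matching the ultrasound wavelength to the cells being stimulated.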

  • A new platform for customizable quantum devices

    A ground-up approach to qubit design leads to a new framework for creating versatile, highly tailored quantum devices.
    Advances in quantum science have the potential to revolutionize the way we live. Quantum computers hold promise for solving problems that are intractable today, and we may one day use quantum networks as hackerproof information highways.
    The realization of such forward-looking technologies hinges in large part on the qubit — the fundamental component of quantum systems. A major challenge of qubit research is designing them to be customizable, tailored to work with all kinds of sensing, communication and computational devices.
    Scientists have taken a major step in the development of tailored qubits. In a paper published in the Journal of the American Chemical Society, the team, which includes researchers at MIT, the University of Chicago and Columbia University, demonstrates how a particular molecular family of qubits can be finely tuned over a broad spectrum, like turning a sensitive dial on a wideband radio.
    The team also outlines the underlying design features that enable exquisite control over these quantum bits.
    “This is a new platform for qubit design. We can use our predictable, controllable, tunable design strategy to create a new quantum system,” said Danna Freedman, MIT professor of chemistry and a co-author of the study. “We’ve demonstrated the broad range of tunability over which these design principles work.”
    The work is partially supported by Q-NEXT, a U.S. Department of Energy (DOE) National Quantum Information Science Research Center led by Argonne National Laboratory.

  • More sensitive X-ray imaging

    Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.
    Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.
    Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.
    The findings are described in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.
    While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.
    To make what they call “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

  • Largest ever human family tree: 27 million ancestors

    Researchers from the University of Oxford’s Big Data Institute have taken a major step towards mapping the entirety of genetic relationships among humans: a single genealogy that traces the ancestry of all of us. The study has been published today in Science.
    The past two decades have seen extraordinary advancements in human genetic research, generating genomic data for hundreds of thousands of individuals, including from thousands of prehistoric people. This raises the exciting possibility of tracing the origins of human genetic diversity to produce a complete map of how individuals across the world are related to each other.
    Until now, the main challenges to this vision were working out a way to combine genome sequences from many different databases and developing algorithms to handle data of this size. However, a new method published today by researchers from the University of Oxford’s Big Data Institute can easily combine data from multiple sources and scale to accommodate millions of genome sequences.
    Dr Yan Wong, an evolutionary geneticist at the Big Data Institute, and one of the principal authors, explained: “We have basically built a huge family tree, a genealogy for all of humanity that models as exactly as we can the history that generated all the genetic variation we find in humans today. This genealogy allows us to see how every person’s genetic sequence relates to every other, along all the points of the genome.”
    Since individual genomic regions are only inherited from one parent, either the mother or the father, the ancestry of each point on the genome can be thought of as a tree. The set of trees, known as a “tree sequence” or “ancestral recombination graph,” links genetic regions back through time to ancestors where the genetic variation first appeared.
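    That tree-per-interval picture can be made concrete with a toy data structure: each genomic interval carries its own child-to-parent mapping, and a recombination event switches the tree from one interval to the next. This is a deliberately simplified illustration of the concept, not the authors’ software or data.

    ```python
    # Toy "tree sequence": one ancestry tree per genomic interval, stored as a
    # child -> parent mapping. All node names and positions are invented.

    tree_sequence = [
        # (interval_start, interval_end, {child: parent})
        (0, 500,    {"A": "anc1", "B": "anc1", "C": "anc2",
                     "anc1": "root", "anc2": "root"}),
        # a recombination at position 500 changes the tree for the rest of the genome
        (500, 1000, {"A": "anc1", "B": "anc2", "C": "anc2",
                     "anc1": "root", "anc2": "root"}),
    ]

    def ancestors(sample, position):
        """Trace a sample's lineage at one genomic position back to the root."""
        for start, end, parents in tree_sequence:
            if start <= position < end:
                lineage, node = [], sample
                while node in parents:
                    node = parents[node]
                    lineage.append(node)
                return lineage
        raise ValueError("position outside genome")

    print(ancestors("B", 100))   # ['anc1', 'root']
    print(ancestors("B", 700))   # ['anc2', 'root']
    ```

    Storing only the differences between adjacent trees, rather than a full tree per position, is what makes the tree-sequence encoding compact enough to scale to millions of genomes.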
    Lead author Dr Anthony Wilder Wohns, who undertook the research as part of his PhD at the Big Data Institute and is now a postdoctoral researcher at the Broad Institute of MIT and Harvard, said: “Essentially, we are reconstructing the genomes of our ancestors and using them to form a vast network of relationships. We can then estimate when and where these ancestors lived. The power of our approach is that it makes very few assumptions about the underlying data and can also include both modern and ancient DNA samples.”
    The study integrated data on modern and ancient human genomes from eight different databases and included a total of 3,609 individual genome sequences from 215 populations. The ancient genomes included samples found across the world, with ages ranging from thousands of years to more than 100,000 years. The algorithms predicted where common ancestors must be present in the evolutionary trees to explain the patterns of genetic variation. The resulting network contained almost 27 million ancestors.
    After adding location data on these sample genomes, the authors used the network to estimate where the predicted common ancestors had lived. The results successfully recaptured key events in human evolutionary history, including the migration out of Africa.
    Although the genealogical map is already an extremely rich resource, the research team plans to make it even more comprehensive by continuing to incorporate genetic data as it becomes available. Because tree sequences store data in a highly efficient way, the dataset could easily accommodate millions of additional genomes.
    Dr Wong said: “This study is laying the groundwork for the next generation of DNA sequencing. As the quality of genome sequences from modern and ancient DNA samples improves, the trees will become even more accurate and we will eventually be able to generate a single, unified map that explains the descent of all the human genetic variation we see today.”
    Dr Wohns added: “While humans are the focus of this study, the method is valid for most living things, from orangutans to bacteria. It could be particularly beneficial in medical genetics, in separating out true associations between genetic regions and diseases from spurious connections arising from our shared ancestral history.”
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.