More stories

  • ‘Smart glove’ can boost hand mobility of stroke patients

    This month, a group of stroke survivors in B.C. will test a new technology designed to aid their recovery, and ultimately restore use of their limbs and hands.
    Participants will wear a groundbreaking “smart glove” capable of tracking their hand and finger movements during rehabilitation exercises supervised by Dr. Janice Eng, a leading stroke rehabilitation specialist and professor of medicine at UBC.
    The glove incorporates a sophisticated network of highly sensitive sensor yarns and pressure sensors that are woven into a comfortable stretchy fabric, enabling it to track, capture and wirelessly transmit even the smallest hand and finger movements.
    “With this glove, we can monitor patients’ hand and finger movements without the need for cameras. We can then analyze and fine-tune their exercise programs for the best possible results, even remotely,” says Dr. Eng.
    Precision in a wearable device
    UBC electrical and computer engineering professor Dr. Peyman Servati, PhD student Arvin Tashakori and their team at their startup, Texavie, created the smart glove for the collaboration on the stroke project. Dr. Servati highlighted a number of breakthroughs, described in a paper published last week in Nature Machine Intelligence.
    “This is the most accurate glove we know of that can track hand and finger movement and grasping force without requiring motion-capture cameras. Thanks to machine learning models we developed, the glove can accurately determine the angles of all finger joints and the wrist as they move. The technology is highly precise and fast, capable of detecting small stretches and pressures and predicting movement with at least 99-per-cent accuracy — matching the performance of costly motion-capture cameras.”
    Unlike other products in the market, the glove is wireless and comfortable, and can be easily washed after removing the battery. Dr. Servati and his team have developed advanced methods to manufacture the smart gloves and related apparel at a relatively low cost locally.
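    As a rough illustration of the sensor-to-joint-angle mapping described above, the sketch below trains a small regression model on synthetic data. The channel counts, network size and data are illustrative assumptions, not Texavie’s actual model or calibration pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal sketch: map sensor-yarn stretch/pressure readings to joint angles.
# Shapes, network size and synthetic data are assumptions for illustration.
rng = np.random.default_rng(42)
n_samples, n_sensors, n_joints = 2000, 16, 21    # hypothetical channel counts

sensor_readings = rng.normal(size=(n_samples, n_sensors))     # stretch/pressure signals
true_map = rng.normal(size=(n_sensors, n_joints))
joint_angles = np.tanh(sensor_readings @ true_map) * 45.0     # degrees, synthetic

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(sensor_readings[:1500], joint_angles[:1500])

pred = model.predict(sensor_readings[1500:])
rmse = np.sqrt(np.mean((pred - joint_angles[1500:]) ** 2))
print(f"held-out RMSE: {rmse:.1f} degrees")                   # per-joint angle error
```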

    Augmented reality and robotics
    Dr. Servati envisions a seamless transition of the glove into the consumer market with ongoing improvements, in collaboration with different industrial partners. The team also sees potential applications in virtual reality and augmented reality, animation and robotics.
    “Imagine being able to accurately capture hand movements and interactions with objects and have it automatically display on a screen. There are endless applications. You can type text without needing a physical keyboard, control a robot, or translate American Sign Language into written speech in real time, providing easier communication for individuals who are deaf or hard of hearing.”

  • Advancement in thermoelectricity could light up the Internet of Things

    Imagine stoplights and cars communicating with each other to optimize the flow of traffic. This isn’t science fiction — it’s the Internet of Things (IoT), i.e., objects that sense their surroundings and respond via the internet. As the global population rises and such technologies continue to develop, you might wonder — what will power this digital world of tomorrow?
    Wind, solar, yes. Something all around us might not immediately come to mind though — heat. Now, in a study recently published in Nature Communications, a multi-institutional research team including Osaka University has unveiled a breakthrough in clean energy: greatly improved thermoelectric conversion. One of its many potential applications? That’s right, the IoT.
    Large-scale, global integration of the IoT is limited by the lack of a suitable energy supply. Realistically, an energy supply for the IoT must be local and small scale. Miniaturization of thermoelectric conversion can help solve this energy-supply problem by applying the otherwise wasted heat from microelectronics as a source of electricity. However, for practical applications, the efficiency of current thermoelectric-energy conversion is insufficient. Improving this efficiency was the goal of the research team’s study.
    “In our work, we demonstrate a two-dimensional electron gas (2DEG) system with multiple subbands that uses gallium arsenide. The system is different from conventional methods of thermoelectric conversion,” explain Yuto Uematsu and Yoshiaki Nakamura, lead and senior authors of the study. “Our system facilitates better conversion from temperature (heat) to electricity, and improves the mobility of electrons in their 2D sheet. This readily benefits everyday devices like semiconductors.”
    Incredibly, the researchers were able to improve the power factor of thermoelectric conversion by a factor of 4 compared with conventional 2DEG systems. Other technologies like resonant scattering have not been as efficient for thermoelectric conversion.
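    For reference, the quantity the team quadrupled is the standard thermoelectric power factor, PF = S²σ (Seebeck coefficient squared times electrical conductivity); the related dimensionless figure of merit ZT additionally divides by the thermal conductivity. The sketch below only restates these textbook definitions with illustrative numbers, not values from the study.

```python
# Textbook thermoelectric quantities; the numbers below are illustrative only.

def power_factor(seebeck_V_per_K: float, conductivity_S_per_m: float) -> float:
    """Power factor PF = S^2 * sigma, in W m^-1 K^-2."""
    return seebeck_V_per_K ** 2 * conductivity_S_per_m

def zt(seebeck_V_per_K: float, conductivity_S_per_m: float,
       temperature_K: float, thermal_conductivity_W_per_mK: float) -> float:
    """Dimensionless figure of merit ZT = S^2 * sigma * T / kappa."""
    return (power_factor(seebeck_V_per_K, conductivity_S_per_m)
            * temperature_K / thermal_conductivity_W_per_mK)

pf = power_factor(200e-6, 1e5)      # 200 uV/K and 1e5 S/m -> 4e-3 W/m/K^2
print(pf, 4 * pf)                   # a fourfold improvement in power factor
```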
    The team’s findings could open the way to a sustainable power source for the IoT. Thin thermoelectric films on substrates made of gallium arsenide would be suitable for IoT application. For example, these could power environmental monitoring systems in remote locations or wearable devices for medical monitoring.
    “We’re excited because we have expanded upon the principles of a process that is crucial to clean energy and the development of a sustainable IoT,” says Yoshiaki Nakamura, senior author. “What’s more, our methodology can be applied to any element-based material; the practical applications are far reaching.”
    This work is an important step forward in maximizing the utility of thermoelectric power generation in modern microelectronics and is especially suitable for the IoT. As the results are not limited to gallium arsenide, further advancements to the system are possible, with sustainability and the IoT potentially benefitting greatly.

  • Do violent video games numb us towards real violence?

    Neuroscientists from the University of Vienna and the Karolinska Institute in Stockholm have investigated whether playing violent video games leads to a reduction in human empathy. To do this, they had adult test subjects play a violent video game repeatedly over the course of an experiment lasting several weeks. Before and after, their empathic responses to the pain of another person were measured. It was found that the violent video game had no discernible effect on empathy and underlying brain activity. These results have now been published in the journal eLife.
    Video games have become an integral part of the everyday life of many children and adults. Many of the most popular video games contain explicit depictions of extreme violence. Therefore, concerns have been raised that these games may blunt the empathy of their players and could therefore lower the inhibition threshold for real violence. An international research team led by Viennese neuroscientists Claus Lamm and Lukas Lengersdorff has now investigated whether this is actually the case.
    The Austrian and Swedish researchers invited a total of 89 adult male subjects to take part in the study. A key selection criterion was that the subjects had had little or no previous contact with violent video games. This ensured that the results were not influenced by different experiences with these games. In a first experimental study, the baseline level of empathy of the test subjects was assessed. Brain scans were used to record how the test subjects reacted when a second person was administered painful electric shocks. Then, the video game phase of the experiment began, during which the test subjects came to the research laboratory seven times to play a video game for one hour each time. The participants in the experimental group played a highly violent version of the game Grand Theft Auto V and were given the task of killing as many other game characters as possible. In the control group, all violence had been removed from the game and the participants were given the task of taking photos of other game characters. Finally, after the video game phase was over, the test subjects were examined a second time to determine whether their empathic responses had changed.
    The analysis of the data showed that the video game violence had no discernible effect on the empathic abilities of the test subjects. The reactions of the participants in the experimental group who were confronted with extreme depictions of violence did not differ statistically from those of the participants who only had to take photos. In addition, there were no significant differences in the activity of brain regions that had been identified in other studies as being associated with empathy — such as the anterior insular and anterior midcingulate cortex.
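    For readers unfamiliar with this kind of pre/post, experimental-versus-control comparison, the toy sketch below shows the basic logic on synthetic numbers; the published analysis is considerably more elaborate than a single t-test.

```python
import numpy as np
from scipy import stats

# Deliberately simplified sketch with synthetic data: each participant has a
# pre- and post-phase empathy score; the question is whether the change
# differs between the violent-game group and the control group.
rng = np.random.default_rng(1)
n_per_group = 45                                   # roughly half of 89 participants

pre_gta,  post_gta  = rng.normal(0, 1, n_per_group), rng.normal(0, 1, n_per_group)
pre_ctrl, post_ctrl = rng.normal(0, 1, n_per_group), rng.normal(0, 1, n_per_group)

change_gta  = post_gta - pre_gta
change_ctrl = post_ctrl - pre_ctrl

t, p = stats.ttest_ind(change_gta, change_ctrl)
print(f"t = {t:.2f}, p = {p:.3f}")                 # no group difference expected here
```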
    Does that mean that concerns about violence in video games are unfounded? The authors advise against jumping to conclusions. “Precisely because this is such a sensitive topic, we have to be very careful when interpreting these results,” explains lead author Lukas Lengersdorff, who carried out the study as part of his doctoral studies. “The conclusion should not be that violent video games are now definitively proven to be harmless. Our study lacks the data to make such statements.” According to the neuroscientist and statistician, the value of the study lies rather in the fact that it allows a sober look at previous results. “A few hours of video game violence have no significant influence on the empathy of mentally healthy adult test subjects. We can clearly draw this conclusion. Our results thus contradict those of previous studies, in which negative effects were reported after just a few minutes of play.” In these previous studies, participants had played the violent video game immediately before data collection. “Such experimental designs are not able to distinguish the short-term and long-term effects of video games,” explains Lengersdorff.
    According to research group leader and co-author Claus Lamm, the study also sets a new standard for future research in this area: “Strong experimental controls and longitudinal research designs that allow causal conclusions to be drawn are needed to make clear statements about the effects of violent video games. We wanted to take a step in this direction with our study.” It now falls to further research to determine whether negative consequences are also absent after significantly longer exposure to video game violence, and whether the same holds for vulnerable subpopulations. “The most important question is of course: are children and young people also immune to violence in video games? The young brain is highly plastic, so repeated exposure to depictions of violence could have a much greater effect. But of course these questions are difficult to investigate experimentally without running up against the limits of scientific ethics,” says Lamm.

  • Experiment could test quantum nature of large masses for the first time

    An experiment outlined by a UCL (University College London)-led team of scientists from the UK and India could test whether relatively large masses have a quantum nature, resolving the question of whether quantum mechanical description works at a much larger scale than that of particles and atoms.
    Quantum theory is typically seen as describing nature at the tiniest scales and quantum effects have not been observed in a laboratory for objects more massive than about a quintillionth of a gram, or more precisely 10^(-20)g.
    The new experiment, described in a paper published in Physical Review Letters and involving researchers at UCL, the University of Southampton and the Bose Institute in Kolkata, India, could in principle test the quantumness of an object regardless of its mass or energy.
    The proposed experiment exploits the principle in quantum mechanics that the act of measurement of an object can change its nature. (The term measurement encompasses any interaction of the object with a probe — for instance, if light shines on it, or if it emits light or heat).
    The experiment focuses on a pendulum-like object oscillating like a ball on a string. A light is shone on one half of the area of oscillation, revealing information about the location of the object (i.e., if scattered light is not observed, then it can be concluded that the object is not in that half). A second light is shone, showing the location of the object further along on its swing.
    If the object is quantum, the first measurement (the first flash of light) will disturb its path (by measurement induced collapse — a property inherent to quantum mechanics), changing the likelihood of where it will be at the second flash of light, whereas if it is classical then the act of observation will make no difference. Researchers can then compare scenarios in which they shine a light twice to ones where only the second flash of light occurs to see if there is a difference in the final distributions of the object.
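    The logic can be illustrated with a toy two-state model of the swing (“left half” versus “right half”); the sketch below is my own simplification, not the authors’ calculation. Quantum mechanically, inserting the first which-half measurement changes the statistics at the second flash even when its outcome is discarded, whereas for a classical object with a definite (if unknown) position it would not.

```python
import numpy as np

# Toy two-state model of the oscillating object: |L> = left half, |R> = right half.
def rotation(theta):
    """Unitary evolution of the swing between flashes of light."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

L = np.array([1.0, 0.0])
R = np.array([0.0, 1.0])
P_L, P_R = np.outer(L, L), np.outer(R, R)     # which-half projectors

theta = np.pi / 5
U = rotation(theta)
psi0 = (L + R) / np.sqrt(2)                   # superposition across both halves
rho0 = np.outer(psi0, psi0)

# Scenario A: only the second flash of light.
rho_A = U @ (U @ rho0 @ U.T) @ U.T
p_right_A = np.trace(P_R @ rho_A)

# Scenario B: first flash performs a which-half measurement (outcome discarded),
# collapsing coherences, then the second flash.
rho_mid = U @ rho0 @ U.T
rho_collapsed = P_L @ rho_mid @ P_L + P_R @ rho_mid @ P_R
rho_B = U @ rho_collapsed @ U.T
p_right_B = np.trace(P_R @ rho_B)

print(f"P(right at second flash), no first flash:   {p_right_A:.3f}")
print(f"P(right at second flash), with first flash: {p_right_B:.3f}")
# A classical object with a definite position gives identical statistics in
# both scenarios; a difference is a signature of quantum behaviour.
```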
    Lead author Dr Debarshi Das (UCL Physics & Astronomy and the Royal Society) said: “A crowd at a football match cannot affect the result of the game simply by staring strongly. But with quantum mechanics, the act of observation or measurement itself changes the system.

    “Our proposed experiment can test if an object is classical or quantum by seeing if an act of observation can lead to a change in its motion.”
    The proposal, the researchers say, could be implemented with current technologies using nanocrystals or, in principle, even using mirrors at LIGO (Laser Interferometer Gravitational-Wave Observatory) in the United States, which have an effective mass of 10 kg.
    The four LIGO mirrors, which each weigh 40 kg but together vibrate as if they were a single 10 kg object, have already been cooled to the minimum-energy state (a fraction of a degree above absolute zero) that would be required in any experiment seeking to detect quantum behaviour.
    Senior author Professor Sougato Bose (UCL Physics & Astronomy) said: “Our scheme has wide conceptual implications. It could test whether relatively large objects have definite properties, i.e., their properties are real, even when we are not measuring them. It could extend the domain of quantum mechanics and probe whether this fundamental theory of nature is valid only at certain scales or if it holds true for larger masses too.
    “If we do not encounter a mass limit to quantum mechanics, this makes ever more acute the problem of trying to reconcile quantum theory with reality as we experience it.”
    In quantum mechanics, objects do not have definite properties until they are observed or interact with their environment. Prior to observation they do not exist in a definite location but may be in two places at once (a state of superposition). This led to Einstein’s remark: “Is the moon there when no one is looking at it?”
    Quantum mechanics may seem at odds with our experience of reality but its insights have helped the development of computers, smartphones, broadband, GPS, and magnetic resonance imaging.

    Most physicists believe quantum mechanics holds true at larger scales, but is merely harder to observe due to the isolation required to preserve a quantum state. To detect quantum behaviour in an object, its temperature and vibrations must be reduced to the lowest possible level (its ground state) and it must be in a vacuum so that almost no atoms are interacting with it. That is because a quantum state will collapse, a process called decoherence, if the object interacts with its environment.
    The new proposed experiment is a development of an earlier quantum test devised by Professor Bose and colleagues in 2018. A project to conduct an experiment using this methodology, which will test the quantum nature of a nanocrystal numbering a billion atoms, is already underway, funded by the Engineering and Physical Sciences Research Council (EPSRC) and led by the University of Southampton.
    That project already aims for a jump in terms of mass, with previous attempts to test the quantum nature of a macroscopic object limited to hundreds of thousands of atoms. The newly published scheme, meanwhile, could be achieved with current technologies using a nanocrystal with trillions of atoms.
    The new paper was co-authored by Dr Das and Professor Bose at UCL along with Professor Dipankar Home of India’s Bose Institute (who also co-authored the 2018 paper) and Professor Hendrik Ulbricht of the University of Southampton.

  • Accelerating how new drugs are made with machine learning

    Researchers have developed a platform that combines automated experiments with AI to predict how chemicals will react with one another, which could accelerate the design process for new drugs.
    Predicting how molecules will react is vital for the discovery and manufacture of new pharmaceuticals, but historically this has been a trial-and-error process, and the reactions often fail. To predict how molecules will react, chemists usually simulate electrons and atoms in simplified models, a process which is computationally expensive and often inaccurate.
    Now, researchers from the University of Cambridge have developed a data-driven approach, inspired by genomics, where automated experiments are combined with machine learning to understand chemical reactivity, greatly speeding up the process. They’ve called their approach, which was validated on a dataset of more than 39,000 pharmaceutically relevant reactions, the chemical ‘reactome’.
    Their results, reported in the journal Nature Chemistry, are the product of a collaboration between Cambridge and Pfizer.
    “The reactome could change the way we think about organic chemistry,” said Dr Emma King-Smith from Cambridge’s Cavendish Laboratory, the paper’s first author. “A deeper understanding of the chemistry could enable us to make pharmaceuticals and so many other useful products much faster. But more fundamentally, the understanding we hope to generate will be beneficial to anyone who works with molecules.”
    The reactome approach picks out relevant correlations between reactants, reagents, and performance of the reaction from the data, and points out gaps in the data itself. The data is generated from very fast, or high throughput, automated experiments.
    “High throughput chemistry has been a game-changer, but we believed there was a way to uncover a deeper understanding of chemical reactions than what can be observed from the initial results of a high throughput experiment,” said King-Smith.

    “Our approach uncovers the hidden relationships between reaction components and outcomes,” said Dr Alpha Lee, who led the research. “The dataset we trained the model on is massive — it will help bring the process of chemical discovery from trial-and-error to the age of big data.”
    In a related paper, published in Nature Communications, the team developed a machine learning approach that enables chemists to introduce precise transformations to pre-specified regions of a molecule, enabling faster drug design.
    The approach allows chemists to tweak complex molecules — like a last-minute design change — without having to make them from scratch. Making a molecule in the lab is typically a multi-step process, like building a house. If chemists want to vary the core of a molecule, the conventional way is to rebuild the molecule, like knocking the house down and rebuilding from scratch. However, core variations are important to medicine design.
    A class of reactions, known as late-stage functionalisation reactions, attempts to directly introduce chemical transformations to the core, avoiding the need to start from scratch. However, it is challenging to make late-stage functionalisation selective and controlled — there are typically many regions of the molecules that can react, and it is difficult to predict the outcome.
    “Late-stage functionalisations can yield unpredictable results and current methods of modelling, including our own expert intuition, isn’t perfect,” said King-Smith. “A more predictive model would give us the opportunity for better screening.”
    The researchers developed a machine learning model that predicts where a molecule would react, and how the site of reaction varies as a function of different reaction conditions. This enables chemists to find ways to precisely tweak the core of a molecule.
    “We pretrained the model on a large body of spectroscopic data — effectively teaching the model general chemistry — before fine-tuning it to predict these intricate transformations,” said King-Smith. This approach allowed the team to overcome the limitation of low data: there are relatively few late-stage functionalisation reactions reported in the scientific literature. The team experimentally validated the model on a diverse set of drug-like molecules and was able to accurately predict the sites of reactivity under different conditions.
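    A minimal sketch of that pretrain-then-fine-tune pattern is shown below, using small random tensors in place of the spectroscopic and late-stage functionalisation datasets; the tiny network and training settings are assumptions for illustration, not the model described in the paper.

```python
import torch
from torch import nn, optim

# Stand-ins for the two data regimes: a large pretraining corpus ("general
# chemistry") and a small fine-tuning set of late-stage functionalisation
# outcomes. Shapes and architecture are illustrative assumptions.
X_pre,  y_pre  = torch.randn(5000, 64), torch.randn(5000, 1)
X_fine, y_fine = torch.randn(200, 64),  torch.randn(200, 1)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

def train(X, y, epochs, lr):
    opt, loss_fn = optim.Adam(model.parameters(), lr=lr), nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

train(X_pre,  y_pre,  epochs=50, lr=1e-3)   # pretrain: learn broad structure
train(X_fine, y_fine, epochs=20, lr=1e-4)   # fine-tune: adapt on scarce data
```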
    “The application of machine learning to chemistry is often throttled by the problem that the amount of data is small compared to the vastness of chemical space,” said Lee. “Our approach — designing models that learn from large datasets that are similar but not the same as the problem we are trying to solve — resolves this fundamental low-data challenge and could unlock advances beyond late-stage functionalisation.”
    The research was supported in part by Pfizer and the Royal Society.

  • Solid-state qubits: Forget about being clean, embrace mess

    New findings debunk previous wisdom that solid-state qubits need to be super dilute in an ultra-clean material to achieve long lifetimes. Instead, cram lots of rare-earth ions into a crystal and some will form pairs that act as highly coherent qubits, shows a paper in Nature Physics.
    Clean lines and minimalism, or vintage shabby chic? It turns out that the same trends that occupy the world of interior design are important when it comes to designing the building blocks of quantum computers.
    How to make qubits that retain their quantum information long enough to be useful is one of the major barriers to practical quantum computing. It’s widely accepted that the key to qubits with long lifetimes, or ‘coherences’, is cleanliness. Qubits lose quantum information through a process known as decoherence when they start to interact with their environment. So, conventional wisdom goes, keep them away from each other and from other disturbing influences and they’ll hopefully survive a little longer.
    In practice such a ‘minimalistic’ approach to qubit design is problematic. Finding suitable ultra-pure materials is not easy. Furthermore, diluting qubits to the extreme makes scale-up of any resulting technology challenging. Now, surprising results from researchers at the Paul Scherrer Institute PSI, ETH Zurich and EPFL show how qubits with long lifetimes can exist in a cluttered environment.
    “In the long run, how to make it onto a chip is a question that’s universally discussed for all types of qubits. Instead of diluting more and more, we’ve demonstrated a new pathway by which we can squeeze qubits closer together,” states Gabriel Aeppli, head of the Photon Science Division at PSI and professor at ETH Zürich and EPFL, who led the study.
    Picking the gems from the junk
    The researchers created solid-state qubits from the rare-earth metal terbium, doped into crystals of yttrium lithium fluoride. They showed that within a crystal jam-packed with rare-earth ions were qubit gems with much longer coherences than would typically be expected in such a dense system.

    “For a given density of qubits, we show that it’s a much more effective strategy to throw in the rare-earth ions and pick the gems from the junk, rather than trying to separate the individual ions from each other by dilution,” explains Markus Müller, whose theoretical explanations were essential to understanding the bamboozling observations.
    Like classical bits that use 0 or 1 to store and process information, qubits also use systems that can exist in two states, albeit with the possibility of superpositions. When qubits are created from rare-earth ions, typically a property of the individual ions — such as the nuclear spin, which can point up or down — is used as this two-state system.
    Pairing up offers protection
    The reason the team could have such success with a radically different approach is that, rather than being formed from single ions, their qubits are formed from strongly interacting pairs of ions. Instead of using the nuclear spin of single ions, the pairs form qubits based on superpositions of different electron shell states.
    Within the matrix of the crystal, only a few of the terbium ions form pairs. “If you throw a lot of terbium into the crystal, by chance there are pairs of ions — our qubits. These are relatively rare, so the qubits themselves are quite dilute,” explains Adrian Beckert, lead author of the study.
    So why aren’t these qubits disturbed by their messy environment? It turns out that these gems, by virtue of their physical properties, are shielded from the junk. Because they operate at a different characteristic energy, they cannot exchange energy with the single terbium ions — in essence, they are blind to them.

    “If you make an excitation on a single terbium, it can easily hop over to another terbium, causing decoherence,” says Müller. “However, if the excitation is on a terbium pair, its state is entangled, so it lives at a different energy and cannot hop over to the single terbiums. It’d have to find another pair, but it can’t because the next one is a long distance away.”
    Shining light on qubits
    The researchers stumbled upon the phenomenon of qubit pairs when probing terbium-doped yttrium lithium fluoride with microwave spectroscopy. The team also uses light to manipulate and measure quantum effects in materials, and the same kind of qubits are expected to operate at the higher frequencies of optical laser light. This is of interest as rare-earth metals possess optical transitions, which give an easy way in with light. “Eventually, our goal is to also use light from the X-ray Free Electron Laser SwissFEL or Swiss Light Source SLS to witness quantum information processing,” says Aeppli. This approach could be used to read out entire qubit ensembles with X-ray light.
    In the meantime, terbium is an attractive choice of dopant: it can be easily excited by frequencies in the microwave range used for telecommunications. It was during spin echo tests — a well-established technique to measure coherence times — that the team noticed funny peaks, corresponding to much longer coherences than those on the single ions. “There was something unexpected lurking,” remembers Beckert. With further microwave spectroscopy experiments and careful theoretical analysis, they could unpick these as pair states.
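    For readers unfamiliar with the technique, a coherence time is typically extracted by fitting an exponential decay to the echo amplitude as a function of pulse delay, roughly as in the sketch below (synthetic numbers, not the PSI team’s data).

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a simple exponential decay to spin-echo amplitudes to estimate T2.
def echo_decay(t, amplitude, T2):
    return amplitude * np.exp(-t / T2)

delays = np.linspace(0, 500e-6, 30)          # pulse-separation times, seconds
true_T2 = 120e-6                             # synthetic "true" coherence time
signal = echo_decay(delays, 1.0, true_T2) \
         + 0.02 * np.random.default_rng(3).normal(size=delays.size)

popt, _ = curve_fit(echo_decay, delays, signal, p0=[1.0, 50e-6])
print(f"fitted T2 = {popt[1] * 1e6:.0f} microseconds")
```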
    As the researchers delved into the nature of these qubits, they could understand the different ways in which they were protected from their environment and seek to optimise them. Although the excitations of the terbium pairs might be well shielded from the influence of other terbium ions, the nuclear spins on other atoms in the material could still interact with the qubits and cause them to decohere.
    To protect the qubits further from their environment, the researchers applied a magnetic field to the material that was tuned to exactly cancel out the effect of the nuclear spin of the terbium in the pairs. This resulted in essentially non-magnetic qubit states, which were only minimally sensitive to noise from the nuclear spins of surrounding ‘junk’ atoms.
    Once this level of protection was included, the qubit pairs had lifetimes up to one hundred times longer than those of single ions in the same material.
    “If we’d set out to look for qubits based on terbium pairs, we wouldn’t have taken a material with so many nuclear spins,” says Aeppli. “What this shows is how powerful this approach can be. With the right material, the coherence could be even longer.” Armed with knowledge of this phenomenon, the researchers will now set about optimising the matrix.

  • For surgery patients, AI could help reduce alcohol-related risks

    Using artificial intelligence to scan surgery patients’ medical records for signs of risky drinking might help spot those whose alcohol use raises their risk of problems during and after an operation, a new study suggests.
    The AI record scan tested in the study could help surgery teams know in advance which patients might need more education about such risks, or treatment to help them reduce their drinking or stop drinking for a period of time before and after surgery.
    The findings, published in Alcohol: Clinical and Experimental Research by a team from the University of Michigan, show that using a form of AI called natural language processing to analyze a patient’s entire medical record can spot signs of risky drinking documented in their charts, such as in doctor’s notes, even when they don’t have a diagnosis of an alcohol problem.
    Past research has shown that having more than a couple of drinks a day on average is associated with a higher risk of infections, wound complications, pulmonary complications and prolonged hospital stays in people having surgery.
    Many people who drink regularly don’t have a problem with alcohol, and when they do they may never receive a formal diagnosis for alcohol use disorder or addiction, which would be easy for a surgical team to spot in their chart.
    Scouring records and notes
    The researchers, from Michigan Medicine, U-M’s academic medical center, trained their AI model by letting it review 100 anonymous surgical patients’ records to look for risky drinking signs, and comparing its classifications with those of expert human reviewers.

    In all, the AI model matched the human expert classification most of the time. The AI model found signs of risky drinking in the notes of 87% of the patients who experts had identified as risky drinkers.
    Meanwhile, only 29% of these patients had a diagnosis code related to alcohol in their list of diagnoses. So, many patients with higher risk for complications would have slipped under the radar for their surgical team.
    The researchers then allowed the AI model to review more than 53,000 anonymous patient medical records compiled through the Michigan Genomics Initiative. The AI model identified three times more patients with risky alcohol use through this full-text search than the researchers found using diagnosis codes. In all, 15% of patients met criteria via the AI model, compared to 5% via diagnosis codes.
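    As a rough illustration of this kind of free-text screening, the sketch below trains a bag-of-words classifier on a handful of made-up clinical notes against expert labels; the study’s actual natural language processing model is described in the paper and is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy notes and labels (entirely invented) standing in for chart text and
# expert reviewer classifications of risky drinking.
notes = [
    "patient reports 3-4 beers nightly, advised to cut down before surgery",
    "denies alcohol use, no tobacco, exercises regularly",
    "drinks wine socially on weekends, one to two glasses",
    "history of heavy drinking, counseled on perioperative abstinence",
]
expert_labels = [1, 0, 0, 1]                 # 1 = risky drinking flagged by reviewer

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, expert_labels)

new_note = "patient describes daily whiskey use after work"
print(clf.predict([new_note]))               # flag for the surgical team to review
```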
    “This evaluation of natural language processing to identify risky drinking in the records of surgical patients could lay the groundwork for efforts to identify other risks in primary care and beyond, with appropriate validation,” said V. G. Vinod Vydiswaran, Ph.D., lead author of the new paper and an associate professor of learning health sciences at the U-M Medical School. “Essentially, this is a way of highlighting for a provider what is already contained in the notes made by other providers, without them having to read the entire record.”
    “Given the excess surgical risk that can arise from even a moderate amount of daily alcohol use, and the challenges of implementing robust screening and treatment in the pre-op period, it’s vital that we explore other options for identifying patients who could most benefit from reducing use by themselves or with help, beyond those with a recorded diagnosis,” said senior author Anne Fernandez, Ph.D., an addiction psychologist at the U-M Addiction Center and Addiction Treatment Services and an associate professor of psychiatry.
    The new data suggest that surgical clinics that simply review the diagnosis codes listed in their incoming patients’ charts, and flag ones such as alcohol use disorder, alcohol dependence or alcohol-related liver conditions, would be missing many patients with elevated risk.

    Alcohol + surgery = added risk
    In addition to known risks of surgical complications, Fernandez and colleagues recently published data from a massive Michigan surgical database showing that people who both smoke and have two or more drinks a day were more likely to end up back in the hospital, or back in the operating room, than others. Those with risky drinking who didn’t smoke also were more likely to need a second operation.
    She and colleagues also found that 19% of people having surgery may have risky levels of alcohol use, in a review of detailed questionnaire data from people participating in two different studies that enroll people from Michigan Medicine surgery clinics.
    The new study used the NLP form of AI not to generate new information, but to look for clues in the pages and pages of provider notes and data that make up a person’s entire medical record.
    After validation, Vydiswaran said, the tool could potentially be run on a patient’s record before they are seen in a pre-operative appointment and identify their risk level.
    Just knowing that a person has a potentially risky level of drinking isn’t enough, of course.
    Fernandez is leading an effort to test a virtual coaching approach to help people scheduled for surgery understand the risks related to their level of drinking and support them in reducing their intake.
    “Our goal is to identify people who may be in need of more treatment services, including medication for alcohol use disorder and support during their surgical recovery when alcohol abstinence is necessary,” she said. “We are not aiming to replace the due diligence every provider must do, but to prompt them to talk with patients and get more information to act upon.”
    The risks of combining alcohol with the opioid pain medications often used to treat post-surgical pain are very high, she noted.
    In addition to current work to validate the model, the team hopes to make their model publicly available, though it would have to be trained on the electronic records system of any health system that seeks to use it.
    “These AI tools can do amazing things, but it’s important we use them to do things that could save time for busy clinicians, whether that’s related to alcohol or to drug use, disordered eating, or other chronic conditions,” said Fernandez. “And if we are going to use them to spot potential issues, we need to be ready to offer treatment options too.”
    In addition to Fernandez and Vydiswaran, who are members of the U-M Institute for Healthcare Policy and Innovation, the study’s authors are Asher Strayhorn and Katherine Weber of Learning Health Sciences, and Haley Stevens, Jessica Mellinger and G. Scott Winder of Psychiatry.
    The study was funded by the National Institute on Alcohol Abuse and Alcoholism, part of the National Institutes of Health (AA026333, AA028315), and by U-M Precision Health, which also runs the Michigan Genomics Initiative. The study used the U-M Data Office for Clinical and Translational Research.

  • Bioinformatics: Researchers develop a new machine learning approach

    To combat viruses, bacteria and other pathogens, synthetic biology offers new technological approaches whose performance is being validated in experiments. Researchers from the Würzburg Helmholtz Institute for RNA-based Infection Research and the Helmholtz AI Cooperative applied data integration and artificial intelligence (AI) to develop a machine learning approach that can predict the efficacy of CRISPR technologies more accurately than before. The findings were published today in the journal Genome Biology.
    The genome or DNA of an organism incorporates the blueprint for proteins and orchestrates the production of new cells. Aiming to combat pathogens, cure genetic diseases or achieve other positive effects, molecular biological CRISPR technologies are being used to specifically alter or silence genes and inhibit protein production.
    One of these molecular biological tools is CRISPRi (from “CRISPR interference”). CRISPRi blocks genes and gene expression without modifying the DNA sequence. As with the CRISPR-Cas system also known as “gene scissors,” this tool involves a ribonucleic acid (RNA), which serves as a guide RNA to direct a nuclease (Cas). In contrast to gene scissors, however, the CRISPRi nuclease only binds to the DNA without cutting it. This binding results in the corresponding gene not being transcribed and thus remaining silent.
    Until now, it has been challenging to predict the performance of this method for a specific gene. Researchers from the Würzburg Helmholtz Institute for RNA-based Infection Research (HIRI) in cooperation with the University of Würzburg and the Helmholtz Artificial Intelligence Cooperation Unit (Helmholtz AI) have now developed a machine learning approach using data integration and artificial intelligence (AI) to improve such predictions in the future.
    The approach
    CRISPRi screens are a highly sensitive tool that can be used to investigate the effects of reduced gene expression. In their study, published today in the journal Genome Biology, the scientists used data from multiple genome-wide CRISPRi essentiality screens to train a machine learning approach. Their goal: to better predict the efficacy of the engineered guide RNAs deployed in the CRISPRi system.
    “Unfortunately, genome-wide screens only provide indirect information about guide efficiency. Hence, we have applied a new machine learning method that disentangles the efficacy of the guide RNA from the impact of the silenced gene,” explains Lars Barquist. The computational biologist initiated the study and heads a bioinformatics research group at the Würzburg Helmholtz Institute, a site of the Braunschweig Helmholtz Centre for Infection Research in cooperation with the Julius-Maximilians-Universität Würzburg.
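    A toy version of that disentangling idea is sketched below: each guide’s observed depletion is modelled as a gene-level effect plus a guide-efficiency term predicted from guide features, and fitting both jointly separates the two contributions. The simulated data and ridge regression are illustrative assumptions, not the authors’ model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulate a toy essentiality screen: depletion = gene effect + guide efficiency + noise.
rng = np.random.default_rng(0)
n_guides, n_genes, n_feat = 500, 50, 8
genes = rng.integers(0, n_genes, n_guides)           # which gene each guide targets
guide_feats = rng.normal(size=(n_guides, n_feat))    # e.g. sequence-derived features

gene_effect = rng.normal(size=n_genes)
true_w = rng.normal(size=n_feat)
depletion = gene_effect[genes] + guide_feats @ true_w + 0.1 * rng.normal(size=n_guides)

# Joint design matrix: one-hot gene identity alongside guide features,
# so the fit attributes variance to genes and guides separately.
gene_onehot = np.eye(n_genes)[genes]
X = np.hstack([gene_onehot, guide_feats])

model = Ridge(alpha=1.0).fit(X, depletion)
guide_weights = model.coef_[n_genes:]                # the guide-efficiency part
print(np.corrcoef(guide_weights, true_w)[0, 1])      # recovers the guide term
```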

    Supported by additional AI tools (“Explainable AI”), the team established comprehensible design rules for future CRISPRi experiments. The study authors validated their approach by conducting an independent screen targeting essential bacterial genes, showing that their predictions were more accurate than previous methods.
    “The results have shown that our model outperforms existing methods and provides more reliable predictions of CRISPRi performance when targeting specific genes,” says Yanying Yu, PhD student in Lars Barquist’s research group and first author of the study.
    The scientists were particularly surprised to find that the guide RNA itself is not the primary factor in determining CRISPRi depletion in essentiality screens. “Certain gene-specific characteristics related to gene expression appear to have a greater impact than previously assumed,” explains Yu.
    The study also reveals that integrating data from multiple data sets significantly improves the predictive accuracy and enables a more reliable assessment of the efficiency of guide RNAs. “Expanding our training data by pulling together multiple experiments is essential to create better prediction models. Prior to our study, lack of data was a major limiting factor for prediction accuracy,” summarizes junior professor Barquist. The approach now published will be very helpful in planning more effective CRISPRi experiments in the future and serve both biotechnology and basic research. “Our study provides a blueprint for developing more precise tools to manipulate bacterial gene expression and ultimately help to better understand and combat pathogens,” says Barquist.
    The results at a glance
    • Gene features matter: The characteristics of targeted genes have a significant impact on guide RNA depletion in genome-wide screens.

    • Data integration improves predictions: Combining data from multiple CRISPRi screens significantly improves the accuracy of prediction models and enables more reliable estimates of guide RNA efficiency.
    • Designing better CRISPRi experiments: The study provides valuable insights for designing more effective CRISPRi experiments by predicting guide RNA efficiency, enabling precise gene-silencing strategies.
    Funding
    The study was supported by funds from the Bavarian State Ministry of Science and Art through the bayresq.net research network.