More stories

  • Using artificial intelligence to find new uses for existing medications

    Scientists have developed a machine-learning method that crunches massive amounts of data to help determine which existing medications could improve outcomes in diseases for which they are not prescribed.
    The intent of this work is to speed up drug repurposing, which is not a new concept — think Botox injections, first approved to treat crossed eyes and now a migraine treatment and top cosmetic strategy to reduce the appearance of wrinkles.
    But getting to those new uses typically involves a mix of serendipity and time-consuming and expensive randomized clinical trials to ensure that a drug deemed effective for one disorder will be useful as a treatment for something else.
    The Ohio State University researchers created a framework that combines enormous patient care-related datasets with high-powered computation to arrive at repurposed drug candidates and the estimated effects of those existing medications on a defined set of outcomes.
    Though this study focused on proposed repurposing of drugs to prevent heart failure and stroke in patients with coronary artery disease, the framework is flexible — and could be applied to most diseases.
    “This work shows how artificial intelligence can be used to ‘test’ a drug on a patient, and speed up hypothesis generation and potentially speed up a clinical trial,” said senior author Ping Zhang, assistant professor of computer science and engineering and biomedical informatics at Ohio State. “But we will never replace the physician — drug decisions will always be made by clinicians.”
    The research is published today (Jan. 4, 2021) in Nature Machine Intelligence.

    Drug repurposing is an attractive pursuit because it could lower the risk associated with safety testing of new medications and dramatically reduce the time it takes to get a drug into the marketplace for clinical use.
    Randomized clinical trials are the gold standard for determining a drug’s effectiveness against a disease, but Zhang noted that machine learning can account for hundreds — or thousands — of human differences within a large population that could influence how medicine works in the body. These factors, or confounders, ranging from age, sex and race to disease severity and the presence of other illnesses, function as parameters in the deep learning computer algorithm on which the framework is based.
    That information comes from “real-world evidence,” which is longitudinal observational data about millions of patients captured by electronic medical records or insurance claims and prescription data.
    “Real-world data has so many confounders. This is the reason we have to introduce the deep learning algorithm, which can handle multiple parameters,” said Zhang, who leads the Artificial Intelligence in Medicine Lab and is a core faculty member in the Translational Data Analytics Institute at Ohio State. “If we have hundreds or thousands of confounders, no human being can work with that. So we have to use artificial intelligence to solve the problem.
    “We are the first team to introduce use of the deep learning algorithm to handle the real-world data, control for multiple confounders, and emulate clinical trials,” Zhang said.

    The research team used insurance claims data on nearly 1.2 million heart-disease patients, which provided information on their assigned treatment, disease outcomes and various values for potential confounders. The deep learning algorithm also has the power to take into account the passage of time in each patient’s experience — for every visit, prescription and diagnostic test. The model input for drugs is based on their active ingredients.
    Applying causal inference theory, the researchers sorted patients, for the purposes of this analysis, into the active-drug and placebo groups that would be found in a clinical trial. The model tracked patients for two years — and compared their disease status at that end point to whether or not they took medications, which drugs they took and when they started the regimen.
    “With causal inference, we can address the problem of having multiple treatments. We don’t answer whether drug A or drug B works for this disease or not, but figure out which treatment will have the better performance,” Zhang said.
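    To make the adjustment step concrete, the short Python sketch below is not the Ohio State framework (which applies a deep learning model to longitudinal claims data); it is a minimal illustration, on synthetic data, of the underlying idea of emulating a trial: estimate each patient’s probability of receiving a drug from confounders (a propensity score), then reweight the treated and untreated groups so their comparison resembles the arms of a randomized trial. All variable names and numbers are invented for the example.

      # Toy illustration only: propensity-score weighting on synthetic data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 20_000
      age = rng.normal(60, 10, n)            # confounder: age
      severity = rng.normal(0, 1, n)         # confounder: disease severity
      # Older and sicker patients are more likely to be prescribed the drug...
      p_treat = 1 / (1 + np.exp(-(0.03 * (age - 60) + 0.8 * severity)))
      treated = rng.binomial(1, p_treat)
      # ...and also more likely to have the bad outcome; the drug itself lowers risk.
      logit = -1.0 + 0.02 * (age - 60) + 0.7 * severity - 0.5 * treated
      outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      # Naive comparison is confounded: treated patients were sicker to begin with.
      naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

      # Inverse-probability-of-treatment weighting with an estimated propensity score.
      X = np.column_stack([age, severity])
      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
      w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
      adj = (np.average(outcome[treated == 1], weights=w[treated == 1])
             - np.average(outcome[treated == 0], weights=w[treated == 0]))
      print(f"naive risk difference: {naive:+.3f}   adjusted risk difference: {adj:+.3f}")

    In this toy setup the naive comparison understates the drug’s benefit because sicker patients are more likely to receive it, while the weighted comparison recovers a difference much closer to the true protective effect.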
    Their hypothesis: that the model would identify drugs that could lower the risk for heart failure and stroke in coronary artery disease patients.
    The model yielded nine drugs considered likely to provide those therapeutic benefits, three of which are currently in use — meaning the analysis identified six candidates for drug repurposing. Among other findings, the analysis suggested that a diabetes medication, metformin, and escitalopram, used to treat depression and anxiety, could lower risk for heart failure and stroke in the model patient population. As it turns out, both of those drugs are currently being tested for their effectiveness against heart disease.
    Zhang stressed that what the team found in this case study is less important than how they got there.
    “My motivation is applying this, along with other experts, to find drugs for diseases without any current treatment. This is very flexible, and we can adjust case-by-case,” he said. “The general model could be applied to any disease if you can define the disease outcome.”
    The research was supported by the National Center for Advancing Translational Sciences, which funds the Center for Clinical and Translational Science at Ohio State.

  • Stretching diamond for next-generation microelectronics

    Diamond is the hardest material in nature. It also has great potential as an excellent electronic material. A research team has demonstrated for the first time the large, uniform tensile elastic straining of microfabricated diamond arrays through the nanomechanical approach. Their findings have shown the potential of strained diamonds as prime candidates for advanced functional devices in microelectronics, photonics, and quantum information technologies.

  • Spontaneous robot dances highlight a new kind of order in active matter

    Predicting when and how collections of particles, robots, or animals become orderly remains a challenge across science and engineering.
    In the 19th century, scientists and engineers developed the discipline of statistical mechanics, which predicts how groups of simple particles transition between order and disorder, as when a collection of randomly colliding atoms freezes to form a uniform crystal lattice.
    More challenging to predict are the collective behaviors that can be achieved when the particles become more complicated, such that they can move under their own power. This type of system — observed in bird flocks, bacterial colonies and robot swarms — goes by the name “active matter.”
    As reported in the January 1, 2021 issue of the journal Science, a team of physicists and engineers has proposed a new principle by which active matter systems can spontaneously order, without the need for higher-level instructions or even programmed interaction among the agents. And they have demonstrated this principle in a variety of systems, including groups of periodically shape-changing robots called “smarticles” — smart, active particles.
    The theory, developed by Dr. Pavel Chvykov at the Massachusetts Institute of Technology while a student of Prof. Jeremy England, who is now a researcher in the School of Physics at Georgia Institute of Technology, posits that certain types of active matter with sufficiently messy dynamics will spontaneously find what the researchers refer to as “low rattling” states.
    “Rattling is when matter takes energy flowing into it and turns it into random motion,” England said. “Rattling can be greater either when the motion is more violent, or more random. Conversely, low rattling is either very slight or highly organized — or both. So, the idea is that if your matter and energy source allow for the possibility of a low rattling state, the system will randomly rearrange until it finds that state and then gets stuck there. If you supply energy through forces with a particular pattern, this means the selected state will discover a way for the matter to move that finely matches that pattern.”
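    A rough feel for that “gets stuck there” intuition comes from the toy Python sketch below. It is not the smarticle model from the paper; it simply assumes a system that hops among a handful of configurations and is more likely to be kicked out of the noisier ones, so that over time it is found mostly in the configurations that rattle least.

      # Toy illustration of the low-rattling idea: noisy configurations are easy to
      # leave, quiet ones are easy to get stuck in, so random dynamics alone
      # concentrate the system in low-rattling states.
      import numpy as np

      rng = np.random.default_rng(1)
      rattling = np.array([0.9, 0.7, 0.5, 0.1, 0.05])   # per-configuration escape probability
      state = 0
      visits = np.zeros(rattling.size)

      for _ in range(100_000):
          visits[state] += 1
          if rng.random() < rattling[state]:            # noisy states are easy to leave
              state = rng.integers(rattling.size)       # jump to a random configuration

      print("fraction of time in each configuration:", np.round(visits / visits.sum(), 3))

    Running it shows the occupancy piling up in the two configurations with the smallest escape probabilities, even though every individual jump is random.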
    To develop their theory, England and Chvykov took inspiration from a phenomenon — dubbed thermophoresis, or the Soret effect — discovered by the Swiss physicist Charles Soret in the late 19th century. In Soret’s experiments, he discovered that subjecting an initially uniform salt solution in a tube to a difference in temperature would spontaneously lead to an increase in salt concentration in the colder region — which corresponds to an increase in order of the solution.
    Chvykov and England developed numerous mathematical models to demonstrate the low rattling principle, but it wasn’t until they connected with Daniel Goldman, Dunn Family Professor of Physics at the Georgia Institute of Technology, that they were able to test their predictions.
    Said Goldman, “A few years back, I saw England give a seminar and thought that some of our smarticle robots might prove valuable to test this theory.” Working with Chvykov, who visited Goldman’s lab, Ph.D. students William Savoie and Akash Vardhan used three flapping smarticles enclosed in a ring to compare experiments to theory. The students observed that instead of displaying complicated dynamics and exploring the container completely, the robots would spontaneously self-organize into a few dances — for example, one dance consists of three robots slapping each other’s arms in sequence. These dances could persist for hundreds of flaps, but suddenly lose stability and be replaced by a dance of a different pattern.
    After first demonstrating that these simple dances were indeed low rattling states, Chvykov worked with engineers at Northwestern University, Prof. Todd Murphey and Ph.D. student Thomas Berrueta, who developed more refined and better controlled smarticles. The improved smarticles allowed the researchers to test the limits of the theory, including how the types and number of dances varied for different arm flapping patterns, as well as how these dances could be controlled. “By controlling sequences of low rattling states, we were able to make the system reach configurations that do useful work,” Berrueta said. The Northwestern University researchers say that these findings may have broad practical implications for microrobotic swarms, active matter, and metamaterials.
    As England noted: “For robot swarms, it’s about getting many adaptive and smart group behaviors that you can design to be realized in a single swarm, even though the individual robots are relatively cheap and computationally simple. For living cells and novel materials, it might be about understanding what the ‘swarm’ of atoms or proteins can get you, as far as new material or computational properties.”

  • Development of fusion energy

    The U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) is collaborating with private industry on cutting-edge fusion research aimed at achieving commercial fusion energy. This work, enabled through a public-private DOE grant program, supports efforts to develop high-performance fusion-grade plasmas. In one such project, PPPL is working in coordination with MIT’s Plasma Science and Fusion Center (PSFC) and Commonwealth Fusion Systems, a start-up spun out of MIT that is developing a tokamak fusion device called “SPARC.”
    The goal of the project is to predict the leakage of fast “alpha” particles produced during the fusion reactions in SPARC, given the size and potential misalignments of the superconducting magnets that confine the plasma. These particles can create a largely self-heated or “burning plasma” that fuels fusion reactions. Development of burning plasma is a major scientific goal for fusion energy research. However, leakage of alpha particles could slow or halt the production of fusion energy and damage the interior of the SPARC facility.
    New superconducting magnets
    Key features of the SPARC machine include its compact size and powerful magnetic fields enabled by the ability of new superconducting magnets to operate at higher fields and stresses than existing superconducting magnets. These features will enable design and construction of smaller and less-expensive fusion facilities, as described in recent publications by the SPARC team — assuming that the fast alpha particles created in fusion reactions can be contained long enough to keep the plasma hot.
    “Our research indicates that they can be,” said PPPL physicist Gerrit Kramer, who participates in the project through the DOE Innovation Network for Fusion Energy (INFUSE) program. The two-year-old program, for which PPPL physicist Ahmed Diallo serves as deputy director, aims to speed private-sector development of fusion energy through partnerships with national laboratories.

    Well-confined
    “We found that the alpha particles are indeed well confined in the SPARC design,” said Kramer, coauthor of a paper in the Journal of Plasma Physics that reports the findings. He worked closely with the lead author Steven Scott, a consultant to Commonwealth Fusion Systems and former long-time physicist at PPPL.
    Kramer used the SPIRAL computer code developed at PPPL to verify the particle confinement. “The code, which simulates the wavy pattern, or ripples, in a magnetic field that could allow the escape of fast particles, showed good confinement and lack of damage to the SPARC walls,” Kramer said. Moreover, he added, “the SPIRAL code agreed well with the ASCOT code from Finland. While the two codes are completely different, the results were similar.”
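    As a very rough picture of what “ripple” means here (and not a stand-in for the SPIRAL or ASCOT calculations), the toy Python sketch below models the toroidal field at one point as the overlapping contributions of N discrete coils and computes the ripple amplitude (Bmax - Bmin)/(Bmax + Bmin). Shifting a single coil by a small angle breaks the N-fold symmetry and raises the ripple. The coil count, bump shape and offsets are arbitrary assumptions for illustration.

      # Toy ripple model: each coil contributes a localized bump in toroidal angle phi.
      import numpy as np

      def toroidal_field(phi, n_coils=18, misalignment=0.0):
          """Toy field strength versus toroidal angle at one (R, Z) point."""
          coil_angles = 2 * np.pi * np.arange(n_coils) / n_coils
          coil_angles[0] += misalignment        # shift one coil to mimic misalignment
          width = 4.0                           # how localized each coil's contribution is
          bumps = np.exp(width * np.cos(phi[:, None] - coil_angles[None, :]))
          return bumps.sum(axis=1)

      phi = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
      for offset in (0.0, 0.01, 0.05):          # angular misalignment of one coil (radians)
          B = toroidal_field(phi, misalignment=offset)
          ripple = (B.max() - B.min()) / (B.max() + B.min())
          print(f"misalignment = {offset:.2f} rad -> ripple = {ripple:.4f}")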
    The findings gladdened Scott. “It’s gratifying to see the computational validation of our understanding of ripple-induced losses,” he said, “since I studied the issue experimentally back in the early 1980s for my doctoral dissertation.”
    Fusion reactions combine light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei, or ions, that comprises 99 percent of the visible universe — to generate massive amounts of energy. Scientists around the world are seeking to create fusion as a virtually unlimited source of power for generating electricity.
    Key guidance
    Kramer and colleagues noted that misalignment of the SPARC magnets will increase the ripple-induced losses of fusion particles, leading to increased power striking the walls. Their calculations should provide key guidance to the SPARC engineering team about how well the magnets must be aligned to avoid excessive power loss and wall damage. Properly aligned magnets will enable studies of plasma self-heating for the first time and the development of improved techniques for plasma control in future fusion power plants.

  • A pursuit of better testing to sort out the complexities of ADHD

    The introduction of computer simulation to the identification of symptoms in children with attention deficit/hyperactivity disorder (ADHD) has potential to provide an additional objective tool to gauge the presence and severity of behavioral problems, Ohio State University researchers suggest in a new publication.
    Most mental health disorders are diagnosed and treated based on clinical interviews and questionnaires — and, for about a century, data from cognitive tests has been added to the diagnostic process to help clinicians learn more about how and why people behave in a certain way.
    Cognitive testing in ADHD is used to identify a variety of symptoms and deficits, including selective attention, poor working memory, altered time perception, difficulties in maintaining attention and impulsive behavior. In the most common class of performance tests, children are told to either press a computer key or avoid hitting a key when they see a certain word, symbol or other stimulus.
    For ADHD, however, these cognitive tests often don’t capture the complexity of symptoms. The advent of computational psychiatry — comparing a computer-simulated model of normal brain processes to dysfunctional processes observed in tests — could be an important supplement to the diagnostic process for ADHD, the Ohio State researchers report in a new review published in the journal Psychological Bulletin.
    The research team reviewed 50 studies of cognitive tests for ADHD and described how three common types of computational models could supplement these tests.
    It is widely recognized that children with ADHD take longer to make decisions while performing tasks than children who don’t have the disorder, and tests have relied on average response times to explain the difference. But there are intricacies to that dysfunction that a computational model could help pinpoint, providing information clinicians, parents and teachers could use to make life easier for kids with ADHD.

    “We can use models to simulate the decision process and see how decision-making happens over time — and do a better job of figuring out why children with ADHD take longer to make decisions,” said Nadja Ging-Jehli, lead author of the review and a graduate student in psychology at Ohio State.
    Ging-Jehli completed the review with Ohio State faculty members Roger Ratcliff, professor of psychology, and L. Eugene Arnold, professor emeritus of psychiatry and behavioral health.
    The researchers offer recommendations for testing and clinical practice to achieve three principal goals: better characterizing ADHD and any accompanying mental health diagnoses such as anxiety and depression, improving treatment outcomes (about one-third of patients with ADHD do not respond to medical treatment), and potentially predicting which children will “lose” the ADHD diagnosis as adults.
    Decision-making behind the wheel of a car helps illustrate the problem: Drivers know that when a red light turns green, they can go through an intersection — but not everyone hits the gas pedal at the same time. A common cognitive test of this behavior would repeatedly expose drivers to the same red light-green light scenario to arrive at an average reaction time and use that average, and deviations from it, to categorize the typical versus disordered driver.
    This approach has been used to determine that individuals with ADHD are typically slower to “start driving” than those without ADHD. But that determination leaves out a range of possibilities that help explain why they’re slower — they could be distracted, daydreaming, or feeling nervous in a lab setting. The broad distribution of reactions captured by computer modeling could provide more, and useful, information.

    “In our review, we show that this method has multiple problems that prevent us from understanding the underlying characteristics of a mental-health disorder such as ADHD, and that also prevent us from finding the best treatment for different individuals,” Ging-Jehli said. “We can use computational modeling to think about the factors that generate the observed behavior. These factors will broaden our understanding of a disorder, acknowledging that there are different types of individuals who have different deficits that also call for different treatments.
    “We are proposing using the entire distribution of the reaction times, taking into consideration the slowest and the fastest reaction times to distinguish between different types of ADHD.”
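    One widely used model of this kind is the drift-diffusion model of two-choice decisions, an approach long associated with Ratcliff’s work. The hedged Python sketch below is not taken from the review; it simply illustrates the point about distributions: two hypothetical parameter settings, slower evidence accumulation versus a longer non-decision (encoding and motor) time, can produce similar mean reaction times while yielding quite different distribution shapes.

      # Toy drift-diffusion simulation: accumulate noisy evidence until it crosses
      # an upper or lower boundary, then add a fixed non-decision time.
      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_rts(drift, boundary=1.0, non_decision=0.3, n_trials=2000,
                       dt=0.001, noise=1.0):
          """Return simulated reaction times (seconds) for one parameter setting."""
          rts = np.empty(n_trials)
          for i in range(n_trials):
              evidence, t = 0.0, 0.0
              while abs(evidence) < boundary:
                  evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
                  t += dt
              rts[i] = t + non_decision
          return rts

      low_drift = simulate_rts(drift=1.0)                     # slower evidence accumulation
      long_ter  = simulate_rts(drift=2.0, non_decision=0.55)  # slower encoding/motor stage
      for name, rts in [("low drift", low_drift), ("long non-decision", long_ter)]:
          print(f"{name:18s} mean={rts.mean():.3f}s  "
                f"10th pct={np.percentile(rts, 10):.3f}s  90th pct={np.percentile(rts, 90):.3f}s")

    In this simulation the two settings have nearly identical mean reaction times, but the low-drift setting produces a much longer slow tail, the kind of difference that averages alone would hide.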
    The review also identified a complicating factor for ADHD research going forward — a broader range of externally evident symptoms as well as subtle characteristics that are hard to detect with the most common testing methods. Understanding that children with ADHD have so many biologically based differences suggests that a single task-based test is not sufficient to make a meaningful ADHD diagnosis, the researchers say.
    “ADHD is not only the child who is fidgeting and restless in a chair. It’s also the child who is inattentive because of daydreaming. Even though that child is more introverted and doesn’t express as many symptoms as a child with hyperactivity, that doesn’t mean that child doesn’t suffer,” Ging-Jehli said. Daydreaming is especially common in girls, who are not enrolled in ADHD studies nearly as frequently as boys, she said.
    Ging-Jehli described computational psychiatry as a tool that could also take into account — continuing the analogy — mechanical differences in the car, and how that could influence driver behavior. These dynamics can make it harder to understand ADHD, but also open the door to a broader range of treatment options.
    “We need to account for the different types of drivers and we need to understand the different conditions to which we expose them. Based on only one observation, we cannot make conclusions about diagnosis and treatment options,” she said.
    “However, cognitive testing and computational modeling should not be seen as an attempt to replace existing clinical interviews and questionnaire-based procedures, but as complements that add value by providing new information.”
    According to the researchers, a battery of tasks gauging social and cognitive characteristics should be assigned for a diagnosis rather than just one, and more consistency is needed across studies to ensure the same cognitive tasks are used to assess the appropriate cognitive concepts.
    Finally, combining cognitive testing with physiological tests — especially eye-tracking and EEGs that record electrical activity in the brain — could provide powerful objective and quantifiable data to make a diagnosis more reliable and help clinicians better predict which medicines would be most effective.
    Ging-Jehli is putting these suggestions to the test in her own research, applying a computational model in a study of a specific neurological intervention in children with ADHD.
    “The purpose of our analysis was to show there’s a lack of standardization and so much complexity, and symptoms are hard to measure with existing tools,” Ging-Jehli said. “We need to understand ADHD better for children and adults to have a better quality of life and get the treatment that is most appropriate.”
    This research was supported by the Swiss National Science Foundation and the National Institute on Aging.

  • More effective training model for robots

    Multi-domain operations (MDO), the Army’s future operating concept, requires autonomous agents with learning components to operate alongside the warfighter. New Army research reduces the unpredictability of reinforcement learning policies produced by current training methods, making them more practically applicable to physical systems, especially ground robots.
    These learning components will permit autonomous agents to reason and adapt to changing battlefield conditions, said Army researcher Dr. Alec Koppel from the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory.
    The underlying adaptation and re-planning mechanism consists of reinforcement learning-based policies. Making these policies efficiently obtainable is critical to making the MDO operating concept a reality, he said.
    According to Koppel, policy gradient methods in reinforcement learning are the foundation for scalable algorithms for continuous spaces, but existing techniques cannot incorporate broader decision-making goals such as risk sensitivity, safety constraints, exploration and divergence to a prior.
    Reinforcement learning can address the design of autonomous behaviors when the relationship between dynamics and goals is complex; the approach has recently gained attention for solving previously intractable tasks such as the strategy games Go and chess and video games such as Atari and StarCraft II, Koppel said.
    Prevailing practice, unfortunately, demands astronomical sample complexity, such as thousands of years of simulated gameplay, he said. This sample complexity renders many common training mechanisms inapplicable to the data-starved settings that the MDO context requires for the Next-Generation Combat Vehicle, or NGCV.

    “To facilitate reinforcement learning for MDO and NGCV, training mechanisms must improve sample efficiency and reliability in continuous spaces,” Koppel said. “Through the generalization of existing policy search schemes to general utilities, we take a step towards breaking existing sample efficiency barriers of prevailing practice in reinforcement learning.”
    Koppel and his research team developed new policy search schemes for general utilities, whose sample complexity is also established. They observed that the resulting policy search schemes reduce the volatility of reward accumulation, yield efficient exploration of unknown domains, and provide a mechanism for incorporating prior experience.
    “This research contributes an augmentation of the classical Policy Gradient Theorem in reinforcement learning,” Koppel said. “It presents new policy search schemes for general utilities, whose sample complexity is also established. These innovations are impactful to the U.S. Army through their enabling of reinforcement learning objectives beyond the standard cumulative return, such as risk sensitivity, safety constraints, exploration and divergence to a prior.”
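    To make the phrase “general utilities” concrete, the toy Python sketch below, which is illustrative only and not the Army team’s algorithm, augments a plain REINFORCE policy gradient on a two-armed bandit with an entropy bonus (exploration) and a penalty on divergence from a prior policy. The environment, coefficients and prior are all invented for the example.

      # Toy policy gradient with an objective beyond cumulative reward:
      # expected reward + entropy bonus (exploration) - KL divergence to a prior policy.
      import numpy as np

      rng = np.random.default_rng(0)
      true_means = np.array([1.0, 1.3])      # arm payoffs, unknown to the agent
      prior = np.array([0.5, 0.5])           # prior policy to stay close to
      theta = np.zeros(2)                    # softmax policy parameters
      lr, entropy_coef, kl_coef = 0.05, 0.01, 0.05

      def softmax(x):
          z = np.exp(x - x.max())
          return z / z.sum()

      for step in range(2000):
          pi = softmax(theta)
          a = rng.choice(2, p=pi)
          r = rng.normal(true_means[a], 1.0)
          # REINFORCE estimate of the reward gradient for a softmax policy.
          grad_reward = (np.eye(2)[a] - pi) * r
          # Exact gradients of the entropy and of -KL(pi || prior) through the softmax.
          grad_entropy = -pi * (np.log(pi) + 1)
          grad_entropy -= pi * grad_entropy.sum()
          grad_kl = -pi * (np.log(pi / prior) + 1)
          grad_kl -= pi * grad_kl.sum()
          theta += lr * (grad_reward + entropy_coef * grad_entropy + kl_coef * grad_kl)

      print("learned policy:", np.round(softmax(theta), 3))

    The learned policy still favors the better arm but keeps some probability on the other one, reflecting the exploration and prior-divergence terms in the objective.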
    Notably, in the context of ground robots, he said, data is costly to acquire.
    “Reducing the volatility of reward accumulation, ensuring one explores an unknown domain in an efficient manner, or incorporating prior experience, all contribute towards breaking existing sample efficiency barriers of prevailing practice in reinforcement learning by alleviating the amount of random sampling one requires in order to complete policy optimization,” Koppel said.
    The future of this research is very bright, and Koppel has dedicated his efforts toward making his findings applicable to innovative technology for Soldiers on the battlefield.
    “I am optimistic that reinforcement-learning equipped autonomous robots will be able to assist the warfighter in exploration, reconnaissance and risk assessment on the future battlefield,” Koppel said. “Making this vision a reality is essential to what motivates which research problems I dedicate my efforts to.”
    The next step for this research is to incorporate the broader decision-making goals enabled by general utilities in reinforcement learning into multi-agent settings and investigate how interactive settings between reinforcement learning agents give rise to synergistic and antagonistic reasoning among teams.
    According to Koppel, the technology that results from this research will be capable of reasoning under uncertainty in team scenarios.