More stories

  • New studies suggest social isolation is a risk factor for dementia in older adults, point to ways to reduce risk

    In two studies using nationally representative data from the National Health and Aging Trends Study gathered on thousands of Americans, researchers from the Johns Hopkins University School of Medicine and Bloomberg School of Public Health have significantly added to evidence that social isolation is a substantial risk factor for dementia in community-dwelling (noninstitutionalized) older adults, and identified technology as an effective way to intervene.
    Collectively, the studies do not establish a direct cause and effect between dementia and social isolation, defined as lack of social contact and interactions with people on a regular basis. But, the researchers say, the studies strengthen observations that such isolation increases the risk of dementia, and suggest that relatively simple efforts to increase social support of older adults — such as texting and use of email — may reduce that risk. In the United States, an estimated 1 in 4 people over age 65 experience social isolation, according to the National Institute on Aging.
    “Social connections matter for our cognitive health, and social isolation is potentially easily modifiable for older adults without the use of medication,” says Thomas Cudjoe, M.D., M.P.H., assistant professor of medicine at the Johns Hopkins University School of Medicine and senior author of both of the new studies.
    The first study, described Jan. 11 in the Journal of the American Geriatrics Society, used data collected on a group of 5,022 Medicare beneficiaries for a long-term study known as the National Health and Aging Trends Study, which began in 2011. All participants were 65 or older and were asked to complete an annual two-hour, in-person interview to assess cognitive function, health status and overall well-being.
    At the initial interview, 23% of the 5,022 participants were socially isolated, and none showed signs of dementia. By the end of the nine-year study, however, 21% of the total sample had developed dementia. The researchers concluded that the risk of developing dementia over nine years was 27% higher among socially isolated older adults than among those who were not socially isolated.
    “Socially isolated older adults have smaller social networks, live alone and have limited participation in social activities,” says Alison Huang, Ph.D., M.P.H., senior research associate at the Johns Hopkins Bloomberg School of Public Health. “One possible explanation is that having fewer opportunities to socialize with others decreases cognitive engagement as well, potentially contributing to increased risk of dementia.”
    Interventions to reduce that risk are possible, according to results of the second study, published Dec. 15 in the Journal of the American Geriatrics Society. Specifically, researchers found the use of communications technology such as telephone and email lowered the risk for social isolation.
    Researchers for the second study used data from participants in the same National Health and Aging Trends Study and found that more than 70% of people age 65 and up who were not socially isolated at their initial appointment had a working cellphone and/or computer, and regularly used email or texting to reach out and respond to others. Over the four-year research period of this second study, older adults who had access to such technology consistently showed a 31% lower risk for social isolation than the rest of the cohort.
    “Basic communications technology is a great tool to combat social isolation,” says Mfon Umoh, M.D., Ph.D., postdoctoral fellow in geriatric medicine at the Johns Hopkins University School of Medicine. “This study shows that access and use of simple technologies are important factors that protect older adults against social isolation, which is associated with significant health risks. This is encouraging because it means simple interventions may be meaningful.”
    Social isolation has gained significant attention in the past decade, especially due to restrictions implemented for the COVID-19 pandemic, but more work needs to be done to identify at-risk populations and create tools for providers and caregivers to minimize risk, the researchers say. Future research in this area should focus on increased risks based on biological sex, physical limitations, race and income level.
    Other scientists who contributed to this research are Laura Prichett, Cynthia Boyd, David Roth, Tom Cidav, Shang-En Chung, Halima Amjad, and Roland Thorpe of the Johns Hopkins University School of Medicine and Bloomberg School of Public Health.
    This research was funded by the Caryl & George Bernstein Human Aging Project, the Johns Hopkins University Center for Innovative Medicine, the National Center for Advancing Translational Sciences, the National Institute on Aging, the Secunda Family Foundation, the Patient-Centered Care for Older Adults with Multiple Chronic Conditions, and the National Institute on Minority Health and Health Disparities.

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.
    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.
    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.
    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.
    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.
    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears in the January-February issue of IEEE Micro.

    Modeling emissions
    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.
    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
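    In outline, the calculation can be sketched in a few lines. The snippet below is an illustration of the kind of estimate the framework produces, not the authors’ code; the grid carbon intensity (0.5 kg of CO2 per kWh, roughly today’s global average) is an assumed value, while the fleet size, computer power and driving hours come from the scenario described in the article.

    ```python
    # Illustrative sketch of the emissions model described above (not the
    # authors' code). Emissions scale with fleet size, per-vehicle computer
    # power, daily driving hours, and the carbon intensity of electricity.

    FLEET_SIZE = 1_000_000_000   # 1 billion autonomous vehicles
    COMPUTER_POWER_W = 840       # watts per on-board computer (from the study)
    HOURS_PER_DAY = 1.0          # driving hours per vehicle per day
    CARBON_INTENSITY = 0.5       # kg CO2 per kWh -- assumed global grid average

    energy_kwh_per_year = FLEET_SIZE * (COMPUTER_POWER_W / 1000) * HOURS_PER_DAY * 365
    emissions_mt_per_year = energy_kwh_per_year * CARBON_INTENSITY / 1e9  # megatonnes

    print(f"Fleet computing energy: {energy_kwh_per_year / 1e9:.0f} TWh/year")
    print(f"Emissions: {emissions_mt_per_year:.0f} Mt CO2/year")
    # Roughly 307 TWh/year and about 150 Mt of CO2/year -- on the order of
    # what the world's data centers emit today (~0.3% of global emissions).
    ```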
    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.
    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were simultaneously processing many high-resolution inputs from many cameras with high frame rates.
    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.
    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
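    The arithmetic behind those figures is easy to check. In the sketch below, the camera frame rate of 60 frames per second is an assumption (it is the value that reproduces the article’s totals); the other numbers come from the example above.

    ```python
    # Back-of-the-envelope check of the inference counts quoted above.
    DNNS = 10                # deep neural networks per vehicle
    CAMERAS = 10             # cameras per vehicle
    FPS = 60                 # frames per second per camera (assumed)
    SECONDS_DRIVEN = 3600    # one hour of driving per day
    FLEET = 1_000_000_000    # one billion vehicles

    per_vehicle_daily = DNNS * CAMERAS * FPS * SECONDS_DRIVEN
    fleet_daily = per_vehicle_daily * FLEET

    print(f"Per vehicle: {per_vehicle_daily:,} inferences/day")  # 21,600,000
    print(f"Fleet total: {fleet_daily:.2e} inferences/day")      # 2.16e+16 = 21.6 quadrillion
    ```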
    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.
    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.
    Keeping emissions in check
    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to use less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.
    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.
    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.
    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.
    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.
    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Screen-printing method can make wearable electronics less expensive

    The glittering, serpentine structures that power wearable electronics can be created with the same technology used to print rock concert t-shirts, new research shows.
    The study, led by Washington State University researchers, demonstrates that electrodes can be made using just screen printing, creating a stretchable, durable circuit pattern that can be transferred to fabric and worn directly on human skin. Such wearable electronics can be used for health monitoring in hospitals or at home.
    “We wanted to make flexible, wearable electronics in a way that is much easier, more convenient and lower cost,” said corresponding author Jong-Hoon Kim, associate professor at WSU Vancouver’s School of Engineering and Computer Science. “That’s why we focused on screen printing: it’s easy to use. It has a simple setup, and it is suitable for mass production.”
    Current commercial manufacturing of wearable electronics requires expensive processes involving clean rooms. While some manufacturers use screen printing for parts of the process, this new method relies wholly on screen printing, which has advantages for manufacturers and, ultimately, consumers.
    In the study, published in the journal ACS Applied Materials & Interfaces, Kim and his colleagues detail the electrode screen-printing process and demonstrate how the resulting electrodes can be used for electrocardiogram (ECG) monitoring.
    They used a multi-step process to layer polymer and metal inks, creating the electrodes’ snake-like structures. While the resulting thin pattern appears delicate, the electrodes are not fragile: the study showed they could be stretched by 30% and bent to 180 degrees.
    Multiple electrodes are printed onto a pre-treated glass slide, which allows them to be easily peeled off and transferred onto fabric or other material. After printing the electrodes, the researchers transferred them onto an adhesive fabric that was then worn directly on the skin by volunteers. The wireless electrodes accurately recorded heart and respiratory rates, sending the data to a mobile phone.
    While this study focused on ECG monitoring, the screen-printing process can be used to create electrodes for a range of uses, including those that serve similar functions to smart watches or fitness trackers, Kim said.
    Kim’s lab is currently working on expanding this technology to print different electrodes as well as entire electronic chips and even potentially whole circuit boards.
    In addition to Kim, co-authors on the study include researchers from the Georgia Institute of Technology and Pukyong National University in South Korea, as well as others from WSU Vancouver. This research received support from the National Science Foundation.

  • Computer models determine drug candidate's ability to bind to proteins

    Combining computational physics with experimental data, University of Arkansas researchers have developed computer models for determining a drug candidate’s ability to target and bind to proteins within cells.
    If accurate, such an estimator could computationally demonstrate binding affinity and thus prevent experimental researchers from needing to investigate millions of chemical compounds. The work could substantially reduce the cost and time associated with developing new drugs.
    “We developed a theoretical framework for estimating ligand-protein binding,” said Mahmoud Moradi, associate professor of chemistry and biochemistry in the Fulbright College of Arts and Sciences. “The proposed method assigns an effective energy to the ligand at every grid point in a coordinate system, which has its origin at the most likely location of the ligand when it is in its bound state.”
    A ligand is a substance — an ion or molecule — such as a drug that binds to another molecule, such as a protein, to form a complex system that may cause or prevent a biological function.
    Moradi’s research focuses on computational simulations of diseases, including coronavirus. For this project, he collaborated with Suresh Thallapuranam, professor of biochemistry and the Cooper Chair of Bioinformatics Research.
    Moradi and Thallapuranam used biased simulations — as well as non-parametric re-weighting techniques to account for the bias — to create a binding estimator that was computationally efficient and accurate. They then used a mathematically robust technique called orientation quaternion formalism to further describe the ligand’s conformational changes as it bound to targeted proteins.
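    The paper itself should be consulted for the full formalism, but the core idea of non-parametric reweighting can be sketched briefly: each sample from a biased simulation is weighted by exp(+V_bias/kT) to undo the bias, and the reweighted density assigns an effective energy to every grid point. The toy example below is an illustration of that idea, not the authors’ implementation.

    ```python
    import numpy as np

    # Toy sketch of non-parametric reweighting (not the authors' code).
    # Samples drawn under a known bias potential are reweighted by
    # exp(+V_bias/kT); the reweighted density p(x) then assigns an
    # effective energy G(x) = -kT * ln p(x) to every grid point.

    kT = 0.593  # kcal/mol at roughly 298 K

    def effective_energy_grid(samples, v_bias, edges):
        """Assign an effective energy to each grid bin from biased samples."""
        weights = np.exp(v_bias / kT)               # undo the applied bias
        hist, _ = np.histogram(samples, bins=edges, weights=weights)
        p = hist / hist.sum()
        with np.errstate(divide="ignore"):
            return -kT * np.log(p)                  # effective energy per bin

    # Self-check: samples generated by a harmonic bias on a flat landscape.
    # After reweighting, the recovered energies are constant up to sampling
    # noise, confirming the bias has been removed.
    rng = np.random.default_rng(0)
    sigma = 0.4
    x = rng.normal(1.0, sigma, size=100_000)        # biased samples
    v_b = 0.5 * (kT / sigma**2) * (x - 1.0) ** 2    # the bias that produced them
    G = effective_energy_grid(x, v_b, np.linspace(0.2, 1.8, 17))
    print(np.round(G - G.min(), 2))                 # approximately flat
    ```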
    The researchers tested this approach by estimating the binding affinity between human fibroblast growth factor 1 — a specific signaling protein — and heparin hexasaccharide 5, a popular medication.
    The project was conceived because Moradi and Thallapuranam were studying human fibroblast growth factor 1 protein and its mutants in the absence and presence of heparin. They found strong qualitative agreement between simulations and experimental results.
    “When it came to binding affinity, we knew that the typical methods we had at our disposal would not work for such a difficult problem,” Moradi said. “This is why we decided to develop a new method. We had a joyous moment when the experimental and computational data were compared with each other, and the two numbers matched almost perfectly.”
    The researchers’ work was published in Nature Computational Science.
    Moradi previously received attention for developing computational simulations of the behavior of SARS-CoV-2 spike proteins prior to fusion with human cell receptors. SARS-CoV-2 is the virus that causes COVID-19.

  • Now on the molecular scale: Electric motors

    Electric vehicles, powered by macroscopic electric motors, are increasingly prevalent on our streets and highways. These quiet and eco-friendly machines got their start nearly 200 years ago when physicists took the first tiny steps to bring electric motors into the world.
    Now a multidisciplinary team led by Northwestern University has made an electric motor you can’t see with the naked eye: an electric motor on the molecular scale.
    This early work — a motor that can convert electrical energy into unidirectional motion at the molecular level — has implications for materials science and particularly medicine, where the electric molecular motor could team up with biomolecular motors in the human body.
    “We have taken molecular nanotechnology to another level,” said Northwestern’s Sir Fraser Stoddart, who received the 2016 Nobel Prize in Chemistry for his work in the design and synthesis of molecular machines. “This elegant chemistry uses electrons to effectively drive a molecular motor, much like a macroscopic motor. While this area of chemistry is in its infancy, I predict one day these tiny motors will make a huge difference in medicine.”
    Stoddart, Board of Trustees Professor of Chemistry at the Weinberg College of Arts and Sciences, is a co-corresponding author of the study. The research was done in close collaboration with Dean Astumian, a molecular machine theorist and professor at the University of Maine, and William Goddard, a computational chemist and professor at the California Institute of Technology. Long Zhang, a postdoctoral fellow in Stoddart’s lab, is the paper’s first author and a co-corresponding author.
    Only 2 nanometers wide, the molecular motor is the first to be produced en masse. The motor is easy to make, operates quickly and does not produce any waste products.

    The study and a corresponding news brief were published today (Jan. 11) by the journal Nature.
    The research team focused on a certain type of molecule with interlocking rings, known as catenanes, held together by powerful mechanical bonds so the components could move freely relative to each other without falling apart. (Stoddart decades ago played a key role in the creation of the mechanical bond, a new type of chemical bond that has led to the development of molecular machines.)
    The electric molecular motor is based specifically on a [3]catenane whose components ― a loop interlocked with two identical rings ― are redox active, i.e., they undergo unidirectional motion in response to changes in voltage. The researchers discovered that two rings are needed to achieve this unidirectional motion. Experiments showed that a [2]catenane, which has one loop interlocked with one ring, does not run as a motor.
    The synthesis and operation of molecules that perform the function of a motor ― converting external energy into directional motion ― has challenged scientists in the fields of chemistry, physics and molecular nanotechnology for some time.
    To achieve their breakthrough, Stoddart, Zhang and their Northwestern team spent more than four years on the design and synthesis of their electric molecular motor. This included a year working with UMaine’s Astumian and Caltech’s Goddard to complete the quantum mechanical calculations to explain the working mechanism behind the motor.

    “Controlling the relative movement of components on a molecular scale is a formidable challenge, so collaboration was crucial,” Zhang said. “Working with experts in synthesis, measurements, computational chemistry and theory enabled us to develop an electric molecular motor that works in solution.”
    A few examples of single-molecule electric motors have been reported, but they require harsh operating conditions, such as the use of an ultrahigh vacuum, and also produce waste.
    The next steps for their electric molecular motor, the researchers said, is to attach many of the motors to an electrode surface to influence the surface and ultimately do some useful work.
    “The achievement we report today is a testament to the creativity and productivity of our young scientists as well as their willingness to take risks,” Stoddart said. “This work gives me and the team enormous satisfaction.”
    Stoddart is a member of the International Institute for Nanotechnology and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

  • Project aims to expand language technologies

    Only a fraction of the 7,000 to 8,000 languages spoken around the world benefit from modern language technologies like voice-to-text transcription, automatic captioning, instantaneous translation and voice recognition. Carnegie Mellon University researchers want to expand the number of languages with automatic speech recognition tools available to them from around 200 to potentially 2,000.
    “A lot of people in this world speak diverse languages, but language technology tools aren’t being developed for all of them,” said Xinjian Li, a Ph.D. student in the School of Computer Science’s Language Technologies Institute (LTI). “Developing technology and a good language model for all people is one of the goals of this research.”
    Li is part of a research team aiming to simplify the data requirements languages need to create a speech recognition model. The team — which also includes LTI faculty members Shinji Watanabe, Florian Metze, David Mortensen and Alan Black — presented their most recent work, “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio,” at Interspeech 2022 in South Korea.
    Most speech recognition models require two data sets: text and audio. Text data exists for thousands of languages. Audio data does not. The team hopes to eliminate the need for audio data by focusing on linguistic elements common across many languages.
    Historically, speech recognition technologies have focused on a language’s phonemes. These distinct sounds that distinguish one word from another — like the “d” that differentiates “dog” from “log” and “cog” — are unique to each language. But languages also have phones, which describe how a word sounds physically. Multiple phones might correspond to a single phoneme. So even though separate languages may have different phonemes, their underlying phones could be the same.
    The LTI team is developing a speech recognition model that moves away from phonemes and instead relies on information about how phones are shared between languages, thereby reducing the effort to build separate models for each language. Specifically, it pairs the model with a phylogenetic tree — a diagram that maps the relationships between languages — to help with pronunciation rules. Through their model and the tree structure, the team can approximate the speech model for thousands of languages without audio data.
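    As a toy illustration of the idea (a simplified stand-in, not the ASR2K system itself), the phone inventory of a language without audio data can be approximated by walking up a phylogenetic tree and pooling the inventories of its nearest documented relatives. The languages, tree and phone sets below are hypothetical.

    ```python
    # Toy sketch of borrowing pronunciation information across a
    # phylogenetic tree (not the ASR2K implementation). The tree and
    # the phone inventories are simplified, hypothetical stand-ins.

    PARENT = {
        "galician": "west-iberian", "portuguese": "west-iberian",
        "west-iberian": "romance", "spanish": "romance",
    }

    # Phone inventories observed for languages that do have audio data.
    KNOWN_PHONES = {
        "portuguese": {"a", "e", "o", "s", "ʃ", "ɐ"},
        "spanish": {"a", "e", "o", "s", "θ", "r"},
    }

    def is_descendant(lang, node):
        """True if `node` lies on `lang`'s path to the root (or equals it)."""
        while lang is not None:
            if lang == node:
                return True
            lang = PARENT.get(lang)
        return False

    def approximate_phones(lang):
        """Walk up the tree until some documented relatives are found."""
        node = lang
        while node is not None:
            pool = [p for l, p in KNOWN_PHONES.items() if is_descendant(l, node)]
            if pool:
                return set.union(*pool)
            node = PARENT.get(node)
        return set()

    # "Galician" has no audio data here, so its phones are borrowed from
    # its closest documented relative, Portuguese.
    print(approximate_phones("galician"))
    ```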
    “We are trying to remove this audio data requirement, which helps us move from 100 or 200 languages to 2,000,” Li said. “This is the first research to target such a large number of languages, and we’re the first team aiming to expand language tools to this scope.”
    Still in an early stage, the research has improved existing language approximation tools by a modest 5%, but the team hopes it will serve as inspiration not only for their future work but also for that of other researchers.
    For Li, the work means more than making language technologies available to all. It’s about cultural preservation.
    “Each language is a very important factor in its culture. Each language has its own story, and if you don’t try to preserve languages, those stories might be lost,” Li said. “Developing this kind of speech recognition system and this tool is a step to try to preserve those languages.”
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • The optical fiber that keeps data safe even after being twisted or bent

    Optical fibres are the backbone of our modern information networks. From long-range communication over the internet to high-speed information transfer within data centres and stock exchanges, optical fibre remains critical in our globalised world.
    Fibre networks are not, however, structurally perfect, and information transfer can be compromised when things go wrong. To address this problem, physicists at the University of Bath in the UK have developed a new kind of fibre designed to enhance the robustness of networks. This robustness could prove to be especially important in the coming age of quantum networks.
    The team has fabricated optical fibres (the flexible glass channels through which information is sent) that can protect light (the medium through which data is transmitted) using the mathematics of topology. Best of all, these modified fibres are easily scalable, meaning the structure of each fibre can be preserved over thousands of kilometres.
    The Bath study is published in the latest issue of Science Advances.
    Protecting light against disorder
    At its simplest, optical fibre, which typically has a diameter of 125 µm (similar to a thick strand of hair), comprises a core of solid glass surrounded by cladding. Light travels through the core, where it bounces along as though reflecting off a mirror.

    However, the pathway taken by an optical fibre as it crisscrosses the landscape is rarely straight and undisturbed: turns, loops, and bends are the norm. Distortions in the fibre can cause information to degrade as it moves between sender and receiver. “The challenge was to build a network that takes robustness into account,” said Physics PhD student Nathan Roberts, who led the research.
    “Whenever you fabricate a fibre-optic cable, small variations in the physical structure of the fibre are inevitably present. When deployed in a network, the fibre can also get twisted and bent. One way to counter these variations and defects is to ensure the fibre design process includes a real focus on robustness. This is where we found the ideas of topology useful.”
    To design this new fibre, the Bath team used topology, which is the mathematical study of quantities that remain unchanged despite continuous distortions to the geometry. Its principles are already applied to many areas of physics research. By connecting physical phenomena to unchanging numbers, the destructive effects of a disordered environment can be avoided.
    The fibre designed by the Bath team deploys topological ideas by including several light-guiding cores in a fibre, linked together in a spiral. Light can hop between these cores but becomes trapped at the edge thanks to the topological design. These edge states are protected against disorder in the structure.
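    The flavour of this protection can be captured with a standard toy model from topological physics: a chain of coupled cores with alternating coupling strengths hosts a mode pinned to the edge of the chain. The sketch below is an analogy rather than the Bath team’s actual spiral-fibre model; it shows that the edge mode stays localized even when the couplings are randomly disordered.

    ```python
    import numpy as np

    # Illustrative tight-binding toy of topological edge protection (an
    # analogy, not the Bath team's spiral-fibre model). Light amplitudes
    # on a chain of coupled cores with alternating weak/strong couplings
    # host a mode pinned to one edge; random disorder in the couplings
    # shifts the bulk modes but leaves the edge mode localized.

    rng = np.random.default_rng(1)
    N = 51                        # number of cores (odd: exactly one edge mode)
    t_weak, t_strong = 0.5, 1.0   # alternating inter-core couplings

    couplings = np.array([t_weak if i % 2 == 0 else t_strong for i in range(N - 1)])
    couplings += 0.1 * rng.standard_normal(N - 1)       # coupling disorder

    H = np.diag(couplings, 1) + np.diag(couplings, -1)  # hopping Hamiltonian
    vals, vecs = np.linalg.eigh(H)

    edge_mode = vecs[:, np.argmin(np.abs(vals))]        # mode nearest zero energy
    print("weight on first 5 cores:", float(np.sum(edge_mode[:5] ** 2)))
    # ~0.98: the mode stays concentrated at the chain's edge despite disorder.
    ```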
    Bath physicist Dr Anton Souslov, who co-authored the study as theory lead, said: “Using our fibre, light is less influenced by environmental disorder than it would be in an equivalent system lacking topological design.

    “By adopting optical fibres with topological design, researchers will have the tools to pre-empt and forestall signal-degrading effects by building inherently robust photonic systems.”
    Theory meets practical expertise
    Bath physicist Dr Peter Mosley, who co-authored the study as experimental lead, said: “Previously, scientists have applied the complex mathematics of topology to light, but here at the University of Bath we have lots of experience physically making optical fibres, so we put the mathematics together with our expertise to create topological fibre.”
    The team, which also includes PhD student Guido Baardink and Dr Josh Nunn from the Department of Physics, are now looking for industry partners to develop their concept further.
    “We are really keen to help people build robust communication networks and we are ready for the next phase of this work,” said Dr Souslov.
    Mr Roberts added: “We have shown that you can make kilometres of topological fibre wound around a spool. We envision a quantum internet where information will be transmitted robustly across continents using topological principles.”
    He also pointed out that this research has implications that go beyond communications networks. He said: “Fibre development is not only a technological challenge, but also an exciting scientific field in its own right.
    “Understanding how to engineer optical fibre has led to light sources from bright ‘supercontinuum’ that spans the entire visible spectrum right down to quantum light sources that produce individual photons — single particles of light.”
    The future is quantum
    Quantum networks are widely expected to play an important technological role in years to come. Quantum technologies have the capacity to store and process information in more powerful ways than ‘classical’ computers can today, as well as sending messages securely across global networks without any chance of eavesdropping.
    But the quantum states of light that transmit information are easily impacted by their environment, and finding a way to protect them is a major challenge. This work may be a step towards maintaining quantum information in fibre optics using topological design.

  • Scientists use machine learning to fast-track drug formulation development

    Scientists at the University of Toronto have successfully tested the use of machine learning models to guide the design of long-acting injectable drug formulations. The potential for machine learning algorithms to accelerate drug formulation could reduce the time and cost associated with drug development, making promising new medicines available faster.
    The study was published today in Nature Communications and is one of the first to apply machine learning techniques to the design of polymeric long-acting injectable drug formulations.
    The multidisciplinary research is led by Christine Allen from the University of Toronto’s department of pharmaceutical sciences and Alán Aspuru-Guzik from the departments of chemistry and computer science. Both researchers are also members of the Acceleration Consortium, a global initiative that uses artificial intelligence and automation to accelerate the discovery of materials and molecules needed for a sustainable future.
    “This study takes a critical step towards data-driven drug formulation development with an emphasis on long-acting injectables,” said Christine Allen, professor in pharmaceutical sciences at the Leslie Dan Faculty of Pharmacy, University of Toronto. “We’ve seen how machine learning has enabled incredible leap-step advances in the discovery of new molecules that have the potential to become medicines. We are now working to apply the same techniques to help us design better drug formulations and, ultimately, better medicines.”
    Considered one of the most promising therapeutic strategies for the treatment of chronic diseases, long-acting injectables (LAI) are a class of advanced drug delivery systems that are designed to release their cargo over extended periods of time to achieve a prolonged therapeutic effect. This approach can help patients better adhere to their medication regimen, reduce side effects, and increase efficacy when injected close to the site of action in the body. However, achieving the optimal amount of drug release over the desired period of time requires the development and characterization of a wide array of formulation candidates through extensive and time-consuming experiments. This trial-and-error approach has created a significant bottleneck in LAI development compared to more conventional types of drug formulation.
    “AI is transforming the way we do science. It helps accelerate discovery and optimization. This is a perfect example of a ‘Before AI’ and an ‘After AI’ moment and shows how drug delivery can be impacted by this multidisciplinary research,” said Alán Aspuru-Guzik, professor in chemistry and computer science at the University of Toronto, who also holds the CIFAR Artificial Intelligence Research Chair at the Vector Institute in Toronto.
    To investigate whether machine learning tools could accurately predict the rate of drug release, the research team trained and evaluated a series of eleven different models, including multiple linear regression (MLR), random forest (RF), light gradient boosting machine (lightGBM), and neural networks (NN). The data set used to train the selected panel of machine learning models was constructed from previously published studies by the authors and other research groups.
    “Once we had the data set, we split it into two subsets: one used for training the models and one for testing. We then asked the models to predict the results of the test set and directly compared with previous experimental data. We found that the tree-based models, and specifically lightGBM, delivered the most accurate predictions,” said Pauric Bannigan, research associate with the Allen research group at the Leslie Dan Faculty of Pharmacy, University of Toronto.
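    A minimal sketch of that workflow might look like the following, with synthetic data and hypothetical formulation features standing in for the study’s real dataset (the actual data and code are available on Zenodo, as noted below):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from lightgbm import LGBMRegressor

    # Sketch of the model-comparison workflow described above, using
    # synthetic data; the feature names are hypothetical illustrations.
    rng = np.random.default_rng(42)
    n = 400
    X = rng.uniform(size=(n, 3))  # e.g. drug loading, polymer MW, polymer ratio
    y = 100 * (0.4 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 0] * X[:, 2]
               + 0.05 * rng.standard_normal(n))  # synthetic release rate

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "MLR": LinearRegression(),
        "RF": RandomForestRegressor(random_state=0),
        "lightGBM": LGBMRegressor(random_state=0, verbose=-1),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        mae = mean_absolute_error(y_test, model.predict(X_test))
        print(f"{name}: test MAE = {mae:.2f}")

    # Design criteria can then be read off the trained tree model, for
    # example via models["lightGBM"].feature_importances_.
    ```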
    As a next step, the team worked to apply these predictions to the design of new LAIs, using advanced analytical techniques to extract design criteria from the lightGBM model. This allowed them to design a new LAI formulation for a drug currently used to treat ovarian cancer. “Once you have a trained model, you can then work to interpret what the machine has learned and use that to develop design criteria for new systems,” said Bannigan. Once the formulation was prepared, its drug release rate was tested, further validating the predictions made by the lightGBM model. “Sure enough, the formulation had the slow-release rate that we were looking for. This was significant because in the past it might have taken us several iterations to get to a release profile that looked like this; with machine learning we got there in one,” he said.
    The results of the current study are encouraging and signal the potential for machine learning to reduce reliance on the trial-and-error testing that slows the pace of development for long-acting injectables. However, the study’s authors note that the lack of available open-source data sets in pharmaceutical sciences represents a significant challenge to future progress. “When we began this project, we were surprised by the lack of data reported across numerous studies using polymeric microparticles,” said Allen. “This meant the studies and the work that went into them couldn’t be leveraged to develop the machine learning models we need to propel advances in this space. There is a real need to create robust databases in pharmaceutical sciences that are open access and available for all so that we can work together to advance the field.”
    To promote the move toward the accessible databases needed to support the integration of machine learning into pharmaceutical sciences more broadly, Allen and the research team have made their datasets and code available on the open-source platform Zenodo.
    “For this study our goal was to lower the barrier of entry to applying machine learning in pharmaceutical sciences,” said Bannigan. “We’ve made our data sets fully available so others can hopefully build on this work. We want this to be the start of something and not the end of the story for machine learning in drug formulation.”