More stories

  • Improving hospital stays and outcomes for older patients with dementia through AI

    By using artificial intelligence, Houston Methodist researchers are able to predict hospitalization outcomes of geriatric patients with dementia on the first or second day of hospital admission. This early assessment of outcomes means more timely interventions, better care coordination, more judicious resource allocation, focused care management and timely treatment for these more vulnerable, high-risk patients.
    Because geriatric patients with dementia have longer hospital stays and incur higher health care costs than other patients, the team set out to identify modifiable risk factors and develop an artificial intelligence model that, once put into practice, improves patient outcomes, enhances quality of life, and reduces both hospital readmission risk and hospitalization costs.
    The study, appearing online Sept. 29 in Alzheimer’s & Dementia: Translational Research and Clinical Interventions, a journal of the Alzheimer’s Association, examined 10 years of hospital records for 8,407 geriatric patients with dementia across Houston Methodist’s system of eight hospitals. It identified risk factors for poor outcomes among subgroups of patients with different types of dementia, including those associated with Alzheimer’s disease, Parkinson’s disease, vascular disease and Huntington’s disease, among others. From these data, the researchers developed a machine learning model that quickly recognizes the predictive risk factors, and their ranked importance, for undesirable hospitalization outcomes early in these patients’ hospital stays.
    With an accuracy of 95.6%, their model outperformed all other prevalent methods of risk assessment for these multiple types of dementia. The researchers add that none of the other current methods have applied AI to comprehensively predict hospitalization outcomes of elderly patients with dementia in this way, nor do they identify specific risk factors that can be modified by additional clinical procedures or precautions to reduce the risks.
    “The study showed that if we can identify geriatric patients with dementia as soon as they are hospitalized and recognize the significant risk factors, then we can implement some suitable interventions right away,” said Eugene C. Lai, M.D., Ph.D., the Robert W. Hervey Distinguished Endowed Chair for Parkinson’s Research and Treatment in the Stanley H. Appel Department of Neurology. “By mitigating and correcting the modifiable risk factors for undesirable outcomes immediately, we are able to improve outcomes and shorten their hospital stays.”
    Lai, a neurologist, has worked for many years with these patients and wanted to better understand how they are managed and how they behave when hospitalized, so clinicians could improve their care and quality of life. He approached Stephen T.C. Wong, Ph.D., P.E., a bioinformatics expert and Director of the T. T. and W. F. Chao Center for BRAIN at Houston Methodist, with this idea, because he had previously collaborated with Wong and knew his team had access to the large clinical data warehouse of Houston Methodist patients and the ability to use AI to analyze big data.
    Risk factors for each type of dementia were identified, including those amenable to interventions. The top risk factors identified for poor hospitalization outcomes included encephalopathy, number of medical problems at admission, pressure ulcers, urinary tract infections, falls, admission source, age, race and anemia, with several of these overlapping across the dementia subgroups.
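    The release does not specify the model architecture, so the sketch below is only a hedged illustration of the general approach described above: train a classifier on early-admission features and rank the risk factors by importance. The model choice, feature names, and data are hypothetical, not those of the Houston Methodist study.

    ```python
    # Minimal sketch (not the published model): predict a poor hospitalization
    # outcome from early-admission features and rank risk factors by importance.
    # Data and feature names are synthetic, for illustration only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = ["encephalopathy", "num_problems_at_admission", "pressure_ulcer",
                "uti", "fall_history", "age", "anemia"]
    X = rng.random((2000, len(features)))
    # Synthetic label loosely driven by a few of the features.
    y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * X[:, 3]
         + 0.2 * rng.standard_normal(2000) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))

    # Ranked risk factors, analogous to the study's feature-importance analysis.
    for name, importance in sorted(zip(features, model.feature_importances_),
                                   key=lambda t: -t[1]):
        print(f"{name}: {importance:.3f}")
    ```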
    Ultimately, the researchers aim to implement mitigation measures to guide clinical interventions to reduce these negative outcomes. Wong says the emerging strategy of applying powerful AI predictions to trigger the implementation of “smart” clinical paths in hospitals is novel and will not only improve clinical outcomes and patient experiences, but also reduce hospitalization costs.
    “Our next steps will be to implement the validated AI model into a mobile app for the ICU and main hospital staff to alert them to geriatric patients with dementia who are at high risk of poor hospitalization outcomes and to guide them on interventional steps to reduce such risks,” said Wong, the paper’s corresponding author and the John S. Dunn Presidential Distinguished Chair in Biomedical Engineering with the Houston Methodist Research Institute. “We will work with hospital IT to integrate this app seamlessly into EPIC as part of a system-wide implementation for routine clinical use.”
    He said this will follow the same smart clinical pathway strategy the team has been using to integrate two other novel AI apps it developed into the EPIC system for routine clinical use: one guides interventions that reduce the risk of patient falls with injuries, and the other better assesses breast cancer risk to reduce unnecessary biopsies and overdiagnoses.
    Wong and Lai’s collaborators on this study were Xin Wang, Chika F. Ezeana, Lin Wang, Mamta Puppala, Yunjie He, Xiaohui Yu, Zheng Yin and Hong Zhao, all with the T.T. & W.F. Chao Center for BRAIN at the Houston Methodist Academic Institute, and Yan-Siang Huang with the Far Eastern Memorial Hospital in Taiwan.
    This study was supported by grants from the National Institutes of Health (R01AG057635 and R01AG069082), the T.T. and W.F. Chao Foundation, John S. Dunn Research Foundation, Houston Methodist Cornerstone Award and the Paul Richard Jeanneret Research Fund.

  • Neural net computing in water

    Microprocessors in smartphones, computers, and data centers process information by manipulating electrons through solid semiconductors, but our brains have a different system. They rely on the manipulation of ions in liquid to process information.
    Inspired by the brain, researchers have long been seeking to develop ‘ionics’ in an aqueous solution. While ions in water move more slowly than electrons in semiconductors, scientists think the diversity of ionic species with different physical and chemical properties could be harnessed for richer and more diverse information processing.
    Ionic computing, however, is still in its early days. To date, labs have only developed individual ionic devices such as ionic diodes and transistors, but no one has put many such devices together into a more complex circuit for computing — until now.
    A team of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with DNA Script, a biotech startup, has developed an ionic circuit comprising hundreds of ionic transistors and used it to perform a core process of neural net computing.
    The research is published in Advanced Materials.
    The researchers began by building a new type of ionic transistor using a technique they recently pioneered. The transistor consists of an aqueous solution of quinone molecules, interfaced with two concentric ring electrodes surrounding a center disk electrode, like a bullseye. The two ring electrodes electrochemically lower and tune the local pH around the center disk by producing and trapping hydrogen ions. A voltage applied to the center disk causes an electrochemical reaction that generates an ionic current from the disk into the water. The reaction rate can be sped up or slowed down — increasing or decreasing the ionic current — by tuning the local pH. In other words, the pH controls, or gates, the disk’s ionic current in the aqueous solution, creating an ionic counterpart of the electronic transistor.
    They then engineered the pH-gated ionic transistor in such a way that the disk current is an arithmetic multiplication of the disk voltage and a “weight” parameter representing the local pH gating the transistor. They organized these transistors into a 16 × 16 array to expand the analog arithmetic multiplication of individual transistors into an analog matrix multiplication, with the array of local pH values serving as a weight matrix encountered in neural networks.
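    As a rough illustration of the arithmetic the array performs (not of the electrochemistry), the sketch below treats each transistor’s disk current as the product of its disk voltage and a pH-set weight, so that a 16 × 16 array of weights applied to 16 input voltages yields an analog matrix-vector product, the core operation of neural networks. All values are hypothetical.

    ```python
    # Sketch of the arithmetic performed by the 16 x 16 ionic array: each
    # transistor contributes a current ~ weight (set by local pH) * disk voltage,
    # and summing the currents along each row gives a matrix-vector product.
    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.uniform(0.0, 1.0, size=(16, 16))   # pH-programmed weight matrix
    voltages = rng.uniform(-1.0, 1.0, size=16)       # input disk voltages

    currents = weights * voltages        # per-transistor products, shape (16, 16)
    output = currents.sum(axis=1)        # summed currents, one value per row

    # This is exactly an ordinary matrix multiplication.
    assert np.allclose(output, weights @ voltages)
    print(output)
    ```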
    “Matrix multiplication is the most prevalent calculation in neural networks for artificial intelligence,” said Woo-Bin Jung, a postdoctoral fellow at SEAS and the first author of the paper. “Our ionic circuit performs the matrix multiplication in water in an analog manner that is based fully on electrochemical machinery.”
    “Microprocessors manipulate electrons in a digital fashion to perform matrix multiplication,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and the senior author of the paper. “While our ionic circuit cannot be as fast or accurate as the digital microprocessors, the electrochemical matrix multiplication in water is charming in its own right, and has a potential to be energy efficient.”
    Now, the team looks to enrich the chemical complexity of the system.
    “So far, we have used only 3 to 4 ionic species, such as hydrogen and quinone ions, to enable the gating and ionic transport in the aqueous ionic transistor,” said Jung. “It will be very interesting to employ more diverse ionic species and to see how we can exploit them to make rich the contents of information to be processed.”
    The research was co-authored by Han Sae Jung, Jun Wang, Henry Hinton, Maxime Fournier, Adrian Horgan, Xavier Godron, and Robert Nicol. It was supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under grant 2019-19081900002.

  • For the longest time: Quantum computing engineers set new standard in silicon chip performance

    Two milliseconds — or two thousandths of a second — is an extraordinarily long time in the world of quantum computing.
    On these timescales, the blink of an eye — at one-tenth of a second — is like an eternity.
    Now a team of researchers at UNSW Sydney has broken new ground in proving that ‘spin qubits’ — properties of electrons representing the basic units of information in quantum computers — can hold information for up to two milliseconds. This duration, known as ‘coherence time’, is the window during which qubits can be manipulated in increasingly complicated calculations, and the new figure is 100 times longer than previous benchmarks in the same quantum processor.
    “Longer coherence time means you have more time over which your quantum information is stored — which is exactly what you need when doing quantum operations,” says PhD student Ms Amanda Seedhouse, whose work in theoretical quantum computing contributed to the achievement.
    “The coherence time is basically telling you how long you can do all of the operations in whatever algorithm or sequence you want to do before you’ve lost all the information in your qubits.”
    In quantum computing, the more you can keep spins in motion, the better the chance that the information can be maintained during calculations. When spin qubits stop spinning, the calculation collapses and the values represented by each qubit are lost. The concept of extending coherence was already confirmed experimentally by quantum engineers at UNSW in 2016.
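    For a sense of scale, the back-of-the-envelope sketch below shows how a longer coherence time translates into a larger budget of quantum operations. The gate duration used is a hypothetical round number, not a figure from the UNSW work.

    ```python
    # Rough operation budget within a coherence window.
    # The 1-microsecond gate time is an assumed placeholder value.
    coherence_time_s = 2e-3    # two milliseconds, as reported
    gate_time_s = 1e-6         # hypothetical duration of one qubit operation
    print(int(coherence_time_s / gate_time_s), "operations fit in the window")
    # A coherence time 100 times shorter would shrink this budget by the same factor.
    ```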

  • Machine learning helps scientists peer (a second) into the future

    The past may be a fixed and immutable point, but with the help of machine learning, the future can at times be more easily divined.
    Using a new type of machine learning method called next generation reservoir computing, researchers at The Ohio State University have recently found a new way to predict the behavior of spatiotemporal chaotic systems — such as changes in Earth’s weather — that are particularly complex for scientists to forecast.
    The study, published today in the journal Chaos: An Interdisciplinary Journal of Nonlinear Science, utilizes a new and highly efficient algorithm that, when combined with next generation reservoir computing, can learn spatiotemporal chaotic systems in a fraction of the time of other machine learning algorithms.
    Researchers tested their algorithm on a complex problem that has been studied many times in the past — forecasting the behavior of an atmospheric weather model. In comparison to traditional machine learning algorithms that can solve the same tasks, the Ohio State team’s algorithm is more accurate and uses 400 to 1,250 times less training data to make better predictions than its counterparts. Their method is also less computationally expensive; while solving complex computing problems previously required a supercomputer, they used a laptop running Windows 10 to make predictions in a fraction of a second — about 240,000 times faster than traditional machine learning algorithms.
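    The release does not reproduce the algorithm, but next generation reservoir computing is, at its core, a ridge regression on time-delayed observations and their polynomial combinations. The sketch below applies that recipe to the Lorenz system as a stand-in for a chaotic forecasting task; the parameters are illustrative and not those of the Ohio State study.

    ```python
    # Minimal next-generation reservoir computing (NG-RC) sketch on the Lorenz
    # system: linear plus quadratic features of time-delayed states, fit with
    # ridge regression to predict the next state. Illustrative parameters only.
    import numpy as np

    def lorenz_traj(n, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        x = np.array([1.0, 1.0, 1.0])
        out = np.empty((n, 3))
        for i in range(n):
            dx = np.array([s * (x[1] - x[0]),
                           x[0] * (r - x[2]) - x[1],
                           x[0] * x[1] - b * x[2]])
            x = x + dt * dx                     # simple Euler step
            out[i] = x
        return out

    def features(data, k=2):
        # Concatenate k delayed states, then append their quadratic products.
        rows = []
        for t in range(k - 1, len(data)):
            lin = np.concatenate([data[t - j] for j in range(k)])
            quad = np.outer(lin, lin)[np.triu_indices(len(lin))]
            rows.append(np.concatenate(([1.0], lin, quad)))
        return np.array(rows)

    k = 2
    data = lorenz_traj(4000)
    X = features(data[:-1], k)    # features built from states up to time t
    Y = data[k:]                  # target: the state one step ahead
    ridge = 1e-6
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

    pred = X @ W
    print("in-sample one-step RMSE:", np.sqrt(np.mean((pred - Y) ** 2)))
    ```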
    “This is very exciting, as we believe it’s a substantial advance in terms of data processing efficiency and prediction accuracy in the field of machine learning,” said Wendson De Sa Barbosa, lead author and a postdoctoral researcher in physics at Ohio State. He said that learning to predict these extremely chaotic systems is a “physics grand challenge,” and understanding them could pave the way to new scientific discoveries and breakthroughs.
    “Modern machine learning algorithms are especially well-suited for predicting dynamical systems by learning their underlying physical rules using historical data,” said De Sa Barbosa. “Once you have enough data and computational power, you can make predictions with machine learning models about any real-world complex system.” Such systems can include any physical process, from the bob of a clock’s pendulum to disruptions in power grids.
    Even heart cells display chaotic spatial patterns when they oscillate at an abnormally high frequency compared with a normal heartbeat, said De Sa Barbosa. That means this research could one day be used to provide better insight into controlling and interpreting heart disease, as well as a bevy of other “real-world” problems.
    “If one knows the equations that accurately describe how these unique processes for a system will evolve, then its behavior could be reproduced and predicted,” he said. Simple movements, like the swing of a clock’s pendulum, can be predicted easily using only its current position and velocity. Yet more complex systems, like Earth’s weather, are far more difficult to foresee because of how many variables actively dictate their chaotic behavior.
    To make precise predictions of the entire system, scientists would have to have accurate information about every single one of these variables, and the model equations that describe how they are related, which is altogether impossible, said De Sa Barbosa. But with the team’s machine learning algorithm, the nearly 500,000 historical training data points that previous works required for the atmospheric weather example in this study could be reduced to only 400, while still achieving the same or better accuracy.
    Going forward, De Sa Barbosa said he aims to further the research by using the algorithm to potentially speed up spatiotemporal simulations.
    “We live in a world that we still know so little about, so it’s important to recognize these high-dynamical systems and learn how to more efficiently predict them.”
    The co-author of the study was Daniel J. Gauthier, a professor of physics at Ohio State. Their work was supported by the Air Force Office of Scientific Research.
    Story Source:
    Materials provided by Ohio State University. Original written by Tatyana Woodall.

  • Full control of a six-qubit quantum processor in silicon

    Researchers at QuTech — a collaboration between the Delft University of Technology and TNO — have engineered a record number of six silicon-based spin qubits in a fully interoperable array. Importantly, the qubits can be operated with a low error rate, achieved through a new chip design, an automated calibration procedure, and new methods for qubit initialization and readout. These advances will contribute to a scalable quantum computer based on silicon. The results are published in Nature today.
    Different materials can be used to produce qubits, the quantum analogue to the bit of the classical computer, but no one knows which material will turn out to be best to build a large-scale quantum computer. To date, there have only been smaller demonstrations of silicon quantum chips with high-quality qubit operations. Now, researchers from QuTech, led by Prof. Lieven Vandersypen, have produced a six-qubit chip in silicon that operates with low error rates. This is a major step towards a fault-tolerant quantum computer using silicon.
    To make the qubits, individual electrons are placed in a linear array of six ‘quantum dots’ spaced 90 nanometers apart. The array of quantum dots is made in a silicon chip with structures that closely resemble the transistor — a common component in every computer chip. A quantum mechanical property called spin is used to define a qubit, with its orientation defining the 0 or 1 logical state. The team used finely tuned microwave radiation, magnetic fields, and electric potentials to control and measure the spin of individual electrons and make them interact with each other.
    “The quantum computing challenge today consists of two parts,” explained first author Mr. Stephan Philips. “Developing qubits that are of good enough quality, and developing an architecture that allows one to build large systems of qubits. Our work fits into both categories. And since the overall goal of building a quantum computer is an enormous effort, I think it is fair to say we have made a contribution in the right direction.”
    The electron’s spin is a delicate property. Tiny changes in the electromagnetic environment cause the direction of spin to fluctuate, and this increases the error rate. The QuTech team built upon their previous experience engineering quantum dots with new methods for preparing, controlling, and reading the spin states of electrons. Using this new arrangement of qubits they could create logic gates and entangle systems of two or three electrons, on demand.
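    As a generic illustration of what entangling two spins on demand means at the level of gate operations (this is not a model of the QuTech pulse sequences or hardware), the sketch below rotates two spin qubits into superposition, applies a controlled-Z gate (a common native two-qubit operation for exchange-coupled spins), and verifies that the result is entangled.

    ```python
    # Generic two-qubit entanglement sketch (not the QuTech control stack):
    # rotate both spins into superposition, apply a controlled-Z gate, and
    # check entanglement via the purity of one qubit's reduced state.
    import numpy as np

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # single-qubit rotation
    CZ = np.diag([1.0, 1.0, 1.0, -1.0])                    # two-qubit phase gate

    state = np.zeros(4)
    state[0] = 1.0                       # start in |00>
    state = np.kron(H, H) @ state        # put both spins in superposition
    state = CZ @ state                   # entangling interaction

    # Reduced density matrix of qubit 0; purity 0.5 means maximal entanglement.
    psi = state.reshape(2, 2)
    rho0 = psi @ psi.conj().T
    print("purity of qubit 0:", np.trace(rho0 @ rho0).real)
    ```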
    Quantum arrays with over 50 qubits have been produced using superconducting qubits. It is the global availability of silicon engineering infrastructure, however, that gives silicon quantum devices the promise of easier migration from research to industry. Silicon brings certain engineering challenges, and until this work from the QuTech team, only arrays of up to three qubits could be engineered in silicon without sacrificing quality.
    “This paper shows that with careful engineering, it is possible to increase the silicon spin qubit count while keeping the same precision as for single qubits. The key building block developed in this research could be used to add even more qubits in the next iterations of study,” said co-author Dr. Mateusz Madzik.
    “In this research we push the envelope of the number of qubits in silicon, and achieve high initialization fidelities, high readout fidelities, high single-qubit gate fidelities, and high two-qubit state fidelities,” said Prof. Vandersypen. “What really stands out though is that we demonstrate all these characteristics together in one single experiment on a record number of qubits.”
    Story Source:
    Materials provided by Delft University of Technology.

  • New algorithm for reconstructing particles at the Large Hadron Collider

    A team of researchers from CERN, Massachusetts Institute of Technology, and Staffordshire University have implemented a ground-breaking algorithm for reconstructing particles at the Large Hadron Collider.
    The Large Hadron Collider (LHC) is the most powerful particle accelerator ever built. It sits in a tunnel 100 metres underground at CERN, the European Organisation for Nuclear Research, near Geneva in Switzerland, and is the site of long-running experiments which enable physicists worldwide to learn more about the nature of the Universe.
    The project is part of the Compact Muon Solenoid (CMS) experiment — one of seven experiments installed at the LHC that use detectors to analyse the particles produced by collisions in the accelerator.
    The subject of a new academic paper, ‘End-to-end multiple-particle reconstruction in high occupancy imaging calorimeters with graph neural networks’, published in the European Physical Journal C, the project has been carried out ahead of the high-luminosity upgrade of the Large Hadron Collider. The High Luminosity Large Hadron Collider (HL-LHC) project aims to crank up the performance of the LHC in order to increase the potential for discoveries after 2029. The HL-LHC will increase the number of proton-proton interactions in an event from 40 to 200.
    Professor Raheel Nawaz, Pro Vice-Chancellor for Digital Transformation at Staffordshire University, supervised the research. He explained: “Limiting the increase of computing resource consumption at large pileups is a necessary step for the success of the HL-LHC physics programme, and we are advocating the use of modern machine learning techniques to perform particle reconstruction as a possible solution to this problem.”
    He added: “This project has been both a joy and a privilege to work on and is likely to dictate the future direction of research on particle reconstruction by using a more advanced AI-based solution.”
    Dr Jan Kieseler from the Experimental Physics Department at CERN added: “This is the first single-shot reconstruction of about 1,000 particles from and in an unprecedentedly challenging environment with 200 simultaneous interactions in each proton-proton collision. Showing that this novel approach, combining dedicated graph neural network layers (GravNet) and training methods (Object Condensation), can be extended to such challenging tasks while staying within resource constraints represents an important milestone towards future particle reconstruction.”
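    GravNet and the Object Condensation training method are described in separate publications; the sketch below is a much-simplified, untrained stand-in that captures the core idea of a GravNet-style layer: project calorimeter hits into a learned coordinate space, find each hit’s nearest neighbours there, and aggregate neighbour features with a distance-based weight. Shapes and parameters are illustrative, not those of the CMS implementation.

    ```python
    # Simplified GravNet-style message passing over calorimeter hits
    # (illustrative and untrained, not the CMS code): hits are embedded in a
    # learned space, k nearest neighbours are found there, and their features
    # are aggregated with a Gaussian distance weighting.
    import numpy as np

    rng = np.random.default_rng(2)
    n_hits, n_feat, n_space, k = 200, 8, 4, 10

    hits = rng.standard_normal((n_hits, n_feat))             # input hit features
    W_space = 0.1 * rng.standard_normal((n_feat, n_space))   # "learned" projection
    W_feat = 0.1 * rng.standard_normal((n_feat, n_feat))     # "learned" projection

    coords = hits @ W_space          # coordinates in the learned space
    feats = hits @ W_feat            # features to be exchanged between hits

    # Pairwise squared distances in the learned space; exclude self-neighbours.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]                     # k nearest neighbours

    # Distance-weighted aggregation of neighbour features (mean and max).
    w = np.exp(-d2[np.arange(n_hits)[:, None], nbrs])        # Gaussian potential
    msgs = feats[nbrs] * w[..., None]
    out = np.concatenate([hits, msgs.mean(axis=1), msgs.max(axis=1)], axis=1)
    print(out.shape)   # per-hit output features, passed to the next layer
    ```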
    Story Source:
    Materials provided by Staffordshire University.

  • Active matter, curved spaces: Mini robots learn to 'swim' on stretchy surfaces

    When self-propelling objects interact with each other, interesting phenomena can occur. Birds align with each other when they flock together. People at a concert spontaneously create vortices when they nudge and bump into each other. Fire ants work together to create rafts that float on the water’s surface.
    While many of these interactions happen through direct contact, like the concert-goers’ nudging, some interactions can be transmitted through the material the objects are on or in — these are known as indirect interactions. For example, a bridge with pedestrians on it can transmit vibrations, as in the famous ‘wobbly bridge’ episode of London’s Millennium Bridge.
    While the results of direct interactions (like nudging) are of increasing interest, and the results of indirect interactions through mechanisms like vision are well-studied, researchers are still learning about indirect mechanical interactions — for example, how two rolling balls might influence each other’s movement on a trampoline by indenting its surface with their weight, exerting mechanical forces on each other without touching.
    Physicists are using small wheeled robots to better understand these indirect mechanical interactions, how they play a role in active matter, and how we can control them. Their findings, “Field-mediated locomotor dynamics on highly deformable surfaces,” were recently published in the Proceedings of the National Academy of Sciences (PNAS).
    In the paper, led by Shengkai Li, a former Ph.D. student in the School of Physics at Georgia Tech and now a Center for the Physics of Biological Function (CPBF) fellow at Princeton University, the researchers showed that active matter on deformable surfaces can interact through non-contact forces — and then created a model that allows the collective behavior of moving objects on deformable surfaces to be controlled through simple changes in the engineering of the robots.
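    The published model is more detailed, but the toy sketch below captures the basic mechanism of field-mediated interaction: each agent dents an elastic sheet with a Gaussian well, and every other agent feels a force down the gradient of that deformation, so the agents attract without ever touching. The constants are made up for illustration and are not taken from the paper.

    ```python
    # Toy model of indirect mechanical interaction through a deformable surface:
    # each agent creates a Gaussian depression, and the others drift down the
    # gradient of the combined depression field. Constants are arbitrary.
    import numpy as np

    def force_on(i, pos, depth=1.0, width=1.0):
        """Force on agent i from the dents made by all other agents."""
        f = np.zeros(2)
        for j, p in enumerate(pos):
            if j == i:
                continue
            r = pos[i] - p
            # Force is minus the gradient of the well h(r) = -depth * exp(-|r|^2 / (2 width^2)),
            # so it points toward the other agent (attraction).
            f += -depth * np.exp(-(r @ r) / (2 * width**2)) * r / width**2
        return f

    pos = np.array([[-1.0, 0.0], [1.0, 0.3]])    # two agents on the sheet
    dt = 0.05
    for _ in range(200):                         # overdamped dynamics
        forces = np.array([force_on(i, pos) for i in range(len(pos))])
        pos = pos + dt * forces

    print(pos)   # the agents have drifted toward each other without contact
    ```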
    Co-authors include Daniel Goldman, Dunn Family Professor; Gongjie Li, assistant professor; and graduate student Hussain Gynai, all of the Georgia Tech School of Physics; along with Pablo Laguna and Gabriella Small (University of Texas at Austin); Yasemin Ozkan-Aydin (University of Notre Dame); Jennifer Rieser (Emory University); and Charles Xiao (University of California, Santa Barbara).

  • Robotic drug capsule can deliver drugs to gut

    One reason that it’s so difficult to deliver large protein drugs orally is that these drugs can’t pass through the mucus barrier that lines the digestive tract. This means that insulin and most other “biologic drugs” — drugs consisting of proteins or nucleic acids — have to be injected or administered in a hospital.
    A new drug capsule developed at MIT may one day be able to replace those injections. The capsule has a robotic cap that spins and tunnels through the mucus barrier when it reaches the small intestine, allowing drugs carried by the capsule to pass into cells lining the intestine.
    “By displacing the mucus, we can maximize the dispersion of the drug within a local area and enhance the absorption of both small molecules and macromolecules,” says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.
    In a study appearing today in Science Robotics, the researchers demonstrated that they could use this approach to deliver insulin as well as vancomycin, an antibiotic peptide that currently has to be injected.
    Shriya Srinivasan, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research and a junior fellow at the Society of Fellows at Harvard University, is the lead author of the study.
    Tunneling through
    For several years, Traverso’s lab has been developing strategies to deliver protein drugs such as insulin orally. This is a difficult task because protein drugs tend to be broken down in the acidic environment of the digestive tract, and they also have difficulty penetrating the mucus barrier that lines the tract.