More stories

  • Quantum physicists simulate super diffusion on a quantum computer

    Trinity’s quantum physicists, in collaboration with IBM Dublin, have successfully simulated super diffusion in a system of interacting quantum particles on a quantum computer.
    This is the first step towards performing highly challenging quantum transport calculations on quantum hardware and, as the hardware improves over time, such work promises to shed new light on condensed matter physics and materials science.
    The work is one of the first outputs of the recently established TCD-IBM predoctoral scholarship programme, in which IBM hires PhD students as employees who are co-supervised at Trinity. The paper was published recently in npj Quantum Information, a leading Nature Portfolio journal.
    IBM is a global leader in the field of quantum computation. The early-stage quantum computer used in this study consists of 27 superconducting qubits (qubits are the building blocks of quantum logic); it is physically located in IBM’s lab in Yorktown Heights, New York, and was programmed remotely from Dublin.
    Quantum computing is one of the most exciting emerging technologies and is expected to edge closer to commercial applications over the next decade. Commercial applications aside, there are fascinating fundamental questions that quantum computers can help answer. The team at Trinity and IBM Dublin tackled one such question concerning quantum simulation.
    Trinity’s Professor John Goold, Director of the newly established Trinity Quantum Alliance, who led the research, explains the significance of the work and the idea of quantum simulation in general:
    “Generally speaking the problem of simulating the dynamics of a complex quantum system with many interacting constituents is a formidable challenge for conventional computers. Consider the 27 qubits on this particular device. In quantum mechanics the state of such a system is described mathematically by an object called a wave function. In order to use a standard computer to describe this object you require a huge number of coefficients to be stored in memory and the demands scale exponentially with the number of qubits; roughly 134 million coefficients, in the case of this simulation. More
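    As a rough illustration of the scaling described in the quote (a back-of-the-envelope sketch, not part of the study), the Python snippet below counts the amplitudes needed to store an n-qubit state vector on a classical machine; the 16 bytes per amplitude assumes double-precision complex numbers.

    ```python
    def state_vector_cost(num_qubits: int, bytes_per_amplitude: int = 16):
        """Amplitude count and memory footprint (GiB) of an n-qubit state vector."""
        amplitudes = 2 ** num_qubits
        gib = amplitudes * bytes_per_amplitude / 2**30
        return amplitudes, gib

    for n in (27, 40, 53):
        amps, gib = state_vector_cost(n)
        print(f"{n:>2} qubits: {amps:,} amplitudes, about {gib:,.0f} GiB")

    # 27 qubits -> 134,217,728 amplitudes (the "roughly 134 million" in the quote),
    # about 2 GiB; 53 qubits would already need roughly 128 PiB, which is why
    # running such dynamics directly on quantum hardware is attractive.
    ```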

  • Scientists trap light inside a magnet

    A new study led by Vinod M. Menon and his group at the City College of New York shows that trapping light inside magnetic materials may dramatically enhance their intrinsic properties. Strong optical responses of magnets are important for the development of magnetic lasers and magneto-optical memory devices, as well as for emerging quantum transduction applications.
    In their new article in Nature, Menon and his team report the properties of a layered magnet that hosts strongly bound excitons — quasiparticles with particularly strong optical interactions. Because of that, the material is capable of trapping light — all by itself. As their experiments show, the optical responses of this material to magnetic phenomena are orders of magnitude stronger than those in typical magnets. “Since the light bounces back and forth inside the magnet, interactions are genuinely enhanced,” said Dr. Florian Dirnberger, the lead author of the study. “To give an example, when we apply an external magnetic field, the near-infrared reflection of light is altered so much that the material basically changes its color. That’s a pretty strong magneto-optic response.”
    “Ordinarily, light does not respond so strongly to magnetism,” said Menon. “This is why technological applications based on magneto-optic effects often require the implementation of sensitive optical detection schemes.”
    On how the advances can benefit ordinary people, study co-author Jiamin Quan said: “Technological applications of magnetic materials today are mostly related to magneto-electric phenomena. Given such strong interactions between magnetism and light, we can now hope to one day create magnetic lasers and may reconsider old concepts of optically controlled magnetic memory.” Rezlind Bushati, a graduate student in the Menon group, also contributed to the experimental work.
    The study, conducted in close collaboration with Andrea Alù and his group at the CUNY Advanced Science Research Center, is the result of a major international effort. Experiments conducted at CCNY and the ASRC were complemented by measurements taken at the University of Washington in the group of Prof. Xiaodong Xu by Dr. Geoffrey Diederich. Theoretical support was provided by Dr. Akashdeep Kamra and Prof. Francisco J. Garcia-Vidal from the Universidad Autónoma de Madrid and Dr. Matthias Florian from the University of Michigan. The materials were grown by Prof. Zdenek Sofer and Kseniia Mosina at UCT Prague, and the project was further supported by Dr. Julian Klein at MIT. The work at CCNY was supported by the US Air Force Office of Scientific Research, the National Science Foundation (NSF) Division of Materials Research, the NSF CREST IDEALS center, DARPA and the German Research Foundation. More

  • Magnonic computing: Faster spin waves could make novel computing systems possible

    Research is underway around the world to find alternatives to our current electronic computing technology, as electron-based systems have limitations. A new way of transmitting information is emerging from the field of magnonics: instead of electron exchange, the waves generated in magnetic media could be used for transmission, but magnonics-based computing has so far been too slow. Scientists at the University of Vienna have now discovered a significant new method: when the intensity is increased, the spin waves become shorter and faster — another step towards magnon computing. The results were published in the journal Science Advances.
    Magnonics is a relatively new field of research in magnetism in which spin waves play a central role: a local disturbance in the magnetic order of a magnet can propagate as a wave through the material. These waves are called spin waves, and the associated quasiparticles are called magnons. They carry information in the form of angular-momentum pulses, a property that makes them suitable as low-power data carriers in smaller, more energy-efficient computers of the future. The main challenge in magnonics is the wavelength: the longer it is, the slower magnon-based data-processing units are. Until now, the wavelength could only be shortened with very complex hybrid structures or a synchrotron. The research group “Nanomagnetism and Magnonics” at the University of Vienna, together with colleagues from Germany, the Czech Republic, Ukraine and China, has developed a simpler alternative. First author Qi Wang made the crucial observation after months of work in the Brillouin light scattering spectroscopy laboratory at the University of Vienna’s Faculty of Physics: if you increase the intensity, the spin waves become shorter and faster — a breakthrough method for magnonic computing.
    Co-author of the study and leader of the Vienna NanoMag team, Andrii Chumak, explains the discovery with a metaphor: “It is helpful to imagine the method with light. If you change the wavelength of light, its color changes. But if you change the intensity, only the luminosity changes. In this case, we found a way to change the color by changing the intensity of the spin waves. This phenomenon allowed us to excite much shorter and much better spin waves,” Chumak said.
    The current wavelength found with this system is about 200 nanometers. According to numerical simulations, it would be possible to excite even smaller wavelengths, but at this stage it is very difficult to excite or measure these orders of magnitude.
    The amplitudes of the spin waves are also crucial for future magnetic integrated circuits. The discovered system exhibits a self-locking nonlinear shift, which means that the amplitude of the excited spin waves is constant. This property is very relevant for integrated circuits, as it allows different magnetic elements to work together with the same amplitude. This, in turn, is fundamental to the construction of more complex systems and to the realization of the distant goal of a magnon-based computer. The ultimate goal, a fully functional magnon computer, has not yet been achieved. Nevertheless, this solid milestone brings researchers a good deal closer to their goal. More
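    One way to picture this mechanism (a standard textbook description of nonlinear spin waves, offered here as an illustration rather than taken from the paper) is an amplitude-dependent dispersion relation:

    ```latex
    % Illustrative nonlinear spin-wave dispersion: the frequency picks up a
    % shift proportional to the squared amplitude |a|^2, with coefficient T.
    \[
      \omega(k, a) = \omega_0(k) + T\,|a|^2 .
    \]
    % Driving the magnet at a fixed frequency \omega_{\mathrm{drive}} then forces
    % the wavevector k to satisfy
    \[
      \omega_0(k) = \omega_{\mathrm{drive}} - T\,|a|^2 ,
    \]
    % so raising the amplitude |a| shifts the excited wave to a different k.
    % For the sign of T and the slope of \omega_0(k) relevant here, that means a
    % larger k: a shorter and, in the exchange-dominated regime, faster spin wave.
    ```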

  • Switching ‘spin’ on and off (and up and down) in quantum materials at room temperature

    Researchers have found a way to control the interaction of light and quantum ‘spin’ in organic semiconductors that works even at room temperature.
    Spin is the term for the intrinsic angular momentum of electrons, which is referred to as up or down. Using the up/down spin states of electrons instead of the 0 and 1 in conventional computer logic could transform the way in which computers process information. And sensors based on quantum principles could vastly improve our abilities to measure and study the world around us.
    An international team of researchers, led by the University of Cambridge, has found a way to use particles of light as a ‘switch’ that can connect and control the spin of electrons, making them behave like tiny magnets that could be used for quantum applications.
    The researchers designed modular molecular units connected by tiny ‘bridges’. Shining a light on these bridges allowed electrons on opposite ends of the structure to connect to each other by aligning their spin states. Even after the bridge was removed, the electrons stayed connected through their aligned spins.
    This level of control over quantum properties can normally only be achieved at ultra-low temperatures. However, the Cambridge-led team has been able to control the quantum behaviour of these materials at room temperature, which opens up a new world of potential quantum applications by reliably coupling spins to photons. The results are reported in the journal Nature.
    Almost all types of quantum technology — based on the strange behaviour of particles at the subatomic level — involve spin. As they move, electrons usually form stable pairs, with one electron spin up and one spin down. However, it is possible to make molecules with unpaired electrons, called radicals. Most radicals are very reactive, but with careful design of the molecule, they can be made chemically stable.
    “These unpaired spins change the rules for what happens when a photon is absorbed and electrons are moved up to a higher energy level,” said first author Sebastian Gorgon, from Cambridge’s Cavendish Laboratory. “We’ve been working with systems where there is one net spin, which makes them good for light emission and making LEDs.”
    Gorgon is a member of Professor Sir Richard Friend’s research group, where they have been studying radicals in organic semiconductors for light generation, and identified a stable and bright family of materials a few years ago. These materials can beat the best conventional OLEDs for red light generation. More

  • AI models are powerful, but are they biologically plausible?

    Artificial neural networks, ubiquitous machine-learning models that can be trained to complete many tasks, are so called because their architecture is inspired by the way biological neurons process information in the human brain.
    About six years ago, scientists discovered a new type of more powerful neural network model known as a transformer. These models achieve unprecedented performance, for instance by generating text from prompts with near-human-like accuracy. A transformer underlies AI systems such as ChatGPT and Bard. While incredibly effective, transformers are also mysterious: unlike other brain-inspired neural network models, it hasn’t been clear how to build them using biological components.
    Now, researchers from MIT, the MIT-IBM Watson AI Lab, and Harvard Medical School have produced a hypothesis that may explain how a transformer could be built using biological elements in the brain. They suggest that a biological network composed of neurons and other brain cells called astrocytes could perform the same core computation as a transformer.
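    The “core computation” in question is self-attention, in which every element of a sequence weighs every other element when forming its new representation. The minimal NumPy sketch below shows that standard operation for reference; it illustrates ordinary scaled dot-product attention, not the authors’ neuron-astrocyte model.

    ```python
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Single-head scaled dot-product attention over token vectors X.

        X: (seq_len, d_model) inputs; Wq, Wk, Wv: (d_model, d_head) projections.
        """
        Q, K, V = X @ Wq, X @ Wk, X @ Wv                # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
        return weights @ V                              # each output mixes all inputs

    # Toy usage with random data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))                         # 5 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)          # (5, 4)
    ```

    The hypothesis concerns how neurons and astrocytes could implement this kind of all-to-all mixing biologically; the sketch only shows the abstract computation itself.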
    Recent research has shown that astrocytes, non-neuronal cells that are abundant in the brain, communicate with neurons and play a role in some physiological processes, like regulating blood flow. But scientists still lack a clear understanding of what these cells do computationally.
    With the new study, published this week in open-access format in the Proceedings of the National Academy of Sciences, the researchers explored the role astrocytes play in the brain from a computational perspective, and crafted a mathematical model that shows how they could be used, along with neurons, to build a biologically plausible transformer.
    Their hypothesis provides insights that could spark future neuroscience research into how the human brain works. At the same time, it could help machine-learning researchers explain why transformers are so successful across a diverse set of complex tasks.
    “The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works. There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience,” says Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab and senior author of the research paper. More

  • Researchers use mathematical modeling and dynamic biomarkers to characterize metastatic disease during adaptive therapy

    Most cancer deaths are due to the metastatic spread and growth of tumor cells at distant sites. Identifying appropriate treatments for patients with metastatic disease is challenging because of limited biomarkers and detection capabilities and poor characterization of metastatic tumors. In a new study published in the journal Cancer Research and featured on its cover, Moffitt Cancer Center researchers demonstrate how mathematical modeling combined with dynamic biomarkers can be used to characterize metastatic disease and identify appropriate therapeutic approaches to improve patient outcomes.
    Metastatic tumors can vary in size, location and composition. Metastases also vary within an individual patient, making treatment decisions challenging. Physicians use biomarkers collected from blood specimens, tissue biopsies and images to identify appropriate treatment strategies. However, these techniques are limited by single timepoints, poor resolution of smaller lesions and an inability to provide information on individual metastases. Scientists are investigating the potential of dynamic biomarkers to overcome the limitations of standard biomarker approaches.
    “Dynamic markers base prognosis not on the absolute value of measurement at a single time point, but on the relative change over time. For example, PSA doubling time may stratify patients with prostate cancer who are likely to respond better to chemotherapy, progress to metastatic disease or die from the disease,” said Jill Gallaher, Ph.D., a research scientist in the Department of Integrated Mathematical Oncology at Moffitt.
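    For reference, PSA doubling time, the dynamic marker Gallaher mentions, is conventionally estimated by assuming exponential growth between measurements; a minimal sketch with hypothetical numbers (clinical calculators typically fit a regression over several measurements):

    ```python
    import math

    def psa_doubling_time(psa_1, psa_2, delta_t_days):
        """Days for PSA to double, assuming exponential growth between two measurements."""
        if psa_2 <= psa_1:
            raise ValueError("PSA is not rising; doubling time is undefined.")
        growth_rate = math.log(psa_2 / psa_1) / delta_t_days   # per-day log growth
        return math.log(2) / growth_rate

    # Hypothetical example: PSA rising from 4.0 to 6.0 ng/mL over 90 days
    print(round(psa_doubling_time(4.0, 6.0, 90), 1))  # ~153.9 days
    ```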
    Moffitt researchers used mathematical modeling and dynamic biomarkers to identify the characteristics of metastatic disease that are associated with better treatment outcomes. They focused their analysis on prostate-specific antigen (PSA), a biomarker commonly used in the diagnosis and treatment of patients with prostate cancer. The researchers performed their study with data from 16 patients treated in a clinical trial of adaptive therapy, during which patients are given breaks from treatment based on changes in their PSA levels. This approach is designed to prevent the development and growth of drug-resistant tumors that cause treatment failure.
    The researchers assessed dynamic PSA biomarkers during the first cycle of adaptive therapy, including the time it takes to reduce the PSA burden by 50% while treatment is on and the time it takes for PSA to regrow to its original value while treatment is off. They identified several key relationships between metastatic disease and these biomarkers: larger metastases had longer treatment cycles; metastases with a higher proportion of drug-resistant cells slowed the cycle; and metastases with faster cell turnover responded to treatment more quickly and regrew more slowly. They also related PSA dynamics to clinical variables, including Gleason score (a grading system that describes how abnormal the prostate cancer cells look and how likely the cancer is to advance), the change in the number of metastases during a cycle, and the total number of cycles over the course of treatment.
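    A minimal sketch of the two first-cycle metrics described above, computed from a hypothetical PSA series (the function, thresholds and numbers are illustrative and not taken from the study):

    ```python
    def first_cycle_metrics(times, psa):
        """Return (days to 50% PSA reduction on treatment,
        days to regrow to baseline once treatment is paused).
        Uses the first crossing of each threshold; no model fitting."""
        baseline = psa[0]
        # treatment is paused once PSA first falls to half its starting value
        i_half = next(i for i, p in enumerate(psa) if p <= 0.5 * baseline)
        t_half = times[i_half]
        # time for PSA to climb back to baseline with treatment off
        t_back = next(t for t, p in zip(times[i_half:], psa[i_half:]) if p >= baseline)
        return t_half, t_back - t_half

    # Hypothetical cycle: PSA falls on treatment, then rebounds off treatment
    times = [0, 30, 60, 90, 120, 150, 180, 210]          # days
    psa   = [10.0, 7.0, 4.5, 3.0, 4.0, 6.5, 8.5, 10.5]   # ng/mL
    print(first_cycle_metrics(times, psa))  # (60, 150)
    ```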
    The team performed additional modeling to compare adaptive therapy to continuous therapy during which no treatment breaks are given. They discovered that differences between metastatic tumor compositions favored continuous treatment, and differences within metastatic tumor compositions favored adaptive schedules. These observations suggest that it may be possible to use mathematical modeling approaches combined with dynamic and standard biomarkers to improve the characterization of patients’ disease and help identify appropriate treatment options.
    “Multiscale mathematical models, such as the one proposed here, can help disentangle the multidimensional nature of cancer and lead to a better understanding of the drivers of treatment success and failure. While this work is only a first step, it shows that model-informed analysis of biomarkers and visible metastases during a single cycle of adaptive therapy may be useful in identifying important features of the metastatic population to provide further support for treatment decision-making,” explained Alexander Anderson, Ph.D., chair of the Department of Integrated Mathematical Oncology at Moffitt.
    This study was supported by the National Cancer Institute (U01CA232382) and the Moffitt Center of Excellence for Evolutionary Therapy. More

  • AI method uses transformer models to study human cells

    Researchers in Carnegie Mellon University’s School of Computer Science have developed a method that uses artificial intelligence to augment how cells are studied and could help scientists better understand and eventually treat disease.
    Images of organ or tissue samples contain millions of cells. And while analyzing these cells in situ is an important part of biological research, such images make it nearly impossible to identify individual cells, determine their function and understand their organization. A technique called spatial transcriptomics brings these cells into focus by combining imaging with the ability to quantify the level of genes in each cell — giving researchers the ability to study in detail several key biological mechanisms, ranging from how immune cells fight cancer to the cellular impact of drugs and aging.
    Many current spatial transcriptomics platforms still lack the resolution required for closer, more detailed analysis. These technologies often group cells in clusters that range from several to 50 cells for each measurement, a resolution that may be sufficient for well-represented large cells but that is problematic for small cells or ones that aren’t well represented. These rare cells may be the most critical for the disease or condition being studied.
    In a new paper published in Nature Methods, Computational Biology Department researchers Hao Chen, Dongshunyi Li and Ziv Bar-Joseph unveiled a method that uses artificial intelligence to augment the latest spatial transcriptomics technologies.
    The CMU research focuses on more recent technologies that produce images at a much closer scale, allowing for subcellular resolution (or multiple measurements per cell). While these techniques solve the resolution issue, they present new challenges because the resulting images are so close-up that rather than capturing 15 to 50 cells per image, they capture only a few genes. This reversal of the previous problem creates difficulties in identifying the individual components and determining how to group these measurements to learn about specific cells. It also obscures the big picture.
    The algorithm developed by the CBD researchers, called subcellular spatial transcriptomics cell segmentation (SCS), harnesses AI and advanced deep neural networks to adaptively identify cells and their constituent parts. SCS uses transformer models, similar to those used by large language models like ChatGPT, to gather information from the area surrounding each measurement. Just as ChatGPT uses the entire context of a sentence or paragraph for word completion, the SCS method fills in missing information for a specific measurement by incorporating information from the cells around it.
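    A rough sketch of that idea (not the authors’ implementation; the function name, sizes and toy data are illustrative assumptions): for each subcellular spot, gather its spatial neighbors as the “context” a transformer-style model would attend over.

    ```python
    import numpy as np

    def neighborhood_context(coords, features, k=16):
        """Collect, for every subcellular spot, its k nearest spots as a context
        sequence. SCS itself feeds such neighborhoods through trained transformer
        models to decide how spots should be grouped into cells."""
        contexts = []
        for xy in coords:
            dists = np.linalg.norm(coords - xy, axis=1)
            neighbors = np.argsort(dists)[1:k + 1]       # skip the spot itself
            contexts.append(features[neighbors])         # (k, n_genes) per spot
        return np.stack(contexts)                        # (n_spots, k, n_genes)

    # Toy data: 200 spots with 2D coordinates and sparse per-spot gene counts
    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 100, size=(200, 2))
    features = rng.poisson(0.2, size=(200, 50)).astype(float)
    print(neighborhood_context(coords, features).shape)  # (200, 16, 50)
    ```

    The attention step then plays the role of the sentence-level context in the ChatGPT analogy: each spot’s representation is completed using the measurements around it.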
    When applied to images of brain and liver samples with hundreds of thousands of cells, SCS accurately identified the exact location and type of each cell. SCS also identified several cells missed by current analysis approaches, such as rare and small cells that may play a crucial role in specific diseases or processes, including aging. SCS also provided information on location of molecules within cells, greatly improving the resolution at which researchers can study cellular organization.
    “The ability to use the most recent advances in AI to aid the study of the human body opens the door to several downstream applications of spatial transcriptomics to improve human health,” said Ziv Bar-Joseph, the FORE Systems Professor of Machine Learning and Computational Biology at CMU. Such downstream applications are already being investigated by several large consortiums, including the Human BioMolecular Atlas Program (HuBMAP), that are using spatial transcriptomics to create a detailed, 3D map of the human body.
    “By integrating state-of-the-art biotechnology and AI, SCS helps unlock several open questions about cellular organization that are key to our ability to understand, and ultimately treat, disease,” added Hao Chen, a Lane Postdoctoral Fellow in CBD.
    SCS is available free on GitHub and was supported by grants from the National Institutes of Health and the National Science Foundation. More

  • Robotic exoskeletons and neurorehabilitation for acquired brain injury: Determining the potential for recovery of overground walking

    A team of New Jersey researchers reviewed the evidence for the impact of robotic exoskeleton devices on recovery of ambulation among individuals with acquired brain injury, laying out the systematic framework for evaluating such devices that is needed for rigorous research studies. The open-access article, “Lower extremity robotic exoskeleton devices for overground ambulation recovery in acquired brain injury — A review,” was published May 25, 2023 in Frontiers in Neurorobotics.
    The authors are Kiran Karunakaran, PhD, Sai Pamula, Caitlyn Bach, Soha Saleh, PhD, and Karen Nolan, PhD, from the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation, and Eliana Legelen, MA, from Montclair State University.
    Acquired brain injury was defined as cerebral palsy, traumatic brain injury or stroke. The review focused on 57 published studies of overground training in wearable robotic exoskeleton devices. The manuscript provides a comprehensive review of clinical and pre-clinical research on the therapeutic effects of various devices.
    “Despite rapid progress in robotic exoskeleton design and technology, the efficacy of such devices is not fully understood. This review lays the foundation to understand the knowledge gaps that currently exist in robotic rehabilitation research,” said lead and corresponding author Dr. Karunakaran, citing the many variables among the devices and the clinical characteristics of acquired brain injury. “The control mechanisms vary widely among these devices, for example, which has a major influence on how training is delivered,” she added. “There’s also wide variability in other factors that affect the trajectory of recovery, including the timing, duration, dosing, and intensity of training in these devices.”
    Developing a framework for future research requires a comprehensive approach based on diagnosis, stage of recovery, and domain, according to co-author Karen J. Nolan, PhD, associate director of the Center for Mobility and Rehabilitation Engineering Research and director of the Acquired Brain Injury Mobility Laboratory. “Through this approach, we will find the optimal ways to use lower extremity robotic exoskeletons to improve mobility in individuals with acquired brain injury,” said Dr. Nolan.
    “It’s important to note that our review is unique in presenting both the downstream (functional, biomechanical, physiological) and upstream (cortical) evaluations after rehabilitation using various robotic devices for different types of acquired brain injury,” Dr. Karunakaran noted. “Each device needs to be evaluated by domain in each population and throughout all stages of recovery. This is the necessary scope for determining the response to treatment.” More