More stories

  • Scientists develop fermionic quantum processor

    Researchers from Austria and the USA have designed a new type of quantum computer that uses fermionic atoms to simulate complex physical systems. The processor uses programmable neutral atom arrays and is capable of simulating fermionic models in a hardware-efficient manner using fermionic gates. The team led by Peter Zoller demonstrated how the new quantum processor can efficiently simulate fermionic models from quantum chemistry and particle physics.
    Fermionic atoms are atoms that obey the Pauli exclusion principle, which means that no two of them can occupy the same quantum state simultaneously. This makes them ideal for simulating systems where fermionic statistics play a crucial role, such as molecules, superconductors and quark-gluon plasmas. “In qubit-based quantum computers extra resources need to be dedicated to simulate these properties, usually in the form of additional qubits or longer quantum circuits,” explains Daniel Gonzalez Cuadra from the research group led by Peter Zoller at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences (ÖAW) and the Department of Theoretical Physics at the University of Innsbruck, Austria.
    Quantum information in fermionic particles
    A fermionic quantum processor is composed of a fermionic register and a set of fermionic quantum gates. “The register consists of a set of fermionic modes, which can be either empty or occupied by a single fermion, and these two states form the local unit of quantum information,” says Daniel Gonzalez Cuadra. “The state of the system we want to simulate, such as a molecule composed of many electrons, will be in general a superposition of many occupation patterns, which can be directly encoded into this register.” This information is then processed using a fermionic quantum circuit, designed to simulate, for example, the time evolution of a molecule. Any such circuit can be decomposed into a sequence of just two types of fermionic gates, a tunneling gate and an interaction gate.
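    The article describes this architecture only in words. As a loose numerical illustration (not the Innsbruck team’s processor or code; NumPy and SciPy are assumed choices, since the article names no software), the sketch below builds a tiny fermionic register whose basis states are occupation patterns and applies a tunneling gate between two modes, including the sign factor that encodes fermionic antisymmetry:
    ```python
    import numpy as np
    from scipy.linalg import expm
    from itertools import product

    # Minimal illustrative sketch only: a register of fermionic modes whose basis
    # states are occupation patterns, and a tunneling gate built from the hopping
    # term between two modes.

    def basis(n_modes):
        """All occupation patterns (tuples of 0/1) of the fermionic register."""
        return list(product((0, 1), repeat=n_modes))

    def hopping_matrix(n_modes, i, j, J=1.0):
        """Matrix of -J (a_i^dag a_j + a_j^dag a_i) in the occupation basis,
        with the Jordan-Wigner sign that encodes fermionic antisymmetry."""
        states = basis(n_modes)
        index = {s: k for k, s in enumerate(states)}
        H = np.zeros((len(states), len(states)))
        for s in states:
            for src, dst in ((j, i), (i, j)):            # the two hopping directions
                if s[src] == 1 and s[dst] == 0:
                    t = list(s)
                    t[src], t[dst] = 0, 1
                    lo, hi = sorted((src, dst))
                    sign = (-1) ** sum(s[lo + 1:hi])     # fermions crossed by the hop
                    H[index[tuple(t)], index[s]] += -J * sign
        return H

    def tunneling_gate(n_modes, i, j, theta):
        """Unitary exp(-i * theta * H_hop) acting on the whole register."""
        return expm(-1j * theta * hopping_matrix(n_modes, i, j))

    # One fermion shared between modes 0 and 1 of a three-mode register:
    states = basis(3)
    psi = np.zeros(len(states), dtype=complex)
    psi[states.index((1, 0, 0))] = 1.0                   # occupation pattern |100>
    psi = tunneling_gate(3, 0, 1, np.pi / 4) @ psi       # coherent hop between modes
    for s, amp in zip(states, psi):
        if abs(amp) > 1e-12:
            print(s, np.round(amp, 3))
    ```
    After the gate, the fermion is in a superposition of the two occupation patterns, which is exactly the kind of state the register is meant to hold natively.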
    The researchers propose to trap fermionic atoms in an array of optical tweezers, which are highly focused laser beams that can hold and move atoms with high precision. “The required set of fermionic quantum gates can be natively implemented in this platform: tunneling gates can be obtained by controlling the tunneling of an atom between two optical tweezers, while interaction gates are implemented by first exciting the atoms to Rydberg states, carrying a strong dipole moment,” says Gonzalez Cuadra.
    Quantum chemistry to particle physics
    Fermionic quantum processing is particularly useful to simulate the properties of systems composed of many interacting fermions, such as electrons in a molecule or in a material, or quarks inside a proton, and has therefore applications in many fields, ranging from quantum chemistry to particle physics. The researchers demonstrate how their fermionic quantum processor can efficiently simulate fermionic models from quantum chemistry and lattice gauge theory, which are two important fields of physics that are hard to solve with classical computers. “By using fermions to encode and process quantum information, some properties of the simulated system are intrinsically guaranteed at the hardware level, which would require additional resources in a standard qubit-based quantum computer,” says Daniel Gonzalez Cuadra. “I am very excited about the future of the field, and I would like to keep contributing to it by identifying the most promising applications for fermionic quantum processing, and by designing tailored algorithms that can run in near-term devices.”
    The current results were published in the Proceedings of the National Academy of Sciences (PNAS). The research was financially supported by the Austrian Science Fund (FWF), the European Union and the Simons Foundation, among others. More

  • How artificial intelligence gave a paralyzed woman her voice back

    Researchers at UC San Francisco and UC Berkeley have developed a brain-computer interface (BCI) that has enabled a woman with severe paralysis from a brainstem stroke to speak through a digital avatar.
    It is the first time that either speech or facial expressions have been synthesized from brain signals. The system can also decode these signals into text at nearly 80 words per minute, a vast improvement over commercially available technology.
    Edward Chang, MD, chair of neurological surgery at UCSF, who has worked on the technology for more than a decade, hopes this latest research breakthrough, appearing Aug. 23, 2023, in Nature, will lead to an FDA-approved system that enables speech from brain signals in the near future.
    “Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” said Chang, who is a member of the UCSF Weill Institute for Neuroscience and the Jeanne Robertson Distinguished Professor in Psychiatry. “These advancements bring us much closer to making this a real solution for patients.”
    Chang’s team previously demonstrated it was possible to decode brain signals into text in a man who had also experienced a brainstem stroke many years earlier. The current study demonstrates something more ambitious: decoding brain signals into the richness of speech, along with the movements that animate a person’s face during conversation.
    Chang implanted a paper-thin rectangle of 253 electrodes onto the surface of the woman’s brain over areas his team has discovered are critical for speech. The electrodes intercepted the brain signals that, if not for the stroke, would have gone to muscles in her tongue, jaw and larynx, as well as her face. A cable, plugged into a port fixed to her head, connected the electrodes to a bank of computers.
    For weeks, the participant worked with the team to train the system’s artificial intelligence algorithms to recognize her unique brain signals for speech. This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again, until the computer recognized the brain activity patterns associated with the sounds. More
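    Purely as an illustrative aside (this is not the UCSF system, whose implementation is not described here), that training phase can be viewed as supervised learning: repeated attempts at known phrases yield labeled recordings of neural activity, from which a model learns to map activity patterns to vocabulary items. A toy sketch with simulated data and a generic classifier:
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy illustration only: simulated "neural activity" vectors and a standard classifier.
    # The 253 channels mirror the electrode count in the article; the three phrases are
    # hypothetical stand-ins for items from the 1,024-word vocabulary.

    rng = np.random.default_rng(42)
    n_channels = 253
    phrases = ["hello", "thank you", "how are you"]

    # Each phrase evokes a characteristic (noisy) activity pattern across channels.
    templates = {p: rng.normal(size=n_channels) for p in phrases}
    X = np.stack([templates[p] + 0.5 * rng.normal(size=n_channels)
                  for p in phrases for _ in range(30)])
    y = [p for p in phrases for _ in range(30)]

    decoder = LogisticRegression(max_iter=1000).fit(X, y)

    # Decode a new, unseen attempt at "thank you".
    trial = templates["thank you"] + 0.5 * rng.normal(size=n_channels)
    print("Decoded phrase:", decoder.predict(trial.reshape(1, -1))[0])
    ```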

  • Adding immunity to human kidney-on-a-chip advances cancer drug testing

    A growing repertoire of cell and molecule-based immunotherapies is offering patients with indomitable cancers new hope by mobilizing their immune systems against tumor cells. An emerging class of such immunotherapeutics, known as T cell bispecific antibodies (TCBs), is of growing importance, with several TCBs approved by the U.S. Food and Drug Administration (FDA) for the treatment of leukemias, lymphomas, and myelomas. These antibody drugs label tumor cells with one of their ends and attract immune cells with their other end to coerce them into tumor cell killing.
    One major challenge in the development of TCBs and other immunotherapy drugs is that the antigens targeted by TCBs can be present not only on tumor cells, but also on healthy cells in the body. This can lead to “on-target, off-tumor” cell killing and unwanted injury of vital organs, such as the kidney, liver, and others, that can put patients participating in clinical trials at risk. Currently, there are no human in vitro models of the kidney that sufficiently recapitulate the 3D architecture, cell diversity, and functionality of the organ needed to assess on-target, off-tumor effects at a preclinical stage.
    Now, a new cross-disciplinary, cross-organizational study created an immune-infiltrated kidney tissue model for investigating on-target, off-tumor effects of TCBs and potentially other immunotherapy drugs. The team of bioengineers and immune-oncologists who performed the study at the Wyss Institute for Biologically Inspired Engineering at Harvard University, the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Harvard Medical School (HMS), and the Roche Innovation Centers in Switzerland and Germany developed an immune-infiltrated human kidney organoid-on-chip model composed of tiny kidney tissue segments that contain vasculature and forming nephrons, which can be infiltrated by circulating immune cells. They used this model to understand the specific toxicity of a pre-clinical TCB tool compound that targets the well-characterized tumor antigen Wilms’ Tumor 1 (WT-1) in certain tumors. Importantly, WT-1 is also expressed at much lower levels in the kidney, making the kidney an important organ in which to study the compound’s potential on-target, off-tumor effects. Their findings are published in PNAS.
    “Together with our collaborators at Roche, we extended our vascularized kidney organoid-on-chip model to include an immune cell population that contains cytotoxic T cells with the potential to kill not only tumor cells, but also other cells that present target antigens,” said Wyss Core Faculty member Jennifer Lewis, Sc.D., the study’s senior author. “Our pre-clinical human in vitro model provides important insights regarding which cells are targeted by a given TCB and what, if any, off-target damage arises.” Lewis is also the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and co-leader of the Wyss Institute’s 3D Organ Engineering Initiative.
    Incorporating immunity into a kidney organoid-on-chip
    In 2019, Lewis’ group, together with that of Joseph Bonventre, M.D., Ph.D., at Brigham and Women’s Hospital, along with co-author Ryuji Morizane, M.D., Ph.D., found that exposing kidney organoids created from human pluripotent stem cells to the constant flow of fluids during their differentiation enhanced their on-chip vascularization and maturation of glomeruli and tubular compartments, relative to static controls. The researchers’ observations were enabled by a 3D printed millifluidic chip, in which kidney organoids are subjected to nutrient and differentiation factor-laden media flowed at controlled rates during their differentiation. The chip device allows researchers to directly observe the tissue in real time using confocal microscopy through a transparent window.
    “Given that this in vitro model represents most of the cell types in the kidney and incorporates the immune system, it could support the assessment of on- and off-target effects from TCBs as well as complex cellular interactions,” said Kimberly Homan, Ph.D., a former postdoctoral researcher in Lewis’ lab, first author of the initial work, and a co-corresponding author of this new study. Homan has since left Lewis’ lab to join Genentech as Director of the Complex in vitro Systems lab, where she continued to provide expertise to the project collaborators. More

  • Planning algorithm enables high-performance flight

    A tailsitter is a fixed-wing aircraft that takes off and lands vertically (it sits on its tail on the landing pad), and then tilts horizontally for forward flight. Faster and more efficient than quadcopter drones, these versatile aircraft can fly over a large area like an airplane but also hover like a helicopter, making them well-suited for tasks like search-and-rescue or parcel delivery.
    MIT researchers have developed new algorithms for trajectory planning and control of a tailsitter that take advantage of the maneuverability and versatility of this type of aircraft. Their algorithms can execute challenging maneuvers, like sideways or upside-down flight, and are so computationally efficient that they can plan complex trajectories in real time.
    Typically, other methods either simplify the system dynamics in their trajectory planning algorithm or use two different models, one for helicopter mode and one for airplane mode. Neither approach can plan and execute trajectories that are as aggressive as those demonstrated by the MIT team.
    “We wanted to really exploit all the power the system has. These aircraft, even if they are very small, are quite powerful and capable of exciting acrobatic maneuvers. With our approach, using one model, we can cover the entire flight envelope — all the conditions in which the vehicle can fly,” says Ezra Tal, a research scientist in the Laboratory for Information and Decision Systems (LIDS) and lead author of a new paper describing the work.
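    The article does not spell out the algorithm itself. As a rough, generic illustration of real-time trajectory generation (not the MIT team’s method), the sketch below fits a quintic polynomial per axis so that position, velocity, and acceleration match desired values at both ends of a segment, a common building block in drone trajectory planning:
    ```python
    import numpy as np

    # Hypothetical sketch: one quintic polynomial segment per axis, with boundary
    # conditions on position, velocity, and acceleration at both endpoints.

    def quintic_segment(p0, v0, a0, p1, v1, a1, T):
        """Coefficients c[0..5] of p(t) = sum c_k t^k meeting the boundary conditions."""
        A = np.array([
            [1, 0, 0,    0,       0,        0      ],   # p(0)
            [0, 1, 0,    0,       0,        0      ],   # v(0)
            [0, 0, 2,    0,       0,        0      ],   # a(0)
            [1, T, T**2, T**3,    T**4,     T**5   ],   # p(T)
            [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4 ],   # v(T)
            [0, 0, 2,    6*T,     12*T**2,  20*T**3],   # a(T)
        ], dtype=float)
        b = np.array([p0, v0, a0, p1, v1, a1], dtype=float)
        return np.linalg.solve(A, b)

    # Example: move one axis from rest at 0 m to rest at 5 m in 2 seconds.
    c = quintic_segment(p0=0, v0=0, a0=0, p1=5, v1=0, a1=0, T=2.0)
    for t in np.linspace(0, 2.0, 5):
        pos = sum(ck * t**k for k, ck in enumerate(c))
        print(f"t={t:.1f}s  position={pos:.2f} m")
    ```
    Because each segment is a small linear solve, many such segments can be strung together and re-planned on the fly, which is the sense in which this style of planner runs in real time.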
    Tal and his collaborators used their trajectory generation and control algorithms to demonstrate tailsitters that perform complex maneuvers like loops, rolls, and climbing turns, and they even showcased a drone race where three tailsitters sped through aerial gates and performed several synchronized, acrobatic maneuvers.
    These algorithms could potentially enable tailsitters to autonomously perform complex moves in dynamic environments, such as flying into a collapsed building and avoiding obstacles while on a rapid search for survivors.
    Joining Tal on the paper are Gilhyun Ryou, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and senior author Sertac Karaman, associate professor of aeronautics and astronautics and director of LIDS. The research appears in IEEE Transactions on Robotics.
    Tackling tailsitter trajectories More

  • AI can predict certain forms of esophageal and stomach cancer

    In the United States and other Western countries, rates of two forms of esophageal and stomach cancer have risen dramatically over the last five decades: esophageal adenocarcinoma, or EAC, and gastric cardia adenocarcinoma, or GCA. Both are highly fatal.
    However, Joel Rubenstein, M.D., M.S., a research scientist at the Lieutenant Colonel Charles S. Kettles Veterans Affairs Center for Clinical Management Research and professor of internal medicine at Michigan Medicine, says that preventative measures can be a saving grace.
    “Screening can identify pre-cancerous changes in patients, known as Barrett’s esophagus, which is sometimes diagnosed in individuals who have long-term gastroesophageal reflux disease, or GERD,” he said.
    “When early detection occurs, patients can take additional steps to help prevent cancer.”
    While current guidelines already consider screening in high-risk patients, Rubenstein notes that many providers are still unfamiliar with this recommendation.
    “Many individuals who develop these types of cancer never had screening to begin with,” he said.
    “But a new automated tool embedded in the electronic health record holds the potential to bridge the gap between provider awareness and patients who are at an increased risk of developing esophageal adenocarcinoma and gastric cardia adenocarcinoma.”
    Rubenstein and a team of researchers used a type of artificial intelligence to examine data regarding EAC and GCA rates in over 10 million U.S. veterans. More

  • Topology’s role in decoding energy of amorphous systems

    How is a donut similar to a coffee cup? This question often serves as an illustrative example to explain the concept of topology. Topology is a field of mathematics that examines the properties of objects that remain consistent even when they are stretched or deformed — provided they are not torn or stitched together. For instance, both a donut and a coffee cup have a single hole. This means, theoretically, if either were pliable enough, it could be reshaped into the other. This branch of mathematics provides a more flexible way to describe shapes in data, such as the connections between individuals in a social network or the atomic coordinates of materials. This understanding has led to the development of a novel technique: topological data analysis.
    In a study published this month in The Journal of Chemical Physics, researchers from SANKEN (The Institute of Scientific and Industrial Research) at Osaka University and two other universities have used topological data analysis and machine learning to formulate a new method to predict the properties of amorphous materials.
    A standout technique in the realm of topological data analysis is persistent homology. This method offers insights into topological features, specifically the “holes” and “cavities” within data. When applied to material structures, it allows us to identify and quantify their crucial structural characteristics.
    Now, these researchers have employed a method that combines persistent homology and machine learning to predict the properties of amorphous materials. Amorphous materials, which include substances like glass, consist of disordered particles that lack repeating patterns.
    A crucial aspect of using machine-learning models to predict the physical properties of amorphous substances lies in finding an appropriate method to convert atomic coordinates into a list of vectors. Merely utilizing coordinates as a list of vectors is inadequate because the energies of amorphous systems remain unchanged with rotation, translation, and permutation of the same type of atoms. Consequently, the representation of atomic configurations should embody these symmetry constraints. Topological methods are inherently well-suited for such challenges. “Using conventional methods to extract information about the connections between numerous atoms characterizing amorphous structures was challenging. However, the task has become more straightforward with the application of persistent homology,” explains Emi Minamitani, the lead author of the study.
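    As a rough sketch of this pipeline (not the study’s actual implementation; the GUDHI and scikit-learn libraries are assumed choices here, and the energies are random placeholders), atomic coordinates can be turned into a persistence diagram, summarized as a small invariant feature vector, and fed to a simple regression model:
    ```python
    import numpy as np
    import gudhi                              # assumed: GUDHI library for persistent homology
    from sklearn.linear_model import Ridge    # assumed: a basic machine-learning model

    def persistence_features(coords, max_edge=5.0):
        """Invariant summary of a point cloud: statistics of its H0/H1 persistence bars."""
        rips = gudhi.RipsComplex(points=coords, max_edge_length=max_edge)
        st = rips.create_simplex_tree(max_dimension=2)
        st.persistence()                      # compute birth/death pairs of all features
        feats = []
        for dim in (0, 1):                    # connected components and rings ("holes")
            bars = st.persistence_intervals_in_dimension(dim)
            lengths = [death - birth for birth, death in bars if np.isfinite(death)]
            feats += [len(lengths),
                      float(np.sum(lengths)) if lengths else 0.0,
                      float(np.max(lengths)) if lengths else 0.0]
        return np.array(feats)

    # Toy usage: random point clouds standing in for amorphous carbon configurations,
    # with placeholder "energies" (illustrative only, not physical data).
    rng = np.random.default_rng(0)
    structures = [rng.uniform(0, 10, size=(40, 3)) for _ in range(20)]
    X = np.stack([persistence_features(s) for s in structures])
    y = rng.normal(size=len(structures))
    model = Ridge(alpha=1.0).fit(X, y)
    print("Predicted energy of first structure:", model.predict(X[:1])[0])
    ```
    The key point is that the feature vector depends only on inter-atomic distances, so it is automatically unchanged by rotation, translation, and permutation of identical atoms.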
    The researchers discovered that by integrating persistent homology with basic machine-learning models, they could accurately predict the energies of disordered structures composed of carbon atoms at varying densities. This strategy demands significantly less computational time compared to quantum mechanics-based simulations of these amorphous materials.
    The techniques showcased in this study hold potential for facilitating more efficient and rapid calculations of material properties in other disordered systems, such as amorphous glasses or metal alloys. More

  • Deciphering the molecular dynamics of complex proteins

    Proteins consist of amino acids, which are linked to form long amino acid chains as specified by our genetic material. In our cells, these chains are not simply rolled up like strings of pearls, but fold into complex, three-dimensional structures. How a protein is folded decisively influences its function: It determines, for example, which other molecules a protein can interact with in the cell. Knowledge of the three-dimensional structure of proteins is therefore of great interest to the life sciences and plays a role in drug development, among other things.
    “Unfortunately, elucidating the structure of a protein is anything but trivial, and focusing on a single state does not always provide meaningful information, especially if the protein is highly flexible in terms of its structure,” says Tobias Schneider, a member of Michael Kovermann’s lab team in the Department of Chemistry at the University of Konstanz. The reason: complex proteins often fold into several compact subunits, called domains, which in turn may be connected by flexible linkers. The more flexibly connected subunits are present, the more different three-dimensional structures a protein can theoretically adopt. “This means that a protein in solution, for example inside our cells, has several equal states and constantly switches between them,” Schneider explains.
    Tracking down the structural ensemble
    A simple snapshot is not sufficient to fully describe the structural features of such multi-domain proteins, as it would capture only one of many states at a time. To get a detailed picture of the possible structures of such proteins, a smart combination of different methods is needed. In an article published in the journal Structure, Konstanz biophysicists led by Michael Kovermann and Christine Peter (also Department of Chemistry) present a corresponding approach using complementary methods.
    “Through NMR spectroscopy, for example, we get information about the dynamic properties of such proteins. Complex computer simulations, on the other hand, provide a good overview of the range of possible folds,” explains Kovermann. “So far, no general approach that comprehensively maps the dynamic and structural properties of multi-domain proteins had existed.” The researchers from Konstanz therefore devised a workflow that combines NMR spectroscopy and computer simulations, allowing them to obtain information on both properties with high temporal and spatial resolution.
    Proof of feasibility included
    The researchers also provided evidence that the method works: They examined various ubiquitin dimers. These consist of two units of the protein ubiquitin that are linked by a flexible bond, just like the situation in cells. It is thus a prime example of a multi-domain protein for which different structural models have been suggested so far and which is of great scientific interest.
    The researchers were able to show that the ubiquitin dimers they studied exhibit a high structural variability and that this can be described in detail using the developed combination of methods. The results also explain the different structural models that currently exist of ubiquitin dimers. “We are convinced that our approach — combining complementary methods — will work not only for ubiquitin dimers but also for other multi-domain proteins,” Schneider says. “Our study opens new avenues to better understand the high structural diversity of these complex proteins that plays a crucial role in their biological functions.” More

  • Sharing chemical knowledge between human and machine

    Structural formulae show how chemical compounds are constructed, i.e., which atoms they consist of, how these are arranged spatially and how they are connected. Chemists can deduce from a structural formula, among other things, which molecules can react with each other and which cannot, how complex compounds can be synthesised or which natural substances could have a therapeutic effect because they fit together with target molecules in cells.
    Developed in the 19th century, the representation of molecules as structural formulae has stood the test of time and is still used in every chemistry textbook. But what makes the chemical world intuitively comprehensible for humans is just a collection of black and white pixels for software. “To make the information from structural formulae usable in databases that can be searched automatically, they have to be translated into a machine-readable code,” explains Christoph Steinbeck, Professor for Analytical Chemistry, Cheminformatics and Chemometrics at the University of Jena.
    An image becomes a code
    And that is precisely what can be done using the Artificial Intelligence tool “DECIMER,” developed by the team led by Prof. Steinbeck and his colleague Prof. Achim Zielesny from the Westphalian University of Applied Sciences. DECIMER stands for “Deep Learning for Chemical Image Recognition.” It is an open-source platform that is freely available to everyone on the Internet and can be used in a standard web browser. Scientific articles containing chemical structural formulae can be uploaded there simply by dragging and dropping, and the AI tool will immediately get to work.
    “First, the entire document is searched for images,” explains Steinbeck. The algorithm then identifies the image information contained and classifies it according to whether it is a chemical structural formula or some other image. Finally, the structural formulae recognised are translated into the chemical structure code or displayed in a structure editor, so that they can be further processed. “This step is the core of the project and the real achievement,” adds Steinbeck.
    In this way, the chemical structural formula for the caffeine molecule becomes the machine-readable structure code CN1C=NC2=C1C(=O)N(C(=O)N2C)C. This can then be uploaded directly into a database and linked to further information on the molecule.
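    To illustrate what can happen downstream of that step (this is not part of DECIMER itself; the RDKit toolkit is used here purely as an assumed example), a cheminformatics library can validate the extracted structure code, canonicalize it and derive identifiers for linking the record to other database entries:
    ```python
    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from rdkit.Chem.rdMolDescriptors import CalcMolFormula

    # The caffeine structure code from the article, as extracted from the image.
    smiles = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"

    mol = Chem.MolFromSmiles(smiles)          # returns None if the code is not valid
    if mol is None:
        raise ValueError("The extracted structure code could not be parsed")

    print("Canonical SMILES:", Chem.MolToSmiles(mol))
    print("Molecular formula:", CalcMolFormula(mol))         # C8H10N4O2 for caffeine
    print("Molecular weight:", round(Descriptors.MolWt(mol), 2))
    print("InChIKey:", Chem.MolToInchiKey(mol))   # stable identifier for database lookup
    ```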
    To develop DECIMER, the researchers used modern AI methods that have only recently become established and are also used, for example, in the large language models (such as ChatGPT) that are currently the subject of much discussion. To train its AI tool, the team generated structural formulae from the existing machine-readable databases and used them as training data — some 450 million structural formulae to date. In addition to researchers, companies are also already using the AI tool, for example to transfer structural formulae from patent specifications into databases. More