More stories

  • Record-breaking simulations of large-scale structure formation in the Universe

    Current simulations of cosmic structure formation do not accurately reproduce the properties of ghost-like particles called neutrinos that have been present in the Universe since its beginning. But now, a research team from Japan has devised an approach that solves this problem.
    In a study published this month in SC ’21: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, researchers at the University of Tsukuba, Kyoto University, and the University of Tokyo report simulations that precisely follow the dynamics of such cosmic relic neutrinos. This study was selected as a finalist for the 2021 ACM Gordon Bell Prize, which recognizes outstanding achievement in high-performance computing.
    Neutrinos are much lighter than all other known particles, but their exact mass remains a mystery. Measuring this mass could help scientists develop theories that go beyond the standard model of particle physics and test explanations for how the Universe evolved. One promising way to pin down this mass is to study the impact of cosmic relic neutrinos on large-scale structure formation using simulations and compare the results with observations. But these simulations need to be extremely accurate.
    “Standard simulations use techniques known as particle-based N-body methods, which have two main drawbacks when it comes to massive neutrinos,” explains Dr. Naoki Yoshida, Principal Investigator at the Kavli Institute for the Physics and Mathematics of the Universe, the University of Tokyo. “First, the simulation results are susceptible to random fluctuations called shot noise. And second, these particle-based methods cannot accurately reproduce collisionless damping — a key process in which fast-moving neutrinos suppress the growth of structure in the Universe.”
    To avoid these issues, the researchers followed the dynamics of the massive neutrinos by directly solving a central equation in plasma physics known as the Vlasov equation. Unlike previous studies, they solved this equation in full six-dimensional phase space, which means that all six dimensions associated with space and velocity were considered. The team coupled this Vlasov simulation with a particle-based N-body simulation of cold dark matter — the main component of matter in the Universe. They performed their hybrid simulations on the supercomputer Fugaku at the RIKEN Center for Computational Science.
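    To make the method concrete, the equation being solved is the Vlasov (collisionless Boltzmann) equation for the neutrino phase-space distribution f(x, v, t), coupled to gravity through the Poisson equation. The form below is the standard textbook statement rather than the paper's exact notation:

        \frac{\partial f}{\partial t}
          + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
          - \nabla_{\mathbf{x}}\phi\cdot\nabla_{\mathbf{v}} f = 0,
        \qquad
        \nabla_{\mathbf{x}}^{2}\phi = 4\pi G\,\rho_{\mathrm{tot}}(\mathbf{x}, t),

    where the potential \phi is sourced by the total matter density \rho_{\mathrm{tot}}, combining the neutrinos with the cold dark matter tracked by the N-body part of the hybrid scheme.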
    “Our largest simulation self-consistently combines the Vlasov simulation on 400 trillion grids with 330 billion-body calculations, and it accurately reproduces the complex dynamics of cosmic neutrinos,” says lead author of the study, Professor Koji Yoshikawa. “Moreover, the time-to-solution for our simulation is substantially shorter than that for the largest N-body simulations, and the performance scales extremely well with up to 147,456 nodes (7 million CPU cores) on Fugaku.”
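    For scale, assuming Fugaku's standard configuration of 48 compute cores per A64FX node, the quoted core count follows directly: 147,456 nodes × 48 cores per node = 7,077,888, or roughly 7 million CPU cores.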
    In addition to helping determine the neutrino mass, the researchers suggest that their scheme could be used to study, for example, phenomena involving electrostatic and magnetized plasma and self-gravitating systems.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • Thriving in non-equilibrium

    Equilibrium may be hard to achieve in our lives, but it is the standard state of nature.
    From the perspective of chemistry and physics, equilibrium is a bit dull — at least to Cheng-Chien Chen, assistant professor of physics at the University of Alabama at Birmingham. His research seeks to engineer new states of matter and to control those states by probing the possibilities of non-equilibrium.
    “One of our main goals is to see if, when we drive the electron system to non-equilibrium, we can stabilize new phases that are absent in equilibrium, but that can become dominant at non-equilibrium,” Chen said. “This is one of the holy grails in non-equilibrium studies.”
    Recently, with support from the National Science Foundation (NSF), Chen has been studying the effects of pump probe spectroscopy, which uses ultrashort laser pulses to excite (pump) the electrons in a sample, generating a non-equilibrium state, while a weaker beam (probe) monitors the pump-induced changes.
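    In theoretical treatments, the pump is typically modelled as a short oscillating field with a Gaussian envelope that enters the electrons' Hamiltonian through a time-dependent vector potential. The parametrisation below is a generic textbook form, not necessarily the exact one used in Chen's simulations:

        A(t) = A_{0}\, e^{-(t - t_{0})^{2} / 2\sigma^{2}} \cos\bigl(\omega_{\mathrm{pump}} (t - t_{0})\bigr),

    where the pulse width \sigma and frequency \omega_{\mathrm{pump}} control how strongly, and at what energy, the electrons are driven out of equilibrium; the weaker probe pulse then arrives at a variable delay after the pump.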
    Chen’s theoretical work suggests it is possible to generate superconductivity at higher temperatures than previously possible using this method, opening the door to revolutionary new electronics and energy devices.
    Writing in Physical Review Letters in 2018, Chen and collaborator Yao Wang from Clemson University showed that it was possible to generate d-wave superconductivity and make it the dominant phase using pump-probe systems.

  • Deep learning dreams up new protein structures

    Just as convincing images of cats can be created using artificial intelligence, new proteins can now be made using similar tools. In a report in Nature, researchers describe the development of a neural network that “hallucinates” proteins with new, stable structures.
    Proteins, which are string-like molecules found in every cell, spontaneously fold into intricate three-dimensional shapes. These folded shapes are key to nearly every biological process, including cellular development, DNA repair, and metabolism. But the complexity of protein shapes makes them difficult to study. Biochemists often use computers to predict how protein strings, or sequences, might fold. In recent years, deep learning has revolutionized the accuracy of this work.
    “For this project, we made up completely random protein sequences and introduced mutations into them until our neural network predicted that they would fold into stable structures,” said co-lead author Ivan Anishchenko, an acting instructor of biochemistry at the University of Washington School of Medicine and a researcher in David Baker’s laboratory at the UW Medicine Institute for Protein Design.
    “At no point did we guide the software toward a particular outcome,” Anishchenko said. “These new proteins are just what a computer dreams up.”
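    In outline, the hallucination procedure described above is a Monte Carlo search over sequence space. The sketch below is a schematic reconstruction in Python: the structure predictor, the confidence score and the acceptance rule are placeholders standing in for the Baker lab's trained network and metrics, not their actual code.

        import random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def hallucinate(predict_structure, confidence, length=100, n_steps=20000):
            """Monte Carlo 'hallucination': mutate a random sequence until the
            structure predictor becomes confident that it folds into a
            well-defined shape.

            predict_structure: callable mapping a sequence to a predicted
                               structure (placeholder for a neural network).
            confidence:        callable scoring how 'folded'/certain that
                               prediction is (placeholder metric).
            """
            seq = [random.choice(AMINO_ACIDS) for _ in range(length)]  # random start
            best = confidence(predict_structure("".join(seq)))
            for _ in range(n_steps):
                pos = random.randrange(length)                 # pick a site
                old = seq[pos]
                seq[pos] = random.choice(AMINO_ACIDS)          # propose a mutation
                new = confidence(predict_structure("".join(seq)))
                if new >= best:                                # keep improvements
                    best = new
                else:
                    seq[pos] = old                             # otherwise revert
            return "".join(seq), best

    A real implementation would typically use a softer acceptance rule (such as simulated annealing) and derive the confidence score from the network's predicted inter-residue distance distributions.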
    In the future, the team believes it should be possible to steer the artificial intelligence so that it generates new proteins with useful features.
    “We’d like to use deep learning to design proteins with function, including protein-based drugs, enzymes, you name it,” said co-lead author Sam Pellock, a postdoctoral scholar in the Baker lab.
    The research team, which included scientists from UW Medicine, Harvard University, and Rensselaer Polytechnic Institute (RPI), generated two thousand new protein sequences that were predicted to fold. Over 100 of these were produced in the laboratory and studied. Detailed analysis on three such proteins confirmed that the shapes predicted by the computer were indeed realized in the lab.
    “Our NMR [nuclear magnetic resonance] studies, along with X-ray crystal structures determined by the University of Washington team, demonstrate the remarkable accuracy of protein designs created by the hallucination approach,” said co-author Theresa Ramelot, a senior research scientist at RPI in Troy, New York.
    Gaetano Montelione, a co-author and professor of chemistry and chemical biology at RPI, noted: “The hallucination approach builds on observations we made together with the Baker lab revealing that protein structure prediction with deep learning can be quite accurate even for a single protein sequence with no natural relatives. The potential to hallucinate brand new proteins that bind particular biomolecules or form desired enzymatic active sites is very exciting.”
    “This approach greatly simplifies protein design,” said senior author David Baker, a professor of biochemistry at the UW School of Medicine who received a 2021 Breakthrough Prize in Life Sciences. “Before, to create a new protein with a particular shape, people first carefully studied related structures in nature to come up with a set of rules that were then applied in the design process. New sets of rules were needed for each new type of fold. Here, by using a deep-learning network that already captures general principles of protein structure, we eliminate the need for fold-specific rules and open up the possibility of focusing on just the functional parts of a protein directly.”
    “Exploring how to best use this strategy for specific applications is now an active area of research, and this is where I expect the next breakthroughs,” said Baker.
    Funding was provided by the National Science Foundation, National Institutes of Health, Department of Energy, Open Philanthropy, Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, Audacious Project, Washington Research Foundation, Novo Nordisk Foundation, and Howard Hughes Medical Institute. The authors also acknowledge computing resources from the University of Washington and Rosetta@Home volunteers.

  • Machine learning helps mathematicians make new connections

    For the first time, mathematicians have partnered with artificial intelligence to suggest and prove new mathematical theorems. The work was done in a collaboration between the University of Oxford, the University of Sydney in Australia and DeepMind, Google’s artificial intelligence sister company.
    While computers have long been used to generate data for mathematicians, the task of identifying interesting patterns has relied mainly on the intuition of the mathematicians themselves. However, it is now possible to generate more data than any mathematician can reasonably expect to study in a lifetime, which is where machine learning comes in.
    A paper, published today in Nature, describes how DeepMind was set the task of discerning patterns and connections in the fields of knot theory and representation theory. To the surprise of the mathematicians, new connections were suggested; the mathematicians were then able to examine these connections and prove the conjecture suggested by the AI. These results suggest that machine learning can complement mathematical research, guiding intuition about a problem.
    Using the patterns identified by machine learning, mathematicians from the University of Oxford discovered a surprising connection between algebraic and geometric invariants of knots, establishing a completely new theorem in the field. The University of Sydney, meanwhile, used the connections made by the AI to bring them close to proving an old conjecture about Kazhdan-Lusztig polynomials, which has been unsolved for 40 years.
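    As a toy illustration of the workflow rather than DeepMind's code, one can train a small model to predict an algebraic invariant of a knot from its geometric invariants and then ask which inputs the model relies on; the data and the hidden relationship below are synthetic stand-ins.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical dataset: each row is a knot, each column a geometric
        # invariant; y is an algebraic invariant (e.g. the signature).
        # Both are synthetic here, generated from a planted relationship.
        X = rng.normal(size=(5000, 6))
        y = 2.0 * X[:, 1] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=5000)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        model.fit(X_train, y_train)
        print("test R^2:", round(model.score(X_test, y_test), 3))

        # Attribution: which geometric invariants drive the prediction?
        # A large importance flags a candidate relationship for a
        # mathematician to sharpen into a precise conjecture and a proof.
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        print("feature importances:", result.importances_mean.round(3))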
    Professor Andras Juhasz, of the Mathematical Institute at the University of Oxford and co-author on the paper, said: ‘Pure mathematicians work by formulating conjectures and proving these, resulting in theorems. But where do the conjectures come from?
    ‘We have demonstrated that, when guided by mathematical intuition, machine learning provides a powerful framework that can uncover interesting and provable conjectures in areas where a large amount of data is available, or where the objects are too large to study with classical methods.’
    Professor Marc Lackenby, of the Mathematical Institute at the University of Oxford and co-author, said: ‘It has been fascinating to use machine learning to discover new and unexpected connections between different areas of mathematics. I believe that the work that we have done in Oxford and in Sydney in collaboration with DeepMind demonstrates that machine learning can be a genuinely useful tool in mathematical research.’
    Professor Geordie Williamson, Professor of Mathematics at the University of Sydney and director of the Sydney Mathematical Research Institute and co-author, said: ‘AI is an extraordinary tool. This work is one of the first times it has demonstrated its usefulness for pure mathematicians, like me.
    ‘Intuition can take us a long way, but AI can help us find connections the human mind might not always easily spot.’
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.

  • Shrinking qubits for quantum computing with atom-thin materials

    For quantum computers to surpass their classical counterparts in speed and capacity, their qubits — superconducting circuits that can exist in a continuous superposition of two binary states — need to be on the same wavelength. Achieving this, however, has come at the cost of size. Whereas the transistors used in classical computers have been shrunk down to nanometer scales, superconducting qubits these days are still measured in millimeters — one millimeter is one million nanometers.
    Combine qubits into larger and larger circuit chips and you end up with, relatively speaking, a big physical footprint, which means quantum computers take up a lot of space. These are not yet devices we can carry in our backpacks or wear on our wrists.
    To shrink qubits down while maintaining their performance, the field needs a new way to build the capacitors that store the energy that “powers” the qubits. In collaboration with Raytheon BBN Technologies, Wang Fong-Jen Professor James Hone’s lab at Columbia Engineering recently demonstrated a superconducting qubit capacitor built with 2D materials that’s a fraction of previous sizes.
    Previously, to build qubit chips, engineers have had to use planar capacitors, which set the necessary charged plates side by side. Stacking those plates would save space, but the metals used in conventional parallel capacitors interfere with qubit information storage. In the current work, published on November 18 in Nano Letters, Hone’s PhD students Abhinandan Antony and Anjaly Rajendra sandwiched an insulating layer of boron nitride between two charged plates of superconducting niobium diselenide. These layers are each just a single atom thick and held together by van der Waals forces, the weak interaction between electrons. The team then combined their capacitors with aluminum circuits to create a chip containing two qubits with an area of 109 square micrometers and just 35 nanometers thick — that’s 1,000 times smaller than chips produced under conventional approaches.
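    The space saving follows from elementary electrostatics. For a parallel-plate capacitor,

        C = \frac{\varepsilon_{0}\,\varepsilon_{r}\, A}{d},

    where A is the plate area, d the plate separation and \varepsilon_{r} the relative permittivity of the dielectric. Rearranged, A = C d / (\varepsilon_{0}\varepsilon_{r}): for a fixed target capacitance, the required plate area shrinks in proportion to the dielectric thickness, so an atomically thin boron nitride layer permits the same capacitance in a far smaller footprint than a planar, side-by-side geometry. (This is illustrative reasoning; the paper's actual design values are not reproduced here.)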
    When they cooled their qubit chip down to just above absolute zero, the qubits found the same wavelength. The team also observed key characteristics that showed that the two qubits were becoming entangled and acting as a single unit, a phenomenon known as quantum coherence; that would mean the qubit’s quantum state could be manipulated and read out via electrical pulses, said Hone. The coherence time was short — a little over 1 microsecond, compared to about 10 microseconds for a conventionally built coplanar capacitor, but this is only a first step in exploring the use of 2D materials in this area, he said.
    Separate work published on arXiv in August from researchers at MIT also took advantage of niobium diselenide and boron nitride to build parallel-plate capacitors for qubits. The devices studied by the MIT team showed even longer coherence times — up to 25 microseconds — indicating that there is still room to further improve performance.
    From here, Hone and his team will continue refining their fabrication techniques and test other types of 2D materials to increase coherence times, which reflect how long the qubit is storing information. New device designs should be able to shrink things down even further, said Hone, by combining the elements into a single van der Waals stack or by deploying 2D materials for other parts of the circuit.
    “We now know that 2D materials may hold the key to making quantum computers possible,” Hone said. “It is still very early days, but findings like these will spur researchers worldwide to consider novel applications of 2D materials. We hope to see a lot more work in this direction going forward.”
    Story Source:
    Materials provided by Columbia University School of Engineering and Applied Science. Original written by Ellen Neff. Note: Content may be edited for style and length.

  • Time crystal in a quantum computer

    There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today’s early prototypes are still capable of remarkable feats.
    Take, for example, the creation of a new phase of matter called a “time crystal.” Just as a crystal’s structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy — like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment — one that has now finally come to fruition.
    In research published Nov. 30 in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google’s Sycamore quantum computing hardware.
    “The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right,” said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. “Instead of computation, we’re putting the computer to work as a new experimental platform to realize and detect new phases of matter.”
    For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)
    “Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter,” said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. “While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics.”
    What a time crystal is and isn’t

  • Grouping of immune cell receptors could help decode patients' personal history of infection

    Grouping of pathogen-recognising proteins on immune T cells may be key to identifying if someone has had an infection in the past, suggests a study published today in eLife.
    Tests measuring antibodies against a pathogen are often used to detect signs of a previous infection, but it is more difficult for researchers to measure the strength and targets of a person’s T-cell response to infection or vaccination. The findings hint at a potential new approach. This patient information could one day be useful for detecting infections, guiding treatments or supporting the research and development of new therapies and vaccines.
    Immune T cells help the body find and destroy harmful viruses and bacteria. Proteins on the outer surface of T cells — called receptors — allow the T cells to recognise and eliminate human cells that have been infected by specific pathogens.
    “While the abundance of specific receptors could provide clues about past infection, the enormous molecular diversity of T-cell receptors makes it incredibly challenging to assess which receptors recognise which pathogens. Not only is each pathogen recognised by a distinct set of receptors, but each individual develops a personalised set of receptors for each pathogen,” explains first author Koshlan Mayer-Blackwell, Senior Data Scientist at Fred Hutchinson Cancer Research Center, Seattle, Washington, US. “We developed a new computational approach that allows us to find similarities among pathogen-specific T-cell receptors across individuals. Ultimately, we hope this will help develop signatures of past infection despite the enormous diversity of T-cell receptors.”
    The team tested their approach using data from the immuneRACE study of T-cell receptors in patients with COVID-19. Using their new software for rapidly comparing large sets of receptors, they were able to generate 1,831 T-cell receptor groupings based on similarities in the receptors’ amino acid sequences that suggest they have similar functions.
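    The grouping idea can be illustrated with a deliberately simplified Python sketch: cluster receptors whose CDR3 amino-acid sequences lie within a small edit distance of one another. This is a stand-in for the study's distance measure and software, not tcrdist3's actual API, and the example sequences are invented.

        def edit_distance(a, b):
            """Levenshtein distance between two amino-acid strings."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                   # deletion
                                   cur[j - 1] + 1,                # insertion
                                   prev[j - 1] + (ca != cb)))     # substitution
                prev = cur
            return prev[-1]

        def group_receptors(cdr3s, max_dist=2):
            """Greedy single-linkage grouping of CDR3 sequences by edit distance."""
            groups = []
            for seq in cdr3s:
                for group in groups:
                    if any(edit_distance(seq, member) <= max_dist for member in group):
                        group.append(seq)
                        break
                else:
                    groups.append([seq])
            return groups

        # Invented example sequences, not real patient data
        cdr3s = ["CASSLGQAYEQYF", "CASSLGQSYEQYF", "CASSPDRGGYEQYF", "CASSLGQAYEQFF"]
        print(group_receptors(cdr3s))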
    In an independent group of COVID-19 patients, the team found that the common molecular patterns associated with receptor groupings were more robustly detected than individual receptor sequences that were previously hypothesised to recognise parts of the SARS-CoV-2 virus, demonstrating a major improvement on existing approaches.
    “Our study introduces and validates a flexible approach to identify sets of similar T-cell receptors, which we hope will be broadly useful for scientists studying T-cell immunity,” Mayer-Blackwell says. “Grouping receptors together in this way makes it possible to compare responses to infection or vaccination across a diverse population.”
    To help other researchers use this approach to develop T-cell biomarkers with their own data, the team has created free customisable software called tcrdist3.
    “Our software provides flexible tools that will enable scientists to analyse and integrate the rapidly growing libraries of T-cell receptor sequencing data that are needed to identify the features of pathogen-specific T-cell receptors,” concludes senior author Andrew Fiore-Gartland, Co-Director of the Vaccines and Immunology Statistical Center at the Fred Hutchinson Cancer Research Center. “We hope it will open new opportunities not only to identify patients’ immunological memories of past infections and vaccinations but also to predict their future immune responses.”
    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

  • Constraining quantum measurement

    The quantum world and our everyday world are very different places. In a publication that appeared as the “Editor’s Suggestion” in Physical Review A this week, UvA physicists Jasper van Wezel and Lotte Mertens and their colleagues investigate how the act of measuring a quantum particle transforms it into an everyday object.
    Quantum mechanics is the theory that describes the tiniest objects in the world around us, ranging from the constituents of single atoms to small dust particles. This microscopic realm behaves remarkably differently from our everyday experience — despite the fact that all objects in our human-scale world are made of quantum particles themselves. This leads to intriguing physical questions: why are the quantum world and the macroscopic world so different, where is the dividing line between them, and what exactly happens there?
    Measurement problem
    One particular area where the distinction between quantum and classical becomes essential is when we use an everyday object to measure a quantum system. The division between the quantum and everyday worlds then amounts to asking how ‘big’ the measurement device should be to be able to show quantum properties using a display in our everyday world. Finding out the details of measurement, such as how many quantum particles it takes to create a measurement device, is called the quantum measurement problem.
    As experiments probing the world of quantum mechanics become ever more advanced and involve ever larger quantum objects, the invisible line where pure quantum behaviour crosses over into classical measurement outcomes is rapidly being approached. In the new article, Van Wezel, Mertens and their colleagues take stock of current models that attempt to solve the measurement problem, and particularly those that do so by proposing slight modifications to the one equation that rules all quantum behaviour: Schrödinger’s equation.
    Born’s rule
    The researchers show that such amendments can in principle lead to consistent proposals for solving the measurement problem. However, it turns out to be difficult to create models that satisfy Born’s rule, which tells us how to use Schrödinger’s equation for predicting measurement outcomes. The researchers show that only models with sufficient mathematical complexity (in technical terms: models that are non-linear and non-unitary) can give rise to Born’s rule and therefore have a chance of solving the measurement problem and teaching us about the elusive crossover between quantum physics and the everyday world.
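    For reference, the two ingredients named here can be written compactly in standard quantum-mechanics notation (not the paper's specific formulation): the Schrödinger equation governs how a quantum state evolves, and Born's rule turns that state into measurement probabilities,

        i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle,
        \qquad
        P(a) = \bigl|\langle a | \psi \rangle\bigr|^{2},

    where P(a) is the probability of obtaining outcome a when measuring an observable with eigenstate |a\rangle. The modified-dynamics models discussed above add small non-linear, non-unitary terms to the first equation while still needing to reproduce the second.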
    Story Source:
    Materials provided by Universiteit van Amsterdam. Note: Content may be edited for style and length.