More stories

  • Computer-assisted biology: Decoding noisy data to predict cell growth

    Scientists from The University of Tokyo Institute of Industrial Science have designed a machine learning algorithm to predict the size of an individual cell as it grows and divides. By using an artificial neural network that does not impose the assumptions commonly employed in biology, the computer was able to make more complex and accurate forecasts than previously possible. This work may help advance the field of quantitative biology as well as improve the industrial production of medications or fermented products.
    As in all of the natural sciences, biology has developed mathematical models to help fit data and make predictions about the future. However, because of the inherent complexities of living systems, many of these equations rely on simplifying assumptions that do not always reflect the actual underlying biological processes. Now, researchers at The University of Tokyo Institute of Industrial Science have implemented a machine learning algorithm that can use the measured size of single cells over time to predict their future size. Because the computer automatically recognizes patterns in the data, it is not constrained like conventional methods.
    “In biology, simple models are often used based on their capacity to reproduce the measured data,” first author Atsushi Kamimura says. “However, the models may fail to capture what is really going on because of human preconceptions.”
    The data for this latest study were collected from either an Escherichia coli bacterium or a Schizosaccharomyces pombe yeast cell held in a microfluidic channel at various temperatures. The plot of size over time looked like a “sawtooth” as exponential growth was interrupted by division events. Human biologists usually use a “sizer” model, based on the absolute size of the cell, or an “adder” model, based on the increase in size since birth, to predict when divisions will occur (a toy sketch of the adder rule appears below). The computer algorithm found support for the “adder” principle, but as part of a complex web of biochemical reactions and signaling.
    “Our deep-learning neural network can effectively separate the history-dependent deterministic factors from the noise in given data,” senior author Tetsuya Kobayashi says.
    This method can be extended to many other aspects of biology besides predicting cell size. In the future, life science may be driven more by objective artificial intelligence than human models. This may lead to more efficient control of microorganisms we use to ferment products and produce drugs.
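    As a rough illustration of the “adder” rule mentioned above (the cell divides once it has added a fixed amount of size since birth), here is a minimal simulation sketch; the growth rate, added-size threshold, noise level, and time step are illustrative assumptions, not values from the study.
    ```python
    # Toy "sawtooth" single-cell growth under an adder division rule.
    # All parameter values are illustrative assumptions, not data from the study.
    import numpy as np

    rng = np.random.default_rng(1)
    growth_rate, added_size, dt = 0.03, 1.0, 0.1      # assumed growth rate, division threshold, time step
    size, birth_size = 1.0, 1.0
    trajectory = []

    for _ in range(2000):
        size *= np.exp(growth_rate * dt)              # exponential growth between divisions
        if size - birth_size >= added_size + rng.normal(0.0, 0.05):
            size /= 2.0                               # division event: the drop that makes the sawtooth
            birth_size = size
        trajectory.append(size)

    # `trajectory` now traces the sawtooth curve a predictor would be trained on.
    ```
    A “sizer” rule would instead trigger division when the size itself crosses a fixed threshold, regardless of the size at birth.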
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo.

  • Physicists take big step in race to quantum computing

    A team of physicists from the Harvard-MIT Center for Ultracold Atoms and other universities has developed a special type of quantum computer known as a programmable quantum simulator capable of operating with 256 quantum bits, or “qubits.”
    The system marks a major step toward building large-scale quantum machines that could shed light on a host of complex quantum processes and eventually help bring about real-world breakthroughs in materials science, communication technologies, finance, and many other fields, overcoming research hurdles that are beyond the capabilities of even the fastest supercomputers today. Qubits are the fundamental building blocks on which quantum computers run and the source of their massive processing power.
    “This moves the field into a new domain where no one has ever been to thus far,” said Mikhail Lukin, the George Vasmer Leverett Professor of Physics, co-director of the Harvard Quantum Initiative, and one of the senior authors of the study published today in the journal Nature. “We are entering a completely new part of the quantum world.”
    According to Sepehr Ebadi, a physics student in the Graduate School of Arts and Sciences and the study’s lead author, it is the combination of the system’s unprecedented size and programmability that puts it at the cutting edge of the race for a quantum computer, which harnesses the mysterious properties of matter at extremely small scales to greatly advance processing power. Under the right circumstances, the increase in qubits means the system can store and process exponentially more information than the classical bits on which standard computers run.
    “The number of quantum states that are possible with only 256 qubits exceeds the number of atoms in the solar system,” Ebadi said, explaining the system’s vast size.
    Already, the simulator has allowed researchers to observe several exotic quantum states of matter that had never before been realized experimentally, and to perform a quantum phase transition study so precise that it serves as a textbook example of how magnetism works at the quantum level.
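    For a rough sense of scale behind Ebadi’s comparison, a back-of-envelope check (the solar-system atom count below is a standard order-of-magnitude estimate, not a figure from the study):
    ```python
    # 256 qubits span 2**256 basis states; the solar system holds on the order
    # of 1e57 atoms (dominated by the Sun). Both are order-of-magnitude estimates.
    print(f"{2**256:.3e}")   # ~1.158e+77, vastly more than ~1e57
    ```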

  • First study of nickelate's magnetism finds a strong kinship with cuprate superconductors

    Ever since the 1986 discovery that copper oxide materials, or cuprates, could carry electrical current with no loss at unexpectedly high temperatures, scientists have been looking for other unconventional superconductors that could operate even closer to room temperature. This would allow for a host of everyday applications that could transform society by making energy transmission more efficient, for instance.
    Nickel oxides, or nickelates, seemed like a promising candidate. They’re based on nickel, which sits next to copper on the periodic table, and the two elements have some common characteristics. It was not unreasonable to think that superconductivity would be one of them.
    But it took years of trying before scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University finally created the first nickelate that showed clear signs of superconductivity.
    Now SLAC, Stanford and Diamond Light Source researchers have made the first measurements of magnetic excitations that spread through the new material like ripples in a pond. The results reveal both important similarities and subtle differences between nickelates and cuprates. The scientists published their results in Science today.
    “This is exciting, because it gives us a new angle for exploring how unconventional superconductors work, which is still an open question after 30-plus years of research,” said Haiyu Lu, a Stanford graduate student who did the bulk of the research with Stanford postdoctoral researcher Matteo Rossi and SLAC staff scientist Wei-Sheng Lee.

  • The pressure is off and high temperature superconductivity remains

    In a critical next step toward room-temperature superconductivity at ambient pressure, Paul Chu, Founding Director and Chief Scientist at the Texas Center for Superconductivity at the University of Houston (TcSUH), Liangzi Deng, research assistant professor of physics at TcSUH, and their colleagues conceived and developed a pressure-quench (PQ) technique that retains the pressure-enhanced and/or -induced high transition temperature (Tc) phase even after the removal of the applied pressure that generates this phase.
    Pengcheng Dai, professor of physics and astronomy at Rice University, and Yanming Ma, Dean of the College of Physics at Jilin University, together with their groups, helped demonstrate the feasibility of the pressure-quench technique in a model high-temperature superconductor, iron selenide (FeSe). The results were published in the journal Proceedings of the National Academy of Sciences USA.
    “We derived the pressure-quench method from the formation of the human-made diamond by Francis Bundy from graphite in 1955 and other metastable compounds,” said Chu. “Graphite turns into a diamond when subjected to high pressure at high temperatures. Subsequent rapid pressure quench, or removal of pressure, leaves the diamond phase intact without pressure.”
    Chu and his team applied this same concept to a superconducting material with promising results.
    “Iron selenide is considered a simple high-temperature superconductor with a transition temperature (Tc) for transitioning to a superconductive state at 9 Kelvin (K) at ambient pressure,” said Chu.
    “When we applied pressure, the Tc increased to ~ 40 K, more than quadrupling that at ambient, enabling us to unambiguously distinguish the superconducting PQ phase from the original un-PQ phase. We then tried to retain the high-pressure enhanced superconducting phase after removing pressure using the PQ method, and it turns out we can.”
    Dr. Chu and colleagues’ achievement brings scientists a step closer to realizing the dream of room-temperature superconductivity at ambient pressure, recently reported in hydrides only under extremely high pressure.

  • Handwriting beats typing and watching videos for learning to read

    Though writing by hand is increasingly being eclipsed by the ease of computers, a new study finds we shouldn’t be so quick to throw away the pencils and paper: handwriting helps people learn certain skills surprisingly faster and significantly better than learning the same material through typing or watching videos.
    “The question out there for parents and educators is why should our kids spend any time doing handwriting,” says senior author Brenda Rapp, a Johns Hopkins University professor of cognitive science. “Obviously, you’re going to be a better hand-writer if you practice it. But since people are handwriting less, then maybe who cares? The real question is: Are there other benefits to handwriting that have to do with reading and spelling and understanding? We find there most definitely are.”
    The work appears in the journal Psychological Science.
    Rapp and lead author Robert Wiley, a former Johns Hopkins University Ph.D. student who is now a professor at the University of North Carolina, Greensboro, conducted an experiment in which 42 people were taught the Arabic alphabet, split into three groups of learners: writers, typers and video watchers.
    Everyone learned the letters one at a time by watching videos of them being written along with hearing names and sounds. After being introduced to each letter, the three groups would attempt to learn what they just saw and heard in different ways. The video group got an on-screen flash of a letter and had to say if it was the same letter they’d just seen. The typers would have to find the letter on the keyboard. The writers had to copy the letter with pen and paper.
    At the end, after as many as six sessions, everyone could recognize the letters and made few mistakes when tested. But the writing group reached this level of proficiency faster than the other groups — a few of them in just two sessions.

  • Simulations of turbulence's smallest structures

    When you pour cream into a cup of coffee, the viscous liquid seems to lazily disperse throughout the cup. Take a mixing spoon or straw to the cup, though, and the cream and coffee seem to quickly and seamlessly combine into a lighter color and, at least for some, a more enjoyable beverage.
    The science behind this relatively simple anecdote actually speaks to a larger truth about complex fluid dynamics and underpins many of the advancements made in transportation, power generation, and other technologies since the industrial era — the seemingly random chaotic motions known as turbulence play a vital role in chemical and industrial processes that rely on effective mixing of different fluids.
    While scientists have long studied turbulent fluid flows, their inherently chaotic nature has prevented researchers from developing an exhaustive list of reliable “rules,” or universal models, for accurately describing and predicting turbulence. This challenge has left turbulence as one of the last major unsolved “grand challenges” in physics.
    In recent years, high-performance computing (HPC) resources have played an increasingly important role in gaining insight into how turbulence influences fluids under a variety of circumstances. Recently, researchers from RWTH Aachen University and the CORIA (CNRS UMR 6614) research facility in France have been using HPC resources at the Jülich Supercomputing Centre (JSC), one of the three HPC centres comprising the Gauss Centre for Supercomputing (GCS), to run high-resolution direct numerical simulations (DNS) of turbulent setups, including jet flames. While extremely computationally expensive, DNS of turbulence allows researchers to develop better models that run on more modest computing resources and help academic or industrial researchers account for turbulence’s effects on a given fluid flow.
    “The goal of our research is to ultimately improve these models, specifically in the context of combustion and mixing applications,” said Dr. Michael Gauding, CORIA scientist and researcher on the project. The team’s recent work was named the distinguished paper from the “Turbulent Flames” colloquium, held as part of the 38th International Symposium on Combustion.
    Starts and stops
    Despite turbulence’s seemingly random, chaotic character, researchers have identified some important properties that are universal, or at least very common, under specific conditions. Researchers studying how fuel and air mix in a combustion reaction, for instance, rely on turbulence to ensure a high mixing efficiency. Much of that important turbulent motion may stem from what happens in a thin area near the edge of the flame, where its chaotic motions collide with the smoother-flowing fluids around it. This area, the turbulent-non-turbulent interface (TNTI), has big implications for understanding turbulent mixing.
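    To give a sense of the computational expense mentioned above, here is a rule-of-thumb estimate; the Re^(9/4) scaling is the standard Kolmogorov-based textbook estimate for homogeneous turbulence, not a figure from this project.
    ```python
    # Rule-of-thumb DNS cost: resolving all scales of homogeneous turbulence takes
    # on the order of Re**(9/4) grid points (textbook estimate, not project data).
    for Re in (1_000, 10_000, 100_000):
        print(f"Re = {Re:>7,}: ~{Re ** 2.25:.1e} grid points")
    ```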

  • Researchers record brainwaves to measure 'cybersickness'

    If a virtual world has ever left you feeling nauseous or disoriented, you’re familiar with cybersickness, and you’re hardly alone. The intensity of virtual reality (VR), whether that’s standing on the edge of a waterfall in Yosemite or engaging in tank combat with your friends, creates a stomach-churning challenge for 30% to 80% of users.
    In a first-of-its kind study, researchers at the University of Maryland recorded VR users’ brain activity using electroencephalography (EEG) to better understand and work toward solutions to prevent cybersickness. The research was conducted by Eric Krokos, who received his Ph.D. in computer science in 2018, and Amitabh Varshney, a professor of computer science and dean of UMD’s College of Computer, Mathematical, and Natural Sciences.
    Their study, “Quantifying VR cybersickness using EEG,” was recently published in the journal Virtual Reality.
    The term cybersickness derives from motion sickness, but instead of physical movement, it’s the perception of movement in a virtual environment that triggers physical symptoms such as nausea and disorientation. While there are several theories about why it occurs, the lack of a systematic, quantified way of studying cybersickness has hampered progress that could help make VR accessible to a broader population.
    Krokos and Varshney are among the first to use EEG — which records brain activity through sensors on the scalp — to measure and quantify cybersickness for VR users. They were able to establish a correlation between the recorded brain activity and self-reported symptoms of their participants. The work provides a new benchmark — helping cognitive psychologists, game developers and physicians as they seek to learn more about cybersickness and how to alleviate it.
    “Establishing a strong correlation between cybersickness and EEG-measured brain activity is the first step toward interactively characterizing and mitigating cybersickness, and improving the VR experience for all,” Varshney said.
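    As a bare-bones sketch of the kind of correlation analysis described above, the snippet below relates a hypothetical per-session EEG feature to hypothetical self-reported sickness ratings; both arrays are made-up placeholders, not data from the study.
    ```python
    # Correlate a (hypothetical) EEG-derived feature with (hypothetical)
    # self-reported cybersickness ratings; placeholder numbers, not study data.
    import numpy as np

    eeg_feature = np.array([0.8, 1.1, 1.5, 0.9, 1.7, 2.0])   # e.g., band power per session (made up)
    sickness    = np.array([1.0, 2.0, 3.0, 1.5, 4.0, 4.5])   # self-reported rating per session (made up)
    r = np.corrcoef(eeg_feature, sickness)[0, 1]              # Pearson correlation coefficient
    print(f"r = {r:.2f}")
    ```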

  • Machine learning tool sorts the nuances of quantum data

    An interdisciplinary team of Cornell and Harvard University researchers developed a machine learning tool to parse quantum matter and make crucial distinctions in the data, an approach that will help scientists unravel the most confounding phenomena in the subatomic realm.
    The Cornell-led project’s paper, “Correlator Convolutional Neural Networks as an Interpretable Architecture for Image-like Quantum Matter Data,” was published June 23 in Nature Communications. The lead author is doctoral student Cole Miles.
    The Cornell team was led by Eun-Ah Kim, professor of physics in the College of Arts and Sciences, who partnered with Kilian Weinberger, associate professor of computing and information science in the Cornell Ann S. Bowers College of Computing and Information Science and director of the TRIPODS Center for Data Science for Improved Decision Making.
    The collaboration with the Harvard team, led by physics professor Markus Greiner, is part of the National Science Foundation’s 10 Big Ideas initiative, “Harnessing the Data Revolution.” Their project, “Collaborative Research: Understanding Subatomic-Scale Quantum Matter Data Using Machine Learning Tools,” seeks to address fundamental questions at the frontiers of science and engineering by pairing data scientists with researchers who specialize in traditional areas of physics, chemistry and engineering.
    The project’s central aim is to find ways to extract new information about quantum systems from snapshots of image-like data. To that end, they are developing machine learning tools that can identify relationships among microscopic properties in the data that otherwise would be impossible to determine at that scale.
    Convolutional neural networks, a kind of machine learning often used to analyze visual imagery, scan an image with a filter to find characteristic features in the data irrespective of where they occur, a step called “convolution.” The convolution output is then passed through nonlinear functions that allow the network to learn all sorts of correlations among the features.
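    To make the convolution-plus-nonlinearity step above concrete, here is a minimal NumPy sketch of that generic building block; it illustrates an ordinary convolution followed by a ReLU, not the authors’ correlator convolutional neural network.
    ```python
    # Minimal convolution + nonlinearity on a toy "image-like" snapshot.
    # Generic CNN building block, not the authors' correlator CNN architecture.
    import numpy as np

    def convolve2d(image, kernel):
        """Slide a filter over the image and record its response at each position."""
        kh, kw = kernel.shape
        out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    rng = np.random.default_rng(0)
    snapshot = rng.integers(0, 2, size=(16, 16)).astype(float)   # toy binary occupation snapshot
    filt = np.array([[1.0, -1.0], [-1.0, 1.0]])                  # a 2x2 feature detector
    feature_map = np.maximum(convolve2d(snapshot, filt), 0.0)    # convolution, then ReLU nonlinearity
    print(feature_map.shape)                                     # (15, 15)
    ```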