More stories

  • ‘Dolomite Problem’: 200-year-old geology mystery resolved

    For 200 years, scientists have failed to grow a common mineral in the laboratory under the conditions believed to have formed it naturally. Now, a team of researchers from the University of Michigan and Hokkaido University in Sapporo, Japan, has finally pulled it off, thanks to a new theory developed from atomic simulations.
    Their success resolves a long-standing geology mystery called the “Dolomite Problem.” Dolomite — a key mineral in the Dolomite Mountains in Italy, Niagara Falls, the White Cliffs of Dover and Utah’s Hoodoos — is very abundant in rocks older than 100 million years, but nearly absent in younger formations.
    “If we understand how dolomite grows in nature, we might learn new strategies to promote the crystal growth of modern technological materials,” said Wenhao Sun, the Dow Early Career Professor of Materials Science and Engineering at U-M and the corresponding author of the paper published today in Science.
    The secret to finally growing dolomite in the lab was removing defects in the mineral structure as it grows. When minerals form in water, atoms usually deposit neatly onto an edge of the growing crystal surface. However, the growth edge of dolomite consists of alternating rows of calcium and magnesium. In water, calcium and magnesium will randomly attach to the growing dolomite crystal, often lodging into the wrong spot and creating defects that prevent additional layers of dolomite from forming. This disorder slows dolomite growth to a crawl, meaning it would take 10 million years to make just one layer of ordered dolomite.
    Luckily, these defects aren’t locked in place. Because the disordered atoms are less stable than atoms in the correct position, they are the first to dissolve when the mineral is washed with water. Repeatedly rinsing away these defects — for example, with rain or tidal cycles — allows a dolomite layer to form in only a matter of years. Over geologic time, mountains of dolomite can accumulate.
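    The rinse-and-regrow mechanism can be caricatured in a few lines of code. The toy model below is purely illustrative — the 50% misattachment rate and the rinse interval are invented numbers, not values from the paper — but it shows why periodically dissolving defects rescues growth that would otherwise stall:

```python
import random

random.seed(42)

def grow_layers(steps, rinse_every=None):
    """Toy 1D caricature of the proposed mechanism: each attachment is
    either correct (advances the layer) or defective (blocks further
    growth); a periodic "rinse" preferentially dissolves the less-stable
    misplaced atom. Not the paper's atomistic simulation."""
    layers, blocked = 0, False
    for step in range(1, steps + 1):
        if rinse_every and step % rinse_every == 0:
            blocked = False          # rinse removes the misplaced atom
        if blocked:
            continue                 # a lodged defect stalls growth
        if random.random() < 0.5:    # atom lands in the wrong site
            blocked = True
        else:
            layers += 1
    return layers

print(grow_layers(10_000))                  # without rinsing: stalls almost immediately
print(grow_layers(10_000, rinse_every=10))  # periodic rinsing keeps growth going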
    To simulate dolomite growth accurately, the researchers needed to calculate how strongly or loosely atoms will attach to an existing dolomite surface. The most accurate simulations require the energy of every single interaction between electrons and atoms in the growing crystal. Such exhaustive calculations usually require huge amounts of computing power, but software developed at U-M’s Predictive Integrated Structural Materials Science (PRISMS) Center offered a shortcut.
    “Our software calculates the energy for some atomic arrangements, then extrapolates to predict the energies for other arrangements based on the symmetry of the crystal structure,” said Brian Puchala, one of the software’s lead developers and an associate research scientist in U-M’s Department of Materials Science and Engineering.
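    The flavor of that symmetry trick can be sketched on a toy problem: decorate a small ring of sites with two atom types, and run the "expensive" energy evaluation only once per symmetry-distinct arrangement. This is a schematic analogue under invented settings, not the PRISMS software's actual machinery:

```python
from itertools import product

def ring_symmetries(n):
    """All rotations and reflections of an n-site ring (the dihedral group)."""
    rotations = [[(i + r) % n for i in range(n)] for r in range(n)]
    return rotations + [list(reversed(p)) for p in rotations]

def expensive_energy(config):
    # Stand-in for a costly electronic-structure calculation: count
    # unlike neighbor pairs (think alternating Ca/Mg rows) on the ring.
    n = len(config)
    return sum(config[i] != config[(i + 1) % n] for i in range(n))

n = 6
cache = {}   # canonical (symmetry-reduced) configuration -> energy
calls = 0
for config in product((0, 1), repeat=n):   # all 64 decorations of the ring
    canon = min(tuple(config[i] for i in perm) for perm in ring_symmetries(n))
    if canon not in cache:
        cache[canon] = expensive_energy(canon)
        calls += 1

print(calls)  # → 13 energy evaluations instead of 64
```

By symmetry, only 13 of the 64 arrangements are truly distinct, so four-fifths of the expensive calculations disappear; the savings grow rapidly with system size.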

    That shortcut made it feasible to simulate dolomite growth over geologic timescales.
    “Each atomic step would normally take over 5,000 CPU hours on a supercomputer. Now, we can do the same calculation in 2 milliseconds on a desktop,” said Joonsoo Kim, a doctoral student of materials science and engineering and the study’s first author.
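    For scale, the quoted figures work out to roughly a nine-billion-fold speedup:

```python
# The quoted cost of one atomic step: 5,000 CPU-hours versus 2 ms.
cpu_seconds = 5_000 * 3_600       # 5,000 CPU-hours in seconds
desktop_seconds = 0.002           # 2 milliseconds on a desktop
speedup = cpu_seconds / desktop_seconds
print(f"{speedup:.0e}")           # → 9e+09
```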
    The few areas where dolomite forms today intermittently flood and later dry out, which aligns well with Sun and Kim’s theory. But such evidence alone wasn’t enough to be fully convincing. Enter Yuki Kimura, a professor of materials science from Hokkaido University, and Tomoya Yamazaki, a postdoctoral researcher in Kimura’s lab. They tested the new theory with a quirk of transmission electron microscopes.
    “Electron microscopes usually use electron beams just to image samples,” Kimura said. “However, the beam can also split water, which makes acid that can cause crystals to dissolve. Usually this is bad for imaging, but in this case, dissolution is exactly what we wanted.”
    After placing a tiny dolomite crystal in a solution of calcium and magnesium, Kimura and Yamazaki gently pulsed the electron beam 4,000 times over two hours, dissolving away the defects. After the pulses, dolomite was seen to grow approximately 100 nanometers — around 250,000 times smaller than an inch. Although this was only 300 layers of dolomite, never had more than five layers of dolomite been grown in the lab before.
    The lessons learned from the Dolomite Problem can help engineers manufacture higher-quality materials for semiconductors, solar panels, batteries and other tech.
    “In the past, crystal growers who wanted to make materials without defects would try to grow them really slowly,” Sun said. “Our theory shows that you can grow defect-free materials quickly, if you periodically dissolve the defects away during growth.”
    The research was funded by the American Chemical Society PRF New Doctoral Investigator grant, the U.S. Department of Energy and the Japan Society for the Promotion of Science.

  • AI recognizes the tempo and stages of embryonic development

    Animal embryos go through a series of characteristic developmental stages on their journey from a fertilized egg cell to a functional organism. This biological process is largely genetically controlled and follows a similar pattern across different animal species. Yet, there are differences in the details — between individual species and even among embryos of the same species. For example, the tempo at which individual embryonic stages are passed through can vary. Such variations in embryonic development are considered an important driver of evolution, as they can lead to new characteristics, thus promoting evolutionary adaptations and biodiversity.
    Studying the embryonic development of animals is therefore of great importance to better understand evolutionary mechanisms. But how can differences in embryonic development, such as the timing of developmental stages, be recorded objectively and efficiently? Researchers at the University of Konstanz led by systems biologist Patrick Müller are developing and using methods based on artificial intelligence (AI). In their current article in Nature Methods, they describe a novel approach that automatically captures the tempo of development processes and recognizes characteristic stages without human input — standardized and across species boundaries.
    Every embryo is a little different
    Our current knowledge of animal embryogenesis and individual developmental stages is based on studies in which embryos of different ages were observed under the microscope and described in detail. Thanks to this painstaking manual work, reference books with idealized depictions of individual embryonic stages are available for many animal species today. “However, embryos often do not look exactly the same under the microscope as they do in the schematic drawings. And the transitions between individual stages are not abrupt, but more gradual,” explains Müller. Manually assigning an embryo to the various stages of development is therefore not trivial, even for experts, and involves a degree of subjectivity.
    What makes it even more difficult: Embryonic development does not always follow the expected timetable. “Various factors can influence the timing of embryonic development, such as temperature,” explains Müller. The AI-supported method he and his colleagues developed is a substantial step forward. For a first application example, the researchers trained their Twin Network with more than 3 million images of zebrafish embryos that were developing healthily. They then used the resulting AI model to automatically determine the developmental age of other zebrafish embryos.
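    The idea behind such a twin (Siamese) network can be sketched in a few lines: both images pass through the same embedding function, and the developmental age of a query embryo is read off from its closest match in a staged reference series. In the sketch below a fixed random projection stands in for the trained network and the "images" are synthetic vectors — everything is illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "twin" embedding: both inputs pass through the SAME function.
# A fixed random projection stands in for the trained deep network.
W = rng.normal(size=(64, 16))

def embed(image_vec):
    return image_vec @ W

def predict_stage(query, reference_series):
    # Return the stage whose reference embedding is nearest the query's.
    q = embed(query)
    dists = [np.linalg.norm(q - embed(ref)) for ref in reference_series]
    return int(np.argmin(dists))

# Toy reference series: one synthetic 64-"pixel" image per stage 0..9.
references = [rng.normal(loc=stage, size=64) for stage in range(10)]

# A slightly noisy observation of the stage-4 embryo.
query = references[4] + 0.05 * rng.normal(size=64)
print(predict_stage(query, references))  # → 4
```

Because the embedding is shared, the comparison is the same no matter which two images are presented — which is what lets the approach generalize across embryos, temperatures, and even species once reference images exist.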
    Objective, accurate and generalizable
    The researchers were able to demonstrate that the AI is capable of identifying key steps in zebrafish embryogenesis and detecting individual stages of development fully automatically and without human input. In their study, the researchers used the AI system to compare the developmental stage of embryos and describe the temperature dependence of embryonic development in zebrafish. Although the AI was trained with images of normally developing embryos, it was also able to identify malformations that can occur spontaneously in a certain percentage of embryos or that may be triggered by environmental toxins.
    In a final step, the researchers transferred the method to other animal species, such as sticklebacks or the worm Caenorhabditis elegans, which is evolutionarily quite distant from zebrafish. “Once the necessary image material is available, our Twin Network-based method can be used to analyze the embryonic development of various animal species in terms of time and stages. Even if no comparative data for the animal species exists, our system works in an objective, standardized way,” Müller explains. The method therefore holds great potential for studying the development and evolution of previously uncharacterized animal species.

  • Autonomous excavator constructs a 6-meter-high dry-stone wall

    Until now, dry-stone wall construction has involved vast amounts of manual labour. A multidisciplinary team of ETH Zurich researchers has developed a method for using an autonomous excavator to construct a dry-stone wall that is six metres high and sixty-five metres long. Dry-stone walls are resource-efficient, as they use locally sourced materials, such as concrete slabs, that are low in embodied energy.
    ETH Zurich researchers deployed an autonomous excavator, called HEAP, to build a six-metre-high and sixty-five-metre-long dry-stone wall. The wall is embedded in a digitally planned and autonomously excavated landscape and park.
    The team of researchers included Gramazio Kohler Research, the Robotics Systems Lab, the Vision for Robotics Lab, and the Chair of Landscape Architecture. They developed this innovative design application as part of the National Centre of Competence in Research for Digital Fabrication (NCCR dfab).
    Using sensors, the excavator can autonomously draw a 3D map of the construction site and localise existing building blocks and stones for the wall’s construction. Specifically designed tools and machine vision approaches enable the excavator to scan and grab large stones in its immediate environment. It can also register their approximate weight as well as their centre of gravity.
    An algorithm determines the best position for each stone, and the excavator then conducts the task itself by placing the stones in the desired location. The autonomous machine can place 20 to 30 stones in a single consignment — about as many as one delivery could supply.
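    A drastically simplified version of that placement step: score every stone/position pair and greedily keep the best. The real planner reasons about 3D geometry, weight, and stability; this sketch only levels the top of a toy 2D wall, and the stones and cost function are invented for illustration:

```python
import itertools

# Toy stones: (name, width in columns, height) as "scanned" on site.
stones = [("A", 2, 1.0), ("B", 1, 0.8), ("C", 3, 1.2), ("D", 2, 0.9)]

def best_placement(stones, wall_heights):
    """Greedy choice: try every stone at every column and keep the
    placement that leaves the wall top most level. A stand-in for the
    real multi-objective geometric planner."""
    best = None
    for (name, width, height), col in itertools.product(
            stones, range(len(wall_heights))):
        if col + width > len(wall_heights):
            continue  # stone would overhang the end of the wall
        new = list(wall_heights)
        base = max(new[col:col + width])  # stone rests on the highest point
        for c in range(col, col + width):
            new[c] = base + height
        cost = max(new) - min(new)  # unevenness of the resulting wall top
        if best is None or cost < best[0]:
            best = (cost, name, col)
    return best[1], best[2]

wall = [0.0, 0.0, 0.0, 0.0]
print(best_placement(stones, wall))  # → ('B', 0)
```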

  • Hybrid transistors set stage for integration of biology and microelectronics

    Your phone may have more than 15 billion tiny transistors packed into its microprocessor chips. The transistors are made of silicon, metals like gold and copper, and insulators that together take an electric current and convert it to 1s and 0s to communicate information and store it. The transistor materials are inorganic, basically derived from rock and metal.
    But what if you could make these fundamental electronic components part biological, able to respond directly to the environment and change like living tissue?
    This is what a team at the Tufts University Silklab did when they created transistors in which the insulating material is replaced with biological silk. They reported their findings in Advanced Materials.
    Silk fibroin — the structural protein of silk fibers — can be precisely deposited onto surfaces and easily modified with other chemical and biological molecules to change its properties. Silk functionalized in this manner can pick up and detect a wide range of components from the body or environment.
    The team’s first demonstration of a prototype device used the hybrid transistors to make a highly sensitive and ultrafast breath sensor, detecting changes in humidity. Further modifications of the silk layer could enable devices to detect some cardiovascular and pulmonary diseases, as well as sleep apnea, or pick up carbon dioxide levels and other gases and molecules in the breath that might provide diagnostic information. Used with blood plasma, they could potentially provide information on levels of oxygenation and glucose, circulating antibodies, and more.
    Prior to the development of the hybrid transistors, the Silklab, led by Fiorenzo Omenetto, the Frank C. Doble Professor of engineering, had already used fibroin to make bioactive inks for fabrics that can detect changes in the environment or on the body, sensing tattoos that can be placed under the skin or on the teeth to monitor health and diet, and sensors that can be printed on any surface to detect pathogens such as the virus responsible for COVID-19.
    How It Works
    A transistor is simply an electrical switch, with a metal electrical lead coming in and another going out. In between the leads is the semiconductor material, so-called because it’s not able to conduct electricity unless coaxed.

    Another source of electrical input called a gate is separated from everything else by an insulator. The gate acts as the “key” to turn the transistor on and off. It triggers the on-state when a threshold voltage, which we will call “1”, creates an electric field across the insulator, priming electron movement in the semiconductor and starting the flow of current through the leads.
    In a biological hybrid transistor, a silk layer is used as the insulator, and when it absorbs moisture, it acts like a gel carrying whatever ions (electrically charged molecules) are contained within. The gate triggers the on-state by rearranging ions in the silk gel. By changing the ionic composition in the silk, the transistor operation changes, allowing it to be triggered by any gate value between zero and one.
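    In idealized form (with invented numbers, not measurements from the paper), the contrast between a fixed-threshold digital switch and the silk device's ion-tunable threshold looks like this:

```python
def drain_current(v_gate, v_threshold, gain=1.0):
    """Idealized switch: no current below the threshold, then a
    simplified linear response above it (real device physics is richer)."""
    return max(0.0, gain * (v_gate - v_threshold))

# Conventional digital use: fixed threshold, gate driven to 0 or 1.
print(drain_current(1.0, v_threshold=0.5) > 0)   # on  → True
print(drain_current(0.0, v_threshold=0.5) > 0)   # off → False

# Hybrid silk device, schematically: changing the ions in the silk gel
# shifts the effective threshold, so one intermediate gate value yields
# different, analog output levels.
for threshold in (0.2, 0.4, 0.6):
    print(round(drain_current(0.7, v_threshold=threshold), 2))
```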
    “You could imagine creating circuits that make use of information that is not represented by the discrete binary levels used in digital computing, but can process variable information as in analog computing, with the variation caused by changing what’s inside the silk insulator,” said Omenetto. “This opens up the possibility of introducing biology into computing within modern microprocessors.” Of course, the most powerful known biological computer is the brain, which processes information with variable levels of chemical and electrical signals.
    The technical challenge in creating hybrid biological transistors was to achieve silk processing at the nanoscale, down to 10 nm, or less than 1/10,000th the diameter of a human hair. “Having achieved that, we can now make hybrid transistors with the same fabrication processes that are used for commercial chip manufacturing,” said Beom Joon Kim, postdoctoral researcher at the School of Engineering. “This means you can make a billion of these with capabilities available today.”
    Having billions of transistor nodes with connections reconfigured by biological processes in the silk could lead to microprocessors that act like the neural networks used in AI. “Looking ahead, one could imagine having integrated circuits that train themselves, respond to environmental signals, and record memory directly in the transistors rather than sending it to separate storage,” said Omenetto.
    Devices detecting and responding to more complex biological states, as well as large-scale analog and neuromorphic computing are yet to be created. Omenetto is optimistic for future opportunities. “This opens up a new way of thinking about the interface between electronics and biology, with many important fundamental discoveries and applications ahead.”

  • AI for perovskite solar cells: Key to better manufacturing

    Tandem solar cells based on perovskite semiconductors convert sunlight to electricity more efficiently than conventional silicon solar cells. In order to make this technology ready for the market, further improvements with regard to stability and manufacturing processes are required. Researchers at Karlsruhe Institute of Technology (KIT) and at two Helmholtz platforms — Helmholtz Imaging at the German Cancer Research Center (DKFZ) and Helmholtz AI — have succeeded in finding a way to predict the quality of the perovskite layers and consequently that of the resulting solar cells: Assisted by Machine Learning and new methods in Artificial Intelligence (AI), it is possible to assess their quality from variations in light emission already during the manufacturing process.
    Perovskite tandem solar cells combine a perovskite solar cell with a conventional solar cell, for example based on silicon. These cells are considered a next-generation technology: They boast an efficiency of currently more than 33 percent, which is much higher than that of conventional silicon solar cells. Moreover, they use inexpensive raw materials and are easily manufactured. To achieve this level of efficiency, an extremely thin high-grade perovskite layer, whose thickness is only a fraction of that of a human hair, has to be produced. “Manufacturing these high-grade, multi-crystalline thin layers without any deficiencies or holes using low-cost and scalable methods is one of the biggest challenges,” says tenure-track professor Ulrich W. Paetzold, who conducts research at the Institute of Microstructure Technology and the Light Technology Institute of KIT. Even under apparently perfect lab conditions, there may be unknown factors that cause variations in semiconductor layer quality: “This drawback eventually prevents a quick start of industrial-scale production of these highly efficient solar cells, which are needed so badly for the energy transition,” explains Paetzold.
    AI Finds Hidden Signs of Effective Coating
    To find the factors that influence coating, an interdisciplinary team consisting of the perovskite solar cell experts of KIT has joined forces with specialists for Machine Learning and Explainable Artificial Intelligence (XAI) of Helmholtz Imaging and Helmholtz AI at the DKFZ in Heidelberg. The researchers developed AI methods that train and analyze neural networks using a huge dataset. This dataset includes video recordings that show the photoluminescence of the thin perovskite layers during the manufacturing process. Photoluminescence refers to the radiant emission of the semiconductor layers that have been excited by an external light source. “Since even experts could not see anything particular on the thin layers, the idea was born to train a Machine Learning (Deep Learning) AI system to detect hidden signs of good or poor coating in the millions of data items in the videos,” Lukas Klein and Sebastian Ziegler from Helmholtz Imaging at the DKFZ explain.
    To filter and analyze the widely scattered indications output by the Deep Learning AI system, the researchers subsequently relied on methods of Explainable Artificial Intelligence.
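    As a minimal illustration of that attribution step, one can rank the input features of a (stand-in) quality model by finite-difference sensitivity. Real XAI attribution on a deep video model is far more involved; the feature names and weights here are invented:

```python
import numpy as np

# Stand-in "model": predicts cell efficiency from three summary features
# of a photoluminescence video (hypothetical features and weights).
weights = np.array([0.8, -0.1, 0.02])   # [PL intensity, drift, noise]

def predict_efficiency(features):
    return float(weights @ features)

def sensitivities(features, eps=1e-4):
    """Finite-difference sensitivity of the prediction to each feature:
    a bare-bones stand-in for gradient-based XAI attribution."""
    base = predict_efficiency(features)
    out = []
    for i in range(len(features)):
        bumped = features.copy()
        bumped[i] += eps
        out.append((predict_efficiency(bumped) - base) / eps)
    return np.array(out)

sample = np.array([1.2, 0.3, 0.05])
sens = sensitivities(sample)
print(int(np.argmax(np.abs(sens))))  # → 0: the first feature dominates
```

Ranking features this way points to the process parameters most worth changing — the role XAI played in the study, at much larger scale.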
    “A Blueprint for Follow-Up Research”
    The researchers found out experimentally that the photoluminescence varies during production and that this phenomenon has an influence on the coating quality. “Key to our work was the targeted use of XAI methods to see which factors have to be changed to obtain a high-grade solar cell,” Klein and Ziegler say. This is not the usual approach. In most cases, XAI is only used as a kind of guardrail to avoid mistakes when building AI models. “This is a change of paradigm: Gaining highly relevant insights in materials science in such a systematic way is a totally new experience.” It was indeed the conclusion drawn from the photoluminescence variation that enabled the researchers to take the next step. After the neural networks had been trained accordingly, the AI was able to predict whether each solar cell would achieve a low or a high level of efficiency based on which variation of light emission occurred at what point in the manufacturing process. “These are extremely exciting results,” emphasizes Ulrich W. Paetzold. “Thanks to the combined use of AI, we have a solid clue and know which parameters need to be changed in the first place to improve production. Now we are able to conduct our experiments in a more targeted way and are no longer forced to look blindfolded for the needle in a haystack. This is a blueprint for follow-up research that also applies to many other aspects of energy research and materials science.”

  • First experimental evidence of hopfions in crystals opens up new dimension for future technology

    Hopfions, magnetic spin structures predicted decades ago, have become a hot and challenging research topic in recent years. In a study published in Nature today, the first experimental evidence is presented by a Swedish-German-Chinese research collaboration.
    “Our results are important from both a fundamental and applied point of view, as a new bridge has emerged between experimental physics and abstract mathematical theory, potentially leading to hopfions finding an application in spintronics,” says Philipp Rybakov, researcher at the Department of Physics and Astronomy at Uppsala University, Sweden.
    A deeper understanding of how different components of materials function is important for the development of innovative materials and future technology. The research field of spintronics, for example, which studies the spin of electrons, has opened up promising possibilities to combine the electron’s electricity and magnetism for applications such as new electronics.
    Magnetic skyrmions and hopfions are topological structures — well-localized field configurations that have been a hot research topic over the past decade owing to their unique particle-like properties, which make them promising objects for spintronic applications. Skyrmions are two-dimensional, resembling vortex-like strings, while hopfions are three-dimensional structures within a magnetic sample volume, resembling closed, twisted skyrmion strings shaped, in the simplest case, like a donut.
    Despite extensive research in recent years, direct observation of magnetic hopfions has only been reported in synthetic materials. The current work is the first experimental evidence of such states stabilised in a crystal of B20-type FeGe plates using transmission electron microscopy and holography. The results are highly reproducible and in full agreement with micromagnetic simulations. The researchers provide a unified skyrmion-hopfion homotopy classification and offer an insight into the diversity of topological solitons in three-dimensional chiral magnets.
    The findings open up new fields in experimental physics: identifying other crystals in which hopfions are stable, studying how hopfions interact with electric and spin currents, hopfion dynamics, and more.
    “Since the object is new and many of its interesting properties remain to be discovered, it is difficult to make predictions about specific spintronic applications. However, we can speculate that hopfions may be of greatest interest when upgrading to the third dimension of almost any technology being developed with magnetic skyrmions: racetrack memory, neuromorphic computing, and qubits (basic unit of quantum information). Compared to skyrmions, hopfions have an additional degree of freedom due to three-dimensionality and thus can move in three rather than two dimensions,” explains Rybakov.

  • Medical AI tool gets human thumbs-up

    A new artificial intelligence computer program created by researchers at the University of Florida and NVIDIA can generate doctors’ notes so well that two physicians couldn’t tell the difference, according to an early study from both groups.
    In this proof-of-concept study, physicians reviewed patient notes — some written by actual medical doctors while others were created by the new AI program — and the physicians identified the correct author only 49% of the time.
    A team of 19 researchers from NVIDIA and the University of Florida said their findings, published Nov. 16 in the Nature journal npj Digital Medicine, open the door for AI to support health care workers with groundbreaking efficiencies.
    The researchers trained supercomputers to generate medical records based on a new model, GatorTronGPT, that functions similarly to ChatGPT. The free versions of GatorTron™ models have more than 430,000 downloads from Hugging Face, an open-source AI website. GatorTron™ models are the site’s only models available for clinical research, according to the article’s lead author Yonghui Wu, Ph.D., from the UF College of Medicine’s department of health outcomes and biomedical informatics.
    “In health care, everyone is talking about these models. GatorTron™ and GatorTronGPT are unique AI models that can power many aspects of medical research and health care. Yet, they require massive data and extensive computing power to build. We are grateful to have this supercomputer, HiPerGator, from NVIDIA to explore the potential of AI in health care,” Wu said.
    UF alumnus and NVIDIA co-founder Chris Malachowsky is the namesake of UF’s new Malachowsky Hall for Data Science & Information Technology. A public-private partnership between UF and NVIDIA helped to fund this $150 million structure. In 2021, UF upgraded its HiPerGator supercomputer to elite status with a multimillion-dollar infrastructure package from NVIDIA, the first at a university.
    For this research, Wu and his colleagues developed a large language model that allows computers to mimic natural human language. These models work well with standard writing or conversations, but medical records bring additional hurdles, such as needing to protect patients’ privacy and being highly technical. Digital medical records cannot be Googled or shared on Wikipedia.

    To overcome these obstacles, the researchers stripped UF Health medical records of identifying information from 2 million patients while keeping 82 billion useful medical words. Combining this set with another dataset of 195 billion words, they trained the GatorTronGPT model to analyze the medical data with GPT-3 architecture, or Generative Pre-trained Transformer, a form of neural network architecture. That allowed GatorTronGPT to write clinical text similar to medical doctors’ notes.
    “This GatorTronGPT model is one of the first major products from UF’s initiative to incorporate AI across the university. We are so pleased with how the partnership with NVIDIA is already bearing fruit and setting the stage for the future of medicine,” said Elizabeth Shenkman, Ph.D., a co-author and chair of UF’s department of health outcomes and biomedical informatics.
    Of the many possible uses for a medical GPT, one idea involves replacing the tedium of documentation with notes recorded and transcribed by AI. Wu says that UF has an innovation center that is pursuing a commercial version of the software.
    For an AI tool to reach such parity with human writing, programmers spend weeks programming supercomputers with clinical vocabulary and language usage based on billions upon billions of words. One resource providing the necessary clinical data is the OneFlorida+ Clinical Research Network, coordinated at UF and representing many health care systems.
    “It’s critical to have such massive amounts of UF Health clinical data not only available but ready for AI. Only a supercomputer could handle such a big dataset of 277 billion words. We are excited to implement GatorTron™ and GatorTronGPT models to real-world health care at UF Health,” said Jiang Bian, Ph.D., a co-author and UF Health’s chief data scientist and chief research information officer.
    A cross-section of 14 UF and UF Health faculty contributed to this study, including researchers from Research Computing, Integrated Data Repository Research Services within the Clinical and Translational Science Institute, and from departments and divisions within the College of Medicine, including neurosurgery, endocrinology, diabetes and metabolism, cardiovascular medicine, and health outcomes and biomedical informatics.
    The study was partially funded by grants from the Patient-Centered Outcomes Research Institute, the National Cancer Institute and the National Institute on Aging.
    Here are two paragraphs that reference two patient cases, one written by a human and one created by GatorTronGPT. Can you tell whether the author was machine or human?

  • Computer simulation suggests mutant strains of COVID-19 emerged in response to human behavior

    Using artificial intelligence technology and mathematical modeling, a research group led by Nagoya University has revealed that human behavior, such as lockdowns and isolation measures, affects the evolution of new strains of COVID-19. SARS-CoV-2, the virus that causes COVID-19, evolved to become more transmissible earlier in its infectious cycle. The researchers’ findings, published in Nature Communications, provide new insights into the relationship between how people behave and disease-causing agents.
    As with any other living organism, viruses evolve over time. Those with survival advantages become dominant in the gene pool. Many environmental factors influence this evolution, including human behavior. By isolating sick people and using lockdowns to control outbreaks, humans may alter virus evolution in complicated ways. Predicting how these changes occur is vital to develop adaptive treatments and interventions.
    An important concept in this interaction is viral load, which refers to the amount or concentration of a virus present per ml of a bodily fluid. In SARS-CoV-2, a higher viral load in respiratory secretions increases the risk of transmission through droplets. Viral load relates to the potential to transmit a virus to others. For example, a virus like Ebola has an exceptionally high viral load, whereas the common cold has a low one. However, viruses must perform a careful balancing act, as increasing the maximum viral load can be advantageous, but an excessive viral load may cause individuals to become too sick to transmit the virus to others.
    The research group led by Professor Shingo Iwami at the Nagoya University Graduate School of Science identified trends using mathematical modeling with an artificial intelligence component to investigate previously published clinical data. They found that the SARS-CoV-2 variants that were most successful at spreading had an earlier and higher peak in viral load. However, as the virus evolved from the pre-Alpha to the Delta variants, it had a shorter duration of infection. The researchers also found that the decreased incubation period and the increased proportion of asymptomatic infections recorded as the virus mutated also affected virus evolution.
    The results showed a clear difference. As the virus evolved from the Wuhan strain to the Delta strain, they found a 5-fold increase in the maximum viral load and a 1.5-fold increase in the number of days before the viral load peaked.
    Iwami and his colleagues suggest that human behavioral changes in response to the virus, designed to limit transmission, were increasing the selection pressure on the virus. This caused SARS-CoV-2 to be transmitted mainly during the asymptomatic and presymptomatic periods, which occur earlier in its infectious cycle. As a result, the viral load peak advanced to this period to spread more effectively in the earlier pre-symptomatic stages.
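    The kind of within-host model involved can be sketched with the standard target-cell-limited equations — a common choice in this literature, though not necessarily the exact model the Nagoya group fitted, and the parameter values below are invented. The sketch reproduces the qualitative pattern described above: a variant that infects cells faster peaks both higher and earlier:

```python
def viral_load_curve(beta, delta=1.0, p=10.0, c=3.0, days=20, dt=0.01):
    """Euler-integrate the target-cell-limited model:
       T' = -beta*T*V,  I' = beta*T*V - delta*I,  V' = p*I - c*V
    T: uninfected target cells, I: infected cells, V: viral load."""
    T, I, V = 1e5, 0.0, 1.0
    loads = []
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, max(V + dt * dV, 0.0)
        loads.append(V)
    peak = max(loads)
    peak_day = loads.index(peak) * dt
    return peak, peak_day

slow_peak, slow_day = viral_load_curve(beta=1e-5)  # lower infectivity
fast_peak, fast_day = viral_load_curve(beta=3e-5)  # higher infectivity
print(fast_peak > slow_peak, fast_day < slow_day)  # → True True
```

Under selection for presymptomatic transmission, variants whose curves peak earlier gain an advantage, which is the shift the study infers from clinical data.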
    When evaluating public health strategies in response to COVID-19 and any future potentially pandemic-causing pathogens, it is necessary to consider the impact of changes in human behavior on virus evolution patterns. “We expect that immune pressure from vaccinations and/or previous infections drives the evolution of SARS-CoV-2,” Iwami said. “However, our study found that human behavior can also contribute to the virus’s evolution in a more complicated manner, suggesting the need to reevaluate virus evolution.”
    Their study suggests the possibility that new strains of coronavirus evolved because of a complex interaction between clinical symptoms and human behavior. The group hopes that their research will speed up the establishment of testing regimes for adaptive treatment, effective screening, and isolation strategies.