More stories


    New hard disk write head analytical technology can increase hard disk capacities

    Using synchrotron radiation at SPring-8 — a large-scale synchrotron radiation facility — Tohoku University, Toshiba Corporation, and the Japan Synchrotron Radiation Research Institute (JASRI) have successfully imaged the magnetization dynamics of a hard disk drive (HDD) write head for the first time, with a precision of one ten-billionth of a second. The method enables precise analysis of write head operations, accelerating the development of next-generation write heads and further increases in HDD capacity.
    Details of the research were published in the Journal of Applied Physics on October 6 and presented at the 44th Annual Conference on Magnetics in Japan on December 14.
    International Data Corporation predicts a five-fold increase in the volume of data generated worldwide in the seven years between 2018 and 2025. HDDs continue to serve as the primary data storage devices in use, and in 2020 the annual total capacity of shipped HDDs is expected to exceed one zettabyte (10²¹ bytes), with sales reaching $20 billion. Securing further increases in HDD capacity and higher data transfer rates with logical write head designs requires an exhaustive and accurate understanding of write head operations.
    There are, however, high barriers to achieving this: current write heads have very fine structures, with dimensions of less than 100 nanometers, and magnetization reversal occurs in less than a nanosecond, rendering experimental observation of write head dynamics difficult. Instead, write head analysis has been conducted through simulations of magnetization dynamics, or indirectly, by evaluating write performance on magnetic recording media. Both approaches have their drawbacks, and there is clear demand for a new method capable of capturing the dynamics of a write head precisely.
    Tohoku University, Toshiba, and JASRI used the scanning soft X-ray magnetic circular dichroism microscope installed on the BL25SU beamline at SPring-8 to develop a new analysis technology for HDD write heads.
    The new technology realizes time-resolved measurements through synchronized timing control, in which a write head is operated at an interval of one-tenth of the cycle of the periodic X-ray pulses generated from the SPring-8 storage ring. Simultaneously, focused X-rays scan the medium-facing surface of the write head, and magnetic circular dichroism images temporal changes in the magnetization. This achieves a temporal resolution of 50 picoseconds and a spatial resolution of 100 nanometers, enabling analysis of the fine structure and fast operation of write heads. The method has the potential to achieve even higher resolutions through improvements to the X-ray focusing optics.
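    To make the timing scheme concrete: the sketch below illustrates equivalent-time (stroboscopic) sampling, the general principle behind pump-probe measurements of this kind. If the drive period is slightly detuned from the probe pulse spacing, successive pulses sample successively later phases of the drive cycle. All numbers are assumptions for illustration, not actual SPring-8 beamline parameters.

    ```python
    # Illustrative equivalent-time sampling; all numbers are assumptions,
    # not actual SPring-8 beamline parameters.
    T_probe = 2.0e-9                 # assumed X-ray pulse spacing (seconds)
    N = 40                           # phase steps per drive cycle (illustrative)
    T_drive = T_probe * (1 + 1 / N)  # head drive period, slightly detuned

    # Phase of the drive cycle seen by each successive probe pulse: the pulses
    # walk through the whole cycle in steps of T_drive - T_probe.
    phases = sorted((k * T_probe) % T_drive for k in range(N + 1))
    step = T_drive - T_probe         # effective temporal resolution
    print(f"phase step: {step:.1e} s")   # 5.0e-11 s, i.e. 50 picoseconds
    ```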
    The development team used the new technology to obtain time-evolved magnetization images of the write head during reversal. The imaging revealed that magnetization reversal of the main pole is completed within a nanosecond, and that spatial magnetization patterns appear in the shield area in response to the main pole reversal. No previous research into write head operations has achieved such high spatial and temporal resolutions, and the approach is expected to support high-precision analyses of write head operations, contributing to the development of next-generation write heads and further improvements in HDD performance.
    Toshiba is currently developing energy-assisted magnetic recording technologies for next-generation HDDs and aims to apply the new analysis method, and the knowledge obtained about write head operations, to the development of write heads for energy-assisted magnetic recording.

    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.


    Physicists observe competition between magnetic orders

    Imagine something as thin as a hair, then a hundred thousand times thinner still — so-called two-dimensional materials, consisting of a single layer of atoms, have been booming in research for years. They became known to a wider audience when two Russian-British scientists were awarded the 2010 Nobel Prize in Physics for the discovery of graphene, a building block of graphite. The special feature of such materials is that they possess novel properties that can only be explained by the laws of quantum mechanics and that may be relevant for future technologies. Researchers at the University of Bonn (Germany) have now used ultracold atoms to gain new insights into previously unknown quantum phenomena. They found that the magnetic orders between two coupled thin films of atoms compete with each other. The study has been published in the journal Nature.
    Quantum systems realize unique states of matter originating from the world of nanostructures. They facilitate a wide variety of new technological applications, e.g. contributing to secure data encryption, introducing ever smaller and faster technical devices, and even enabling the development of a quantum computer. In the future, such a computer could solve problems which conventional computers cannot solve at all, or only very slowly.
    How unusual quantum phenomena arise is still far from fully understood. To shed light on this, a team of physicists led by Prof. Michael Köhl at the Matter and Light for Quantum Computing Cluster of Excellence at the University of Bonn is using so-called quantum simulators, which mimic the interaction of several quantum particles — something that cannot be done with conventional methods. Even state-of-the-art computer models cannot calculate complex processes such as magnetism and electricity down to the last detail.
    Ultracold atoms simulate solids
    The simulator used by the scientists consists of ultracold atoms — ultracold because their temperature is only a millionth of a degree above absolute zero. The atoms are cooled down using lasers and magnetic fields, and are held in optical lattices, i.e. standing waves formed by superimposing laser beams. In this way, the atoms simulate the behavior of electrons in a solid. The experimental setup allows the scientists to perform a wide variety of experiments without external modifications.
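    As a rough illustration of what such a lattice is, the snippet below evaluates the textbook standing-wave potential created by two counter-propagating laser beams, V(x) = V0 sin²(kx); the wavelength and lattice depth are assumed values, not those of the Bonn setup.

    ```python
    import numpy as np

    # Textbook optical-lattice potential from two counter-propagating beams:
    # V(x) = V0 * sin^2(k x). Numbers are illustrative, not from the experiment.
    wavelength = 1.064e-6          # assumed laser wavelength, meters
    k = 2 * np.pi / wavelength     # laser wavenumber
    V0 = 1.0                       # lattice depth, arbitrary units

    x = np.linspace(0, 2 * wavelength, 9)   # sample points, quarter-wavelength apart
    V = V0 * np.sin(k * x) ** 2             # atoms collect at the potential minima
    print(np.round(V, 3))                   # minima repeat every half wavelength
    ```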
    Within the quantum simulator, the scientists have, for the first time, succeeded in measuring the magnetic correlations of exactly two coupled layers of a crystal lattice. “Via the strength of this coupling, we were able to rotate the direction in which magnetism forms by 90 degrees — without changing the material in any other way,” first authors Nicola Wurz and Marcell Gall, doctoral students in Michael Köhl’s research group, explain.
    To study the distribution of atoms in the optical lattice, the physicists used a high-resolution microscope with which they could measure magnetic correlations between the individual lattice layers. In this way, they investigated the magnetic order, i.e. the mutual alignment of the atomic magnetic moments, in the simulated solid. They observed that the magnetic order between the layers competed with the original order within each layer: the more strongly the layers were coupled, the stronger the correlations that formed between them, while the correlations within individual layers were reduced.
    The new results make it possible to better understand, at the microscopic level, how magnetism propagates in coupled layer systems. In the future, the findings should help make predictions about material properties and achieve new functionalities of solids, among other things. Since high-temperature superconductivity, for example, is closely linked to magnetic couplings, the new findings could, in the long run, contribute to the development of new technologies based on such superconductors.
    The Matter and Light for Quantum Computing (ML4Q) Cluster of Excellence
    The Matter and Light for Quantum Computing (ML4Q) Cluster of Excellence is a research cooperation between the universities of Cologne, Aachen and Bonn, and Forschungszentrum Jülich. It is funded as part of the Excellence Strategy of the German federal and state governments. The aim of ML4Q is to develop new computing and networking architectures using the principles of quantum mechanics. ML4Q builds on and extends the complementary expertise in three key research fields: solid-state physics, quantum optics, and quantum information science.
    The Cluster of Excellence is embedded in the Transdisciplinary Research Area “Building Blocks of Matter and Fundamental Interactions” at the University of Bonn. In six different TRAs, scientists from a wide range of faculties and disciplines come together to work on future-relevant research topics.

    Story Source:
    Materials provided by University of Bonn. Note: Content may be edited for style and length.


    Old silicon learns new tricks

    Ultrasmall integrated circuits have revolutionized mobile phones, home appliances, cars, and other everyday technologies. To further miniaturize electronics and enable advanced functions, circuits must be reliably fabricated in three dimensions. Achieving ultrafine 3D shape control by etching into silicon is difficult because even atomic-scale damage reduces device performance. In a new study published in Crystal Growth & Design, researchers at Nara Institute of Science and Technology (NAIST) report silicon etched into atomically smooth pyramids. Coating these silicon pyramids with a thin layer of iron imparts magnetic properties that until now had only been predicted theoretically.
    NAIST researcher and senior author of the study Ken Hattori is widely published in the field of atomically controlled nanotechnology. One focus of Hattori’s research is in improving the functionality of silicon-based technology.
    “Silicon is the workhorse of modern electronics because it can act as a semiconductor or an insulator, and it’s an abundant element. However, future technological advances require atomically smooth device fabrication in three dimensions,” says Hattori.
    Fabricating arrays of pyramid-shaped silicon nanostructures requires a combination of standard dry etching and chemical etching, and until now such atomically smooth surfaces have been extremely challenging to prepare.
    “The isosceles silicon pyramids in our ordered array were all the same size and had flat facet planes. We confirmed these findings by low-energy electron diffraction patterns and electron microscopy,” explains lead author of the study Aydar Irmikimov.
    An ultrathin — 30 nanometer — layer of iron was deposited onto the silicon to impart unusual magnetic properties. The pyramids’ atomic-level orientation defined the orientation, and thus the properties, of the overlying iron.
    “Epitaxial growth of the iron enabled shape anisotropy of the nanofilm. The curve of magnetization as a function of magnetic field was rectangular-like in shape, but with breaking points caused by the asymmetric motion of a magnetic vortex bound at the pyramid apex,” explains Hattori.
    The researchers found that the curve had no breaking points in analogous experiments performed on planar iron-coated silicon. Other researchers have theoretically predicted the anomalous curve for pyramid shapes, but the NAIST researchers are the first to have shown it in a real nanostructure.
    “Our technology will enable fabrication of a circular magnetic array simply by tuning the shape of the substrate,” says Irmikimov. Integration into advanced technologies such as spintronics — which encodes information in the spin, rather than the electrical charge, of an electron — will considerably expand the functionality of 3D electronics.

    Story Source:
    Materials provided by Nara Institute of Science and Technology. Note: Content may be edited for style and length.


    Light-carrying chips advance machine learning

    In the digital age, data traffic is growing at an exponential rate. The demands on computing power from applications in artificial intelligence, in particular pattern and speech recognition, or from self-driving vehicles, often exceed the capacities of conventional computer processors. Working together with an international team, researchers at the University of Münster are developing new approaches and process architectures which can cope with these tasks extremely efficiently. They have now shown that so-called photonic processors, with which data is processed by means of light, can process information much more rapidly and in parallel — something electronic chips are incapable of doing.
    Background and methodology
    Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at enormously fast speeds (10¹²-10¹⁵ operations per second). Conventional chips such as graphics cards or specialized hardware like Google’s TPU (Tensor Processing Unit) are based on electronic data transfer and are much slower. The team of researchers led by Prof. Wolfram Pernice from the Institute of Physics and the Center for Soft Nanoscience at the University of Münster implemented a hardware accelerator for so-called matrix multiplications, which represent the main processing load in the computation of neural networks. Neural networks are a series of algorithms which simulate the human brain. This is helpful, for example, for classifying objects in images and for speech recognition.
    The researchers combined the photonic structures with phase-change materials (PCMs) as energy-efficient storage elements. PCMs are commonly used in optical data storage, for example on DVDs and Blu-ray discs. In the new processor, they make it possible to store and preserve the matrix elements without the need for an energy supply. To carry out matrix multiplications on multiple data sets in parallel, the Münster physicists used a chip-based frequency comb as a light source. A frequency comb provides a variety of optical wavelengths which are processed independently of one another in the same photonic chip. As a result, this enables highly parallel data processing by calculating on all wavelengths simultaneously — also known as wavelength multiplexing. “Our study is the first to apply frequency combs in the field of artificial neural networks,” says Wolfram Pernice.
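    A minimal numerical sketch of that idea, with the optics abstracted away: each comb wavelength carries its own input vector through the same stored weight matrix, so a single pass yields many products at once. All sizes and values here are hypothetical.

    ```python
    import numpy as np

    # Wavelength multiplexing, modeled abstractly: one weight matrix (stored in
    # PCM cells on the chip) multiplies several inputs "simultaneously", one
    # input vector per comb wavelength. Here that is simply a batched product.
    rng = np.random.default_rng(0)
    n_wavelengths, n_in, n_out = 4, 8, 8        # illustrative sizes
    W = rng.random((n_out, n_in))               # weights held in PCM cells
    inputs = rng.random((n_wavelengths, n_in))  # one vector per wavelength

    outputs = inputs @ W.T                      # all wavelengths in one pass
    print(outputs.shape)                        # (4, 8): one result per wavelength
    ```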
    In the experiment the physicists used a so-called convolutional neural network for the recognition of handwritten numbers. These networks are a concept in the field of machine learning inspired by biological processes. They are used primarily in the processing of image or audio data, as they currently achieve the highest classification accuracies. “The convolutional operation between input data and one or more filters — which can be a highlighting of edges in a photo, for example — can be transferred very well to our matrix architecture,” explains Johannes Feldmann, the lead author of the study. “Exploiting light for signal transfer enables the processor to perform parallel data processing through wavelength multiplexing, which leads to a higher computing density and many matrix multiplications being carried out in just one timestep. In contrast to traditional electronics, which usually work in the low GHz range, optical modulation can reach speeds of 50 to 100 GHz.” This means that the process permits data rates and computing densities, i.e. operations per area of processor, never previously attained.
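    As a loose illustration of why convolutions map so well onto a matrix accelerator, the snippet below recasts a small 2D convolution as a single matrix-vector product (the standard "im2col" trick); the image and filter are arbitrary.

    ```python
    import numpy as np

    # Recasting a convolution as one matrix multiplication: gather every 3x3
    # patch of the image into a row, then multiply by the flattened filter.
    image = np.arange(25.0).reshape(5, 5)   # toy 5x5 "image" (a smooth ramp)
    kernel = np.array([[1., 0., -1.]] * 3)  # crude vertical-edge filter

    patches = np.array([image[i:i+3, j:j+3].ravel()
                        for i in range(3) for j in range(3)])  # 9 patches x 9 values
    result = patches @ kernel.ravel()        # one matrix-vector product
    # Same values as sliding the filter directly; constant here because the
    # test image is a smooth ramp.
    print(result.reshape(3, 3))
    ```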
    The results have a wide range of applications. In the field of artificial intelligence, for example, more data can be processed simultaneously while saving energy. The use of larger neural networks allows more accurate, and hitherto unattainable, forecasts and more precise data analysis. For example, photonic processors can support the evaluation of large quantities of data in medical diagnoses, such as the high-resolution 3D data produced by special imaging methods. Further applications are in the fields of self-driving vehicles, which depend on the fast evaluation of sensor data, and of IT infrastructures such as cloud computing, which provide storage space, computing power and applications software.

    Story Source:
    Materials provided by University of Münster. Note: Content may be edited for style and length.


    A bit too much: Reducing the bit width of Ising models for quantum annealing

    Given a list of cities and the distances between each pair of cities, how do you determine the shortest route that visits each city exactly once and returns to the starting location? This famous problem is called the “traveling salesman problem” and is an example of a combinatorial optimization problem. Solving these problems using conventional computers can be very time-consuming, and special devices called “quantum annealers” have been created for this purpose.
    Quantum annealers are designed to find the lowest energy state (or “ground state”) of what’s known as an “Ising model.” Such models are abstract representations of a quantum mechanical system involving interacting spins that are also influenced by external magnetic fields. In the late 1990s, scientists found that combinatorial optimization problems could be formulated as Ising models, which in turn could be physically implemented in quantum annealers. To obtain the solution to a combinatorial optimization problem, one simply has to observe the ground state reached in its associated quantum annealer after a short time.
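    To make the abstraction concrete, here is a minimal sketch of an Ising energy function with made-up couplings, solved by brute force; an annealer instead relaxes physically toward the same minimum-energy state.

    ```python
    import numpy as np
    from itertools import product

    # Toy Ising model: E(s) = -sum_{i<j} J[i][j]*s[i]*s[j] - sum_i h[i]*s[i],
    # with illustrative couplings J and fields h (not taken from the paper).
    J = np.array([[0.0, 1.0, -2.0],
                  [0.0, 0.0, 3.0],
                  [0.0, 0.0, 0.0]])   # upper-triangular spin-spin interactions
    h = np.array([1.0, 0.0, -1.0])    # external magnetic fields

    def energy(s):
        return -(s @ J @ s) - h @ s

    # Brute force over all 2^3 spin configurations (each spin is +1 or -1).
    ground = min(product([-1, 1], repeat=3), key=lambda s: energy(np.array(s)))
    print(ground, energy(np.array(ground)))
    ```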
    One of the biggest challenges in this process is the transformation of the “logical” Ising model into a physically implementable Ising model suitable for quantum annealing. Sometimes, the numerical values of the spin interactions or the external magnetic fields require more bits to represent them (a larger “bit width”) than a physical system can accommodate. This severely limits the versatility and applicability of quantum annealers to real-world problems. Fortunately, in a recent study published in IEEE Transactions on Computers, scientists from Japan have tackled this issue. Based purely on mathematical theory, they developed a method by which a given logical Ising model can be transformed into an equivalent model with a desired bit width so as to make it “fit” a desired physical implementation.
    Their approach consists of adding auxiliary spins to the Ising model for problematic interactions or magnetic fields, in such a way that the ground state (solution) of the transformed model is the same as that of the original model while requiring a lower bit width. The technique is relatively simple and guaranteed to produce an equivalent Ising model with the same solution as the original. “Our strategy is the world’s first to efficiently and theoretically address the bit-width reduction problem in the spin interactions and magnetic field coefficients in Ising models,” remarks Professor Nozomu Togawa from Waseda University, Japan, who led the study.
    The scientists also put their method to the test in several experiments, which further confirmed its validity. Prof. Togawa has high hopes, and he concludes by saying, “The approach developed in this study will widen the applicability of quantum annealers and make them much more attractive for people dealing with not only physical Ising models but all kinds of combinatorial optimization problems. Such problems are common in cryptography, logistics, and artificial intelligence, among many other fields.”

    Story Source:
    Materials provided by Waseda University. Note: Content may be edited for style and length.


    'Virtual biopsies' could replace tissue biopsies in future thanks to new technique

    Cancer researchers at the University of Cambridge have developed a new advanced computing technique that uses routine medical scans to enable doctors to take fewer, more accurate tumour biopsies.
    This is an important step towards precision tissue sampling for cancer patients to help select the best treatment. In future the technique could even replace clinical biopsies with ‘virtual biopsies’, sparing patients invasive procedures.
    The research, published in European Radiology, shows that combining computed tomography (CT) scans with ultrasound images creates a visual guide that helps doctors sample the full complexity of a tumour with fewer targeted biopsies.
    Capturing the patchwork of different types of cancer cell within a tumour — known as tumour heterogeneity — is critical for selecting the best treatment because genetically-different cells may respond differently to treatment.
    Most cancer patients undergo one or several biopsies to confirm diagnosis and plan their treatment. But because this is an invasive clinical procedure, there is an urgent need to reduce the number of biopsies taken and to make sure biopsies accurately sample the genetically-different cells in the tumour, particularly for ovarian cancer patients.
    High-grade serous ovarian (HGSO) cancer, the most common type of ovarian cancer, is referred to as a ‘silent killer’ because early symptoms can be difficult to pick up. By the time the cancer is diagnosed, it is often at an advanced stage, and survival rates have not changed much over the last 20 years.


    But late diagnosis isn’t the only problem. HGSO tumours tend to have a high level of tumour heterogeneity and patients with more genetically-different patches of cancer cells tend to have a poorer response to treatment.
    Professor Evis Sala from the Department of Radiology, co-lead of the CRUK Cambridge Centre Advanced Cancer Imaging Programme, leads a multi-disciplinary team of radiologists, physicists, oncologists and computational scientists who use innovative computing techniques to reveal tumour heterogeneity from standard medical images. This new study, led by Professor Sala, involved a small group of patients with advanced ovarian cancer who were due to have ultrasound-guided biopsies prior to starting chemotherapy.
    For the study, the patients first had a standard-of-care CT scan. A CT scanner uses x-rays and computing to create a 3D image of the tumour from multiple image ‘slices’ through the body.
    The researchers then used a process called radiomics — using high-powered computing methods to analyse and extract additional information from the data-rich images created by the CT scanner — to identify and map distinct areas and features of the tumour. The tumour map was then superimposed on the ultrasound image of the tumour and the combined image used to guide the biopsy procedure.
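    As a loose illustration of the radiomics idea (not the Cambridge team's pipeline), the sketch below computes one toy texture feature per voxel of a stand-in image and clusters the voxels into "habitats"; real pipelines extract hundreds of validated features from actual CT volumes, and every name and number here is illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Toy habitat mapping: one local texture feature per voxel, then cluster
    # voxels into regions of similar texture. Purely illustrative.
    rng = np.random.default_rng(1)
    image = rng.random((32, 32))            # stand-in for one CT slice

    # Simple local feature: mean intensity over each 3x3 neighborhood.
    pad = np.pad(image, 1, mode="edge")
    local_mean = np.stack([pad[i:i+32, j:j+32]
                           for i in range(3) for j in range(3)]).mean(axis=0)

    features = np.column_stack([image.ravel(), local_mean.ravel()])
    habitats = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    print(habitats.reshape(32, 32)[:4, :4])  # habitat label per voxel
    ```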
    By taking targeted biopsies using this method, the research team reported that the diversity of cancer cells within the tumour was successfully captured.


    Co-first author Dr Lucian Beer, from the Department of Radiology and CRUK Cambridge Centre Ovarian Cancer Programme, said of the results: “Our study is a step forward to non-invasively unravel tumour heterogeneity by using standard-of-care CT-based radiomic tumour habitats for ultrasound-guided targeted biopsies.”
    Co-first author Paula Martin-Gonzalez, from the Cancer Research UK Cambridge Institute and CRUK Cambridge Centre Ovarian Cancer Programme, added: “We will now be applying this method in a larger clinical study.”
    Professor Sala said: “This study provides an important milestone towards precision tissue sampling. We are truly pushing the boundaries in translating cutting edge research to routine clinical care.”
    Fiona Barve (56) is a science teacher who lives near Cambridge. She was diagnosed with ovarian cancer in 2017 after visiting her doctor with abdominal pain. She was diagnosed with stage 4 ovarian cancer and immediately underwent surgery and a course of chemotherapy. Since March 2019 she has been cancer free and is now back to teaching three days a week.
    “I was diagnosed at a late stage and I was fortunate my surgery, which I received within four weeks of being diagnosed, and chemotherapy worked for me. I feel lucky to be around,” said Barve.
    “When you are first undergoing the diagnosis of cancer, you feel as if you are on a conveyor belt, every part of the journey being extremely stressful. This new enhanced technique will reduce the need for several procedures and allow patients more time to adjust to their circumstances. It will enable more accurate diagnosis with less invasion of the body and mind. This can only be seen as positive progress.”


    COVID-19 unmasked: Math model suggests optimal treatment strategies

    Getting control of COVID-19 will take more than widespread vaccination; it will also require better understanding of why the disease causes no apparent symptoms in some people but leads to rapid multi-organ failure and death in others, as well as better insight into what treatments work best and for which patients.
    To meet this unprecedented challenge, researchers at Massachusetts General Hospital (MGH), in collaboration with investigators from Brigham and Women’s Hospital and the University of Cyprus, have created a mathematical model based on biology that incorporates information about the known infectious machinery of SARS-CoV-2, the virus that causes COVID-19, and about the potential mechanisms of action of various treatments that have been tested in patients with COVID-19.
    The model and its important clinical applications are described in the journal Proceedings of the National Academy of Sciences (PNAS).
    “Our model predicts that antiviral and anti-inflammatory drugs that were first employed to treat COVID-19 might have limited efficacy, depending on the stage of the disease progression,” says corresponding author Rakesh K. Jain, PhD, from the Edwin L. Steele Laboratories in the Department of Radiation Oncology at MGH and Harvard Medical School (HMS).
    Jain and colleagues found that in all patients, the viral load (the level of SARS-CoV-2 particles in the bloodstream) increases during early lung infection, but then may go in different directions starting after Day 5, depending on levels of key immune guardian cells, called T cells. T cells are the first responders of the immune system that effectively coordinate other aspects of immunity. The T cell response is known as adaptive immunity because it is flexible and responds to immediate threats.
    In patients younger than 35 who have healthy immune systems, a sustained recruitment of T cells occurs, accompanied by a reduction in viral load and inflammation and a decrease in nonspecific immune cells (so-called “innate” immunity). All of these processes lead to lower risk for blood clot formation and to restoring oxygen levels in lung tissues, and these patients tend to recover.
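    The flavor of such dynamics can be captured in a few lines. Below is a deliberately minimal viral-load sketch (not the MGH/PNAS model): the virus grows exponentially early in infection, while a T-cell term ramps up and eventually clears it. Every parameter is invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Minimal two-variable sketch: viral load V and adaptive T-cell response T.
    # Parameters are illustrative, not fitted to any data.
    def rhs(y, t, growth=1.2, kill=0.8, recruit=0.4):
        V, T = y
        dV = growth * V - kill * T * V   # replication minus T-cell clearance
        dT = recruit * V / (1 + V)       # T cells recruited by the infection
        return [dV, dT]

    t = np.linspace(0, 15, 151)          # days
    V, T = odeint(rhs, [0.01, 0.0], t).T
    print(f"peak viral load near day {t[np.argmax(V)]:.1f}")
    ```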


    In contrast, people who have higher levels of inflammation at the time of infection — such as those with diabetes, obesity or high blood pressure — or whose immune systems are tilted toward more active innate immune responses but less effective adaptive immune responses tend to have poor outcomes.
    The investigators also sought to answer the question of why men tend to have more severe COVID-19 compared with women, and found that although the adaptive immune response is not as vigorous in women as in men, women have lower levels of a protein called TMPRSS2 that allows SARS-CoV-2 to enter and infect normal cells.
    Based on their findings, Jain and colleagues propose that optimal treatment for older patients — who are likely to already have inflammation and impaired immune responses compared with younger patients — should include the clot-preventing drug heparin and/or the use of an immune response-modifying drug (checkpoint inhibitor) in early stages of the disease, and the anti-inflammatory drug dexamethasone at later stages.
    In patients with pre-existing conditions such as obesity, diabetes and high blood pressure or immune system abnormalities, treatment might also include drugs specifically targeted against inflammation-promoting substances (cytokines, such as interleukin-6) in the body, as well as drugs that can inhibit the renin-angiotensin system (the body’s main blood pressure control mechanism), thereby preventing the abnormal increases in blood pressure and resistance to blood flow that can occur in response to viral infections.
    This work shows how tools originally developed for cancer research can be useful for understanding COVID-19: The model was first created to analyze involvement of the renin-angiotensin system in the development of fibrous tissues in tumors, but was modified to include SARS-CoV-2 infection and COVID-19-specific mechanisms. The team is further developing the model and plans to use it to examine the dynamics of the immune system in response to different types of COVID-19 vaccines as well as cancer-specific comorbidities that might require special considerations for treatment.
    Co-corresponding authors are Lance L. Munn, MGH, and Triantafyllos Stylianopoulos, University of Cyprus. Other authors are Chrysovalantis Voutouri, U. Cyprus; Mohammad Reza Nikmaneshi, Sharif University of Technology, Iran; C. Corey Hardin, Melin J. Khandekar and Sayon Dutta, all from MGH; and Ankit B. Patel and Ashish Verma from Brigham and Women’s Hospital.
    Jain’s research is supported by an Investigator Award and grants from the National Foundation for Cancer Research, Jane’s Trust Foundation, American Medical Research Foundation and Harvard Ludwig Cancer Center. Munn’s research is supported by a National Institutes of Health grant. Stylianopoulos’s research is supported by the European Research Council and Cyprus Research and Innovation Foundation. Patel is supported by an American Society of Nephrology Joseph A. Carlucci Research Fellowship.


    Advanced materials in a snap

    If everything moved 40,000 times faster, you could eat a fresh tomato three minutes after planting a seed. You could fly from New York to L.A. in half a second. And you’d have waited in line at airport security for that flight for 30 milliseconds.
    Thanks to machine learning, designing materials for new, advanced technologies could accelerate that much.
    A research team at Sandia National Laboratories has successfully used machine learning — computer algorithms that improve themselves by learning patterns in data — to complete cumbersome materials science calculations more than 40,000 times faster than normal.
    Their results, published Jan. 4 in npj Computational Materials, could herald a dramatic acceleration in the creation of new technologies for optics, aerospace, energy storage and potentially medicine while simultaneously saving laboratories money on computing costs.
    “We’re shortening the design cycle,” said David Montes de Oca Zapiain, a computational materials scientist at Sandia who helped lead the research. “The design of components grossly outpaces the design of the materials you need to build them. We want to change that. Once you design a component, we’d like to be able to design a compatible material for that component without needing to wait for years, as it happens with the current process.”
    The research, funded by the U.S. Department of Energy’s Basic Energy Sciences program, was conducted at the Center for Integrated Nanotechnologies, a DOE user research facility jointly operated by Sandia and Los Alamos national labs.


    Machine learning speeds up computationally expensive simulations
    Sandia researchers used machine learning to accelerate a computer simulation that predicts how changing a design or fabrication process, such as tweaking the amounts of metals in an alloy, will affect a material. A project might require thousands of simulations, which can take weeks, months or even years to run.
    The team clocked a single, unaided simulation on a high-performance computing cluster with 128 processing cores (a typical home computer has two to six processing cores) at 12 minutes. With machine learning, the same simulation took 60 milliseconds using only 36 cores, equivalent to a roughly 42,000-fold speedup on equal hardware. This means researchers can now learn in under 15 minutes what would normally take a year.
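    Those figures are straightforward to check (a back-of-the-envelope calculation, not taken from the paper):

    ```python
    # Sanity check on the reported speedup: 12 minutes on 128 cores versus
    # 60 milliseconds on 36 cores. Normalizing to equal core counts
    # reproduces the ~42,000x figure.
    t_standard = 12 * 60        # seconds, on 128 cores
    t_ml = 0.060                # seconds, on 36 cores

    raw_speedup = t_standard / t_ml            # 12,000x, ignoring core counts
    core_adjusted = raw_speedup * (128 / 36)   # ~42,700x on equal hardware
    print(round(raw_speedup), round(core_adjusted))
    ```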
    Sandia’s new algorithm arrived at an answer that was 5% different from the standard simulation’s result, a very accurate prediction for the team’s purposes. Machine learning trades some accuracy for speed because it makes approximations to shortcut calculations.
    “Our machine-learning framework achieves essentially the same accuracy as the high-fidelity model but at a fraction of the computational cost,” said Sandia materials scientist Rémi Dingreville, who also worked on the project.
    Benefits could extend beyond materials
    Dingreville and Montes de Oca Zapiain are going to use their algorithm first to research ultrathin optical technologies for next-generation monitors and screens. Their research, though, could prove widely useful because the simulation they accelerated describes a common event — the change, or evolution, of a material’s microscopic building blocks over time.
    Machine learning previously has been used to shortcut simulations that calculate how interactions between atoms and molecules change over time. The published results, however, demonstrate the first use of machine learning to accelerate simulations of materials at relatively large, microscopic scales, which the Sandia team expects will be of greater practical value to scientists and engineers.
    For instance, scientists can now quickly simulate how minuscule droplets of melted metal will glob together when they cool and solidify, or conversely, how a mixture will separate into layers of its constituent parts when it melts. Many other natural phenomena, including the formation of proteins, follow similar patterns. And while the Sandia team has not tested the machine-learning algorithm on simulations of proteins, they are interested in exploring the possibility in the future.
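    For readers curious what such a simulation looks like in miniature, below is a textbook one-dimensional phase-field (Cahn-Hilliard) toy of a mixture separating into its constituent parts. It is a generic illustration, not Sandia's accelerated model or its machine-learning surrogate; all parameters are arbitrary.

    ```python
    import numpy as np

    # Textbook 1D Cahn-Hilliard dynamics: a nearly uniform mixture with small
    # noise spontaneously separates into regions near c = -1 and c = +1.
    n, dx, dt, gamma = 128, 1.0, 0.01, 1.0
    rng = np.random.default_rng(0)
    c = 0.02 * rng.standard_normal(n)      # nearly uniform mixture plus noise

    def lap(f):                            # periodic 1D Laplacian
        return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

    for _ in range(20000):
        mu = c**3 - c - gamma * lap(c)     # chemical potential
        c += dt * lap(mu)                  # conserved (Cahn-Hilliard) update

    print(np.round(c[::16], 2))            # values cluster near -1 and +1
    ```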