More stories

  • Artificial intelligence paves the way to discovering new rare-earth compounds

    Artificial intelligence advances how scientists explore materials. Researchers from Ames Laboratory and Texas A&M University trained a machine-learning (ML) model to assess the stability of rare-earth compounds. This work was supported by the Laboratory Directed Research and Development (LDRD) program at Ames Laboratory. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities.
    Ames Lab has been a leader in rare-earth research since the middle of the 20th century. Rare-earth elements have a wide range of uses, including clean energy technologies, energy storage, and permanent magnets. The discovery of new rare-earth compounds is part of a larger effort by scientists to expand access to these materials.
    The present approach is based on machine learning, a form of artificial intelligence (AI) driven by computer algorithms that improve through data and experience. The researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.
    High-throughput screening is a computational scheme that allows a researcher to test hundreds of models quickly. DFT is a quantum mechanical method used to investigate the thermodynamic and electronic properties of many-body systems. Based on this collection of information, the ML model uses regression learning to assess the phase stability of compounds.
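    As a rough illustration of what such regression learning can look like (this is not the Ames Laboratory code; the descriptors, model type, and stability criterion below are assumptions), a model can be fit to DFT-derived formation energies and then used to flag promising compositions:

    ```python
    # Hypothetical sketch: regression on DFT-derived data to screen compositions.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy stand-in for a DFT database: composition descriptors -> formation energy (eV/atom).
    X = rng.uniform(size=(500, 4))   # e.g. radii, electronegativities, valence counts, mixing ratio
    y = 0.3 * X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.normal(size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingRegressor().fit(X_train, y_train)
    E_form = model.predict(X_test)

    # Flag a composition as potentially stable if its predicted formation energy
    # falls below a chosen threshold (an illustrative criterion, not the paper's).
    candidates = X_test[E_form < -0.05]
    print(f"{len(candidates)} compositions flagged for DFT/experimental follow-up")
    ```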
    Tyler Del Rose, an Iowa State University graduate student, conducted much of the foundational research needed for the database, writing algorithms to search the web for information to supplement the database and the DFT calculations. He also worked on experimental validation of the AI predictions and helped improve the ML-based models by ensuring they were representative of reality.
    “Machine learning is really important here because when we are talking about new compositions, ordered materials are all very well known to everyone in the rare earth community,” said Ames Laboratory Scientist Prashant Singh, who led the DFT plus machine learning effort with Guillermo Vazquez and Raymundo Arroyave. “However, when you add disorder to known materials, it’s very different. The number of compositions becomes significantly larger, often thousands or millions, and you cannot investigate all the possible combinations using theory or experiments.”
    Singh explained that the material analysis is based on a discrete feedback loop in which the AI/ML model is updated using a new DFT database built from real-time structural and phase information obtained from experiments. This process ensures that information is carried from one step to the next and reduces the chance of making mistakes.
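    A minimal sketch of such a feedback loop is given below, with a stub standing in for the DFT calculations and experiments; every value and helper function here is illustrative, not the team's actual workflow:

    ```python
    # Hypothetical feedback loop: predict, pick a candidate, compute new data, retrain.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)

    def run_dft(x):
        """Stub standing in for a DFT calculation (returns a fake formation energy)."""
        return 0.3 * x[0] - 0.5 * x[1] + 0.02 * rng.normal()

    pool = rng.uniform(size=(200, 4))          # unexplored candidate compositions
    X_db = [row for row in rng.uniform(size=(10, 4))]
    y_db = [run_dft(x) for x in X_db]          # small initial database

    model = Ridge().fit(X_db, y_db)
    for _ in range(5):                         # the discrete feedback loop
        pred = model.predict(pool)
        best = pool[pred.argmin()]             # most promising composition this round
        X_db.append(best)                      # add the new "DFT/experimental" result
        y_db.append(run_dft(best))
        model = Ridge().fit(X_db, y_db)        # update the model before the next step
    print(f"database grew to {len(X_db)} entries")
    ```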
    Yaroslav Mudryk, the project supervisor, said that the framework was designed to explore rare-earth compounds because of their technological importance, but its application is not limited to rare-earth research. The same approach can be used to train ML models to predict magnetic properties of compounds, develop process controls for transformative manufacturing, and optimize mechanical behaviors.
    “It’s not really meant to discover a particular compound,” Mudryk said. “It was, how do we design a new approach or a new tool for discovery and prediction of rare earth compounds? And that’s what we did.”
    Mudryk emphasized that this work is just the beginning. The team is exploring the full potential of this method, but they are optimistic that there will be a wide range of applications for the framework in the future.
    Story Source:
    Materials provided by DOE/Ames Laboratory. Note: Content may be edited for style and length.

  • Researchers develop the world's first power-free frequency tuner using nanomaterials

    In a paper published today in Nature Communications, researchers at the University of Oxford and the University of Pennsylvania have found a power-free and ultra-fast way of frequency tuning using functional nanowires.
    Think of an orchestra warming up before the performance. The oboe starts to play a perfect A note at a frequency of 440 Hz while all the other instruments adjust themselves to that frequency. Telecommunications technology relies on this very concept of matching the frequencies of transmitters and receivers. In practice, this is achieved when both ends of the communication link tune into the same frequency channel.
    In today’s colossal communications networks, the ability to reliably synthesise as many frequencies as possible and to rapidly switch from one to another is paramount for seamless connectivity.
    Researchers at the University of Oxford and the University of Pennsylvania have fabricated vibrating nanostrings of a chalcogenide glass (germanium telluride) that resonate at predetermined frequencies, just like guitar strings. To tune the frequency of these resonators, the researchers switch the atomic structure of the material, which in turn changes the mechanical stiffness of the material itself.
    This differs from existing approaches, which apply mechanical stress to the nanostrings, much like tuning a guitar with its tuning pegs. That approach translates into higher power consumption because the adjustment is not permanent and a voltage must be maintained to hold the tension.
    Utku Emre Ali, who completed the research as part of his doctoral work at the University of Oxford, said:
    ‘By changing how atoms bond with each other in these glasses, we are able to change the Young’s modulus within a few nanoseconds. Young’s modulus is a measure of stiffness, and it directly affects the frequency at which the nanostrings vibrate.’
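    For an unstressed, doubly clamped beam, standard beam mechanics gives a fundamental resonance of roughly f1 ≈ 1.03·(t/L²)·√(E/ρ), so a change in Young’s modulus E shifts the frequency directly. The sketch below illustrates that scaling with assumed, round-number dimensions and moduli rather than the values reported in the paper:

    ```python
    # Illustrative only: how a stiffness change shifts a nanostring's resonance.
    from math import sqrt

    L   = 10e-6     # nanostring length: 10 micrometres (assumed)
    t   = 100e-9    # thickness: 100 nanometres (assumed)
    rho = 6000.0    # mass density in kg/m^3 (rough figure for a chalcogenide glass)

    def fundamental_frequency(E):
        """Fundamental flexural mode of a clamped-clamped rectangular beam."""
        return 1.03 * (t / L**2) * sqrt(E / rho)

    for label, E in [("softer amorphous phase (assumed 20 GPa)", 20e9),
                     ("stiffer crystalline phase (assumed 45 GPa)", 45e9)]:
        print(f"{label}: f1 = {fundamental_frequency(E) / 1e6:.2f} MHz")
    ```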

  • Making memory serve correctly: Fixing an inherent problem in next-generation magnetic RAM

    With the advent of the Internet of Things (IoT) era, many researchers are focused on making most of the technologies involved more sustainable. To reach this target of ‘green IoT,’ some of the building blocks of conventional electronics will have to be improved or radically changed to make them not only faster, but also more energy efficient. In line with this reasoning, many scientists worldwide are currently trying to develop and commercialize a new type of random-access memory (RAM) that will enable ultra-low-power electronics: magnetic RAMs.
    Each memory cell in a magnetic RAM stores either a ‘1’ or a ‘0’ depending on whether the magnetic orientations of its two magnetic layers are parallel or opposite to each other. Various types of magnetic RAM exist, and they mainly differ in how they modify the magnetic orientation of the magnetic layers when writing to a memory cell. In particular, spin-transfer torque RAM, or STT-RAM, is one type of magnetic memory that is already being commercialized. However, to achieve even lower write currents and higher reliability, a new type of magnetic memory called spin-orbit torque RAM (SOT-RAM) is being actively researched.
    In SOT-RAM, by leveraging spin-orbit interactions, the write current can be immensely reduced, which lowers power consumption. Moreover, since the memory readout and write current paths are different, researchers initially thought that the potential disturbances on the stored values would also be small when either reading or writing. Unfortunately, this turned out not to be the case.
    In 2017, in a study led by Professor Takayuki Kawahara of Tokyo University of Science, Japan, researchers reported that SOT-RAMs face an additional source of disturbance when reading a stored value. In conventional SOT-RAMs, the readout current actually shares part of the path of the write current. When reading a value, the readout operation generates unbalanced spin currents due to the spin Hall effect. This can unintentionally flip the stored bit if the effect is large enough, making reading in SOT-RAMs less reliable.
    To address this problem, Prof. Kawahara and colleagues conducted another study, which was recently published in IEEE Transactions on Magnetics. The team came up with a new reading method for SOT-RAMs that can nullify this new source of readout disturbance. In short, their idea is to alter the original SOT-RAM structure to create a bi-directional read path. When reading a value, the read current flows out of the magnetic layers in two opposite directions simultaneously. In turn, the disturbances produced by the spin currents generated on each side end up cancelling each other out. An explainer video on the same topic can be watched here: https://youtu.be/Gbz4rDOs4yQ.
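    A toy way to picture the cancellation (not the authors' device-level simulation): if the readout disturbance is taken as proportional to the net charge current flowing along the shared write path, since the spin Hall effect converts that current into a spin current acting on the stored bit, then splitting the read current into two opposite directions drives the net contribution toward zero. The constant and the current values below are arbitrary:

    ```python
    # Hypothetical toy model of readout disturbance in a SOT-RAM cell.
    def disturbance(currents, k=1.0):
        """Net spin-torque disturbance from a list of signed read currents (A)."""
        return k * sum(currents)

    I_read = 50e-6  # 50 microamps of read current (illustrative value)

    conventional  = disturbance([+I_read])                         # whole current, one direction
    bidirectional = disturbance([+I_read / 2, -I_read / 2])        # split into opposite directions
    imbalanced    = disturbance([+0.55 * I_read, -0.45 * I_read])  # imperfect split in an array

    print(f"conventional read path : {conventional:.2e}")
    print(f"bidirectional read path: {bidirectional:.2e}  (ideally cancels)")
    print(f"imbalanced array cell  : {imbalanced:.2e}  (reduced, not zero)")
    ```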
    In addition to cementing the theory behind this new source of readout disturbance, the researchers conducted a series of simulations to verify the effectiveness of their proposed method. They tested three different types of ferromagnetic materials for the magnetic layers and various device shapes. The results were very favorable, as Prof. Kawahara remarks: “We confirmed that the proposed method reduces the readout disturbance by at least 10 times for all material parameters and device geometries compared with the conventional read path in SOT-RAM.”
    To top things off, the research team checked the performance of their method in the type of realistic array structure that would be used in an actual SOT-RAM. This test is important because the read paths in an array are not perfectly balanced; the degree of imbalance depends on each memory cell's position. The results show that a sufficient reduction in readout disturbance is possible even when connecting about 1,000 memory cells together. The team is now working on improving their method to reach a higher number of integrated cells.
    This study could pave the way toward a new era in low-power electronics, from personal computers and portable devices to large-scale servers. Satisfied with what they have achieved, Prof. Kawahara remarks: “We expect next-generation SOT-RAMs to employ write currents an order of magnitude lower than current STT-RAMs, resulting in significant power savings. The results of our work will help solve one of the inherent problems of SOT-RAMs, which will be essential for their commercialization.” 
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • AI provides accurate breast density classification

    An artificial intelligence (AI) tool can accurately and consistently classify breast density on mammograms, according to a study in Radiology: Artificial Intelligence.
    Breast density reflects the amount of fibroglandular tissue in the breast commonly seen on mammograms. High breast density is an independent breast cancer risk factor, and its masking effect of underlying lesions reduces the sensitivity of mammography. Consequently, many U.S. states have laws requiring that women with dense breasts be notified after a mammogram, so that they can choose to undergo supplementary tests to improve cancer detection.
    In clinical practice, breast density is visually assessed on two-view mammograms, most commonly with the American College of Radiology Breast Imaging-Reporting and Data System (BI-RADS) four-category scale, ranging from Category A for almost entirely fatty breasts to Category D for extremely dense breasts. The system has limitations, as visual classification is prone to inter-observer variability, or the differences in assessments between two or more people, and intra-observer variability, or the differences that appear in repeated assessments by the same person.
    To overcome this variability, researchers in Italy developed software for breast density classification based on deep learning with convolutional neural networks, a sophisticated type of AI capable of discerning subtle patterns in images beyond the capabilities of the human eye. The researchers trained the software, known as TRACE4BDensity, under the supervision of seven experienced radiologists who independently visually assessed 760 mammographic images.
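    The TRACE4BDensity architecture is not described in detail here, but a generic sketch of a convolutional network for four-category (BI-RADS A–D) density classification might look like the following; the layer sizes and input resolution are assumptions, not the tool's actual design:

    ```python
    # Hypothetical CNN sketch for BI-RADS density classification (not TRACE4BDensity).
    import torch
    import torch.nn as nn

    class DensityCNN(nn.Module):
        def __init__(self, n_classes: int = 4):          # four classes: BI-RADS A, B, C, D
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                             # x: (batch, 1, H, W) grayscale mammograms
            return self.classifier(self.features(x).flatten(1))

    model = DensityCNN()
    dummy = torch.randn(2, 1, 256, 256)                   # two fake single-channel images
    print(model(dummy).shape)                             # torch.Size([2, 4]); A+B = low, C+D = high density
    ```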
    External validation of the tool was performed by the three radiologists closest to the consensus on a dataset of 384 mammographic images obtained from a different center.
    TRACE4BDensity showed 89% accuracy in distinguishing between low density (BI-RADS categories A and B) and high density (BI-RADS categories C and D) breast tissue, with an agreement of 90% between the tool and the three readers. All disagreements were in adjacent BI-RADS categories.
    “The particular value of this tool is the possibility to overcome the suboptimal reproducibility of visual human density classification that limits its practical usability,” said study co-author Sergio Papa, M.D., from the Centro Diagnostico Italiano in Milan, Italy. “To have a robust tool that proposes the density assignment in a standardized fashion may help a lot in decision-making.”
    Such a tool would be particularly valuable, the researchers said, as breast cancer screening becomes more personalized, with density assessment accounting for one important factor in risk stratification.
    “A tool such as TRACE4BDensity can help us advise women with dense breasts to have, after a negative mammogram, supplemental screening with ultrasound, MRI or contrast-enhanced mammography,” said study co-author Francesco Sardanelli, M.D., from the IRCCS Policlinico San Donato in San Donato, Italy.
    The researchers plan additional studies to better understand the full capabilities of the software.
    “We would like to further assess the AI tool TRACE4BDensity, particularly in countries where regulations on women density is not active, by evaluating the usefulness of such tool for radiologists and patients,” said study co-author Christian Salvatore, Ph.D., senior researcher, University School for Advanced Studies IUSS Pavia and co-founder and chief executive officer of DeepTrace Technologies.
    The study is titled “Development and Validation of an AI-driven Mammographic Breast Density Classification Tool Based on Radiologist Consensus.” Collaborating with Drs. Papa, Sardanelli and Salvatore were Veronica Magni, M.D., Matteo Interlenghi, M.Sc., Andrea Cozzi, M.D., Marco Alì, Ph.D., Alcide A. Azzena, M.D., Davide Capra, M.D., Serena Carriero, M.D., Gianmarco Della Pepa, M.D., Deborah Fazzini, M.D., Giuseppe Granata, M.D., Caterina B. Monti, M.D., Ph.D., Giulia Muscogiuri, M.D., Giuseppe Pellegrino, M.D., Simone Schiaffino, M.D., and Isabella Castiglioni, M.Sc., M.B.A.

  • Mathematical paradoxes demonstrate the limits of AI

    Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.
    Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realise when it’s making a mistake than to produce a correct result.
    Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
    The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain specific conditions. Their results are reported in the Proceedings of the National Academy of Sciences.
    Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.
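    A small, generic demonstration of what being “easy to fool” means in practice (this is a standard gradient-based perturbation demo, not the construction from the PNAS paper): a tiny, worst-case change to an input near the decision boundary can flip a trained network’s prediction even though the input barely changes.

    ```python
    # Hypothetical instability demo: a small input perturbation flips a prediction.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

    # Briefly train on a toy two-class problem (label = which side of the y-axis).
    X = torch.randn(512, 2)
    y = (X[:, 0] > 0).long()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(net(X), y).backward()
        opt.step()

    x = torch.tensor([[0.05, 0.30]], requires_grad=True)  # a point near the decision boundary
    pred = net(x).argmax(dim=1)
    nn.functional.cross_entropy(net(x), pred).backward()  # gradient of the loss w.r.t. the input
    x_adv = x + 0.1 * x.grad.sign()                       # small worst-case perturbation

    print("original prediction :", pred.item())
    print("perturbed prediction:", net(x_adv).argmax(dim=1).item())  # typically flips
    ```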
    “Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority.”
    The paradox identified by the researchers traces back to two 20th century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.

  • Public transport: AI assesses resilience of timetables

    A brief traffic jam, a stuck door, or many passengers getting on and off at a stop — even small delays in the timetables of trains and buses can lead to major problems. A new artificial intelligence (AI) system could help design schedules that are less susceptible to such minor disruptions. It was developed by a team from Martin Luther University Halle-Wittenberg (MLU), the Fraunhofer Institute for Industrial Mathematics ITWM and the University of Kaiserslautern. The study was published in “Transportation Research Part C: Emerging Technologies.”
    The team was looking for an efficient way to test how well timetables can compensate for minor, unavoidable disruptions and delays. In technical terms, this is called robustness. Until now, such timetable optimisations have required elaborate computer simulations that calculate the routes of a large number of passengers under different scenarios. A single simulation can easily take several minutes of computing time. However, many thousands of such simulations are needed to optimise timetables. “Our new method enables a timetable’s robustness to be very accurately estimated within milliseconds,” says Professor Matthias Müller-Hannemann from the Institute of Computer Science at MLU. The researchers from Halle and Kaiserslautern used numerous methods for evaluating timetables in order to train their artificial intelligence. The team tested the new AI using timetables for Göttingen and part of southern Lower Saxony and achieved very good results.
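    The press release does not spell out the model itself, but the underlying surrogate idea can be sketched as follows: train a fast regressor on pairs of timetable features and simulated robustness scores, then use it to score new timetables in milliseconds instead of minutes. The features, the robustness measure, and the model choice below are assumptions, not the MLU/ITWM setup:

    ```python
    # Hypothetical surrogate model: timetable features -> predicted robustness score.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)

    # Synthetic "timetables": mean transfer buffer, mean dwell time, terminal buffer (minutes).
    X = rng.uniform([1.0, 0.5, 2.0], [8.0, 3.0, 15.0], size=(2000, 3))
    # Pretend robustness score from an expensive passenger simulation (higher = more robust).
    y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, 2000)

    surrogate = RandomForestRegressor(n_estimators=100).fit(X[:1500], y[:1500])

    # Scoring new candidate timetables now takes milliseconds instead of full simulations.
    new_timetables = rng.uniform([1.0, 0.5, 2.0], [8.0, 3.0, 15.0], size=(5, 3))
    print(surrogate.predict(new_timetables))
    ```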
    “Delays are unavoidable. They happen, for example, when there is a traffic jam during rush hour, when a door of the train jams, or when a particularly large number of passengers get on or off at a stop,” Müller-Hannemann says. When transfers are tightly scheduled, even a few minutes of delay can lead to travellers missing their connections. “In the worst case, they miss the last connection of the day,” adds co-author Ralf Rückert. Another consequence is that vehicle rotations can be disrupted so that follow-on journeys begin with a delay and the problem continues to grow.
    There are limited ways to counteract such delays ahead of time: Travel times between stops and waiting times at stops could be more generously calculated, and larger time buffers could be planned at terminal stops and between subsequent trips. However, all this comes at the expense of economic efficiency. The new method could now help optimise timetables so that a very good balance can be achieved between passenger needs, such as fast connections and few transfers, timetable robustness against disruptions, and the external economic conditions of the transport companies.
    The study was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the framework of the research unit “Integrated Planning for Public Transport.”
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.

  • BirdBot is energy-efficient thanks to nature as a model

    If a Tyrannosaurus rex living 66 million years ago had a leg structure similar to that of an ostrich running in the savanna today, then we can assume bird legs have stood the test of time — a good example of evolutionary selection.
    Graceful, elegant, powerful — flightless birds like the ostrich are a mechanical wonder. Ostriches, some of which weigh over 100 kg, run through the savanna at up to 55 km/h. The ostrich’s outstanding locomotor performance is thought to be enabled by the animal’s leg structure. Unlike humans, birds fold their feet back when pulling their legs up towards their bodies. Why do the animals do this? Why is this foot movement pattern energy-efficient for walking and running? And can the bird’s leg structure, with all its bones, muscles, and tendons, be transferred to walking robots?
    Alexander Badri-Spröwitz has spent more than five years on these questions. At the Max Planck Institute for Intelligent Systems (MPI-IS), he leads the Dynamic Locomotion Group. His team works at the interface between biology and robotics in the field of biomechanics and neurocontrol. The dynamic locomotion of animals and robots is the group’s main focus.
    Together with his doctoral student Alborz Aghamaleki Sarvestani, Badri-Spröwitz has constructed a robot leg that, like its natural model, is energy-efficient: BirdBot needs fewer motors than other machines and could, theoretically, scale to large size. On March 16th, Badri-Spröwitz, Aghamaleki Sarvestani, the roboticist Metin Sitti, a director at MPI-IS, and biology professor Monica A. Daley of the University of California, Irvine, published their research in the journal Science Robotics.
    Compliant spring-tendon network made of muscles and tendons
    When walking, humans pull their feet up and bend their knees, but their feet and toes point forward almost unchanged. Birds are different — in the swing phase, they fold their feet backward. But what is the function of this motion? Badri-Spröwitz and his team attribute this movement to a mechanical coupling. “It’s not the nervous system, it’s not electrical impulses, it’s not muscle activity,” Badri-Spröwitz explains. “We hypothesized a new function of the foot-leg coupling through a network of muscles and tendons that extends across multiple joints. These multi-joint muscle-tendon structures coordinate foot folding in the swing phase. In our robot, we have implemented the coupled mechanics in the leg and foot, which enables energy-efficient and robust robot walking. Our results demonstrating this mechanism in a robot lead us to believe that similar efficiency benefits also hold true for birds,” he explains.
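    A deliberately simplified way to picture such a coupling (not BirdBot's actual geometry): if an inextensible tendon wraps pulleys at two joints, flexing the upper joint forces the foot joint to fold by a proportional amount, with no motor or neural input. The pulley radii below are invented numbers:

    ```python
    # Hypothetical kinematic sketch of a passive multi-joint tendon coupling.
    import math

    R_KNEE, R_ANKLE = 0.02, 0.015        # effective pulley radii at each joint (m), assumed

    def coupled_foot_angle(knee_flexion_rad):
        """Foot-folding angle imposed by an inextensible tendon spanning both joints."""
        excursion = R_KNEE * knee_flexion_rad      # tendon length taken up at the knee
        return excursion / R_ANKLE                 # the foot joint must rotate to pay it out

    for knee_deg in (0, 30, 60, 90):               # swing-phase leg flexion
        foot_deg = math.degrees(coupled_foot_angle(math.radians(knee_deg)))
        print(f"knee flexed {knee_deg:3d} deg -> foot folds {foot_deg:5.1f} deg")
    ```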

  • Scientists devise new technique to increase chip yield from semiconductor wafer

    Scientists from the Nanyang Technological University, Singapore (NTU Singapore) and the Korea Institute of Machinery & Materials (KIMM) have developed a technique to create a highly uniform and scalable semiconductor wafer, paving the way to higher chip yield and more cost-efficient semiconductors.
    Semiconductor chips commonly found in smartphones and computers are difficult and complex to make, requiring highly advanced machines and special environments to manufacture.
    Chips are typically fabricated on silicon wafers, which are then diced into the small chips used in devices. However, the process is imperfect, and not all chips from the same wafer work or operate as desired. These defective chips are discarded, lowering semiconductor yield while increasing production cost.
    The ability to produce uniform wafers at the desired thickness is the most important factor in ensuring that every chip fabricated on the same wafer performs correctly.
    Nanotransfer-based printing — a process that uses a polymer mould to print metal onto a substrate through pressure, or ‘stamping’ — has gained traction in recent years as a promising technology for its simplicity, relative cost-effectiveness, and high throughput.
    However, the technique uses a chemical adhesive layer, which causes negative effects, such as surface defects and performance degradation when printed at scale, as well as human health hazards. For these reasons, mass adoption of the technology and consequent chip application in devices has been limited.