More stories

  • Making memory serve correctly: Fixing an inherent problem in next-generation magnetic RAM

    With the advent of the Internet of Things (IoT) era, many researchers are focused on making most of the technologies involved more sustainable. To reach this target of ‘green IoT,’ some of the building blocks of conventional electronics will have to be improved or radically changed to make them not only faster, but also more energy efficient. In line with this reasoning, many scientists worldwide are currently trying to develop and commercialize a new type of random-access memory (RAM) that will enable ultra-low-power electronics: magnetic RAMs.
    Each memory cell in a magnetic RAM stores either a ‘1’ or a ‘0’ depending on whether the magnetic orientations of its two magnetic layers are equal or opposite to each other. Various types of magnetic RAM exist, and they mainly differ in how they modify the magnetic orientation of the magnetic layers when writing to a memory cell. In particular, spin-transfer torque RAM, or STT-RAM, is one type of magnetic memory that is already being commercialized. However, to achieve even lower write currents and higher reliability, a new type of magnetic memory called spin-orbit torque RAM (SOT-RAM) is being actively researched.
    In SOT-RAM, by leveraging spin-orbit interactions, the write current can be immensely reduced, which lowers power consumption. Moreover, since the memory readout and write current paths are different, researchers initially thought that the potential disturbances on the stored values would also be small when either reading or writing. Unfortunately, this turned out not to be the case.
    In 2017, in a study led by Professor Takayuki Kawahara of Tokyo University of Science, Japan, researchers reported that SOT-RAMs face an additional source of disturbance when reading a stored value. In conventional SOT-RAMs, the readout current actually shares part of the path of the write current. When reading a value, the readout operation generates unbalanced spin currents due to the spin Hall effect. This can unintentionally flip the stored bit if the effect is large enough, making reading in SOT-RAMs less reliable.
    To address this problem, Prof. Kawahara and colleagues conducted another study, which was recently published in IEEE Transactions on Magnetics. The team came up with a new reading method for SOT-RAMs that can nullify this new source of readout disturbance. In short, their idea is to alter the original SOT-RAM structure to create a bi-directional read path. When reading a value, the read current flows out of the magnetic layers in two opposite directions simultaneously. In turn, the disturbances produced by the spin currents generated on each side end up cancelling each other out. An explainer video on the same topic can be watched here: https://youtu.be/Gbz4rDOs4yQ.
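    To make the cancellation intuitive, here is a minimal numerical sketch in Python. It assumes, purely for illustration, that the readout disturbance is proportional to the signed current through the spin-orbit layer; it is not the team's micromagnetic model.

      # Toy illustration (assumed linear model, not the authors' simulation): the
      # readout disturbance is taken to be proportional to the signed charge current
      # flowing through the spin-orbit layer, via the spin Hall effect.
      def disturbance(currents, k=1.0):
          """Net disturbance from a list of signed read currents (arbitrary units)."""
          return k * sum(currents)

      i_read = 10.0  # hypothetical read current, microamps

      conventional = disturbance([i_read])             # the whole current flows one way
      bidirectional = disturbance([+i_read / 2,        # half flows in one direction ...
                                   -i_read / 2])       # ... half flows the opposite way

      print(conventional)   # 10.0 -> a finite disturbance acts on the stored bit
      print(bidirectional)  # 0.0  -> the two contributions cancel out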
    In addition to cementing the theory behind this new source of readout disturbance, the researchers conducted a series of simulations to verify the effectiveness of their proposed method. They tested three different types of ferromagnetic materials for the magnetic layers and various device shapes. The results were very favorable, as Prof. Kawahara remarks: “We confirmed that the proposed method reduces the readout disturbance by at least 10 times for all material parameters and device geometries compared with the conventional read path in SOT-RAM.”
    To top things off, the research team checked the performance of their method in the type of realistic array structure that would be used in an actual SOT-RAM. This test is important because, in an array, the read paths are not perfectly balanced; the degree of imbalance depends on each memory cell’s position. The results show that a sufficient reduction in readout disturbance is possible even when about 1,000 memory cells are connected together. The team is now working towards improving their method to reach a higher number of integrated cells.
    This study could pave the way toward a new era in low-power electronics, from personal computers and portable devices to large-scale servers. Satisfied with what they have achieved, Prof. Kawahara remarks: “We expect next-generation SOT-RAMs to employ write currents an order of magnitude lower than current STT-RAMs, resulting in significant power savings. The results of our work will help solve one of the inherent problems of SOT-RAMs, which will be essential for their commercialization.” 
    Story Source:
    Materials provided by Tokyo University of Science.

  • AI provides accurate breast density classification

    An artificial intelligence (AI) tool can accurately and consistently classify breast density on mammograms, according to a study in Radiology: Artificial Intelligence.
    Breast density reflects the amount of fibroglandular tissue in the breast commonly seen on mammograms. High breast density is an independent breast cancer risk factor, and its masking of underlying lesions reduces the sensitivity of mammography. Consequently, many U.S. states have laws requiring that women with dense breasts be notified after a mammogram, so that they can choose to undergo supplementary tests to improve cancer detection.
    In clinical practice, breast density is visually assessed on two-view mammograms, most commonly with the American College of Radiology Breast Imaging-Reporting and Data System (BI-RADS) four-category scale, ranging from Category A for almost entirely fatty breasts to Category D for extremely dense breasts. The system has limitations, as visual classification is prone to inter-observer variability, or the differences in assessments between two or more people, and intra-observer variability, or the differences that appear in repeated assessments by the same person.
    To overcome this variability, researchers in Italy developed software for breast density classification based on deep learning with convolutional neural networks, a sophisticated type of AI that is capable of discerning subtle patterns in images beyond the capabilities of the human eye. The researchers trained the software, known as TRACE4BDensity, under the supervision of seven experienced radiologists who independently visually assessed 760 mammographic images.
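    As a rough idea of what such a classifier looks like, the sketch below defines a small four-category convolutional network in PyTorch. It is an illustrative architecture only; the actual design and training details of TRACE4BDensity are not described in this article.

      # Minimal four-class density CNN sketch (illustration only; not TRACE4BDensity).
      import torch
      import torch.nn as nn

      class DensityCNN(nn.Module):
          def __init__(self, n_classes=4):            # BI-RADS categories A-D
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),            # one value per feature channel
              )
              self.classifier = nn.Linear(64, n_classes)

          def forward(self, x):                       # x: (batch, 1, H, W) grayscale mammograms
              return self.classifier(self.features(x).flatten(1))

      model = DensityCNN()
      logits = model(torch.randn(2, 1, 256, 256))     # two dummy single-channel images
      print(logits.shape)                             # torch.Size([2, 4])

    Predicted categories A and B would then be grouped as ‘low density’ and C and D as ‘high density’ for the kind of binary comparison reported below.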
    External validation of the tool was performed by the three radiologists closest to the consensus on a dataset of 384 mammographic images obtained from a different center.
    TRACE4BDensity showed 89% accuracy in distinguishing between low density (BI-RADS categories A and B) and high density (BI-RADS categories C and D) breast tissue, with an agreement of 90% between the tool and the three readers. All disagreements were in adjacent BI-RADS categories.
    “The particular value of this tool is the possibility to overcome the suboptimal reproducibility of visual human density classification that limits its practical usability,” said study co-author Sergio Papa, M.D., from the Centro Diagnostico Italiano in Milan, Italy. “To have a robust tool that proposes the density assignment in a standardized fashion may help a lot in decision-making.”
    Such a tool would be particularly valuable, the researchers said, as breast cancer screening becomes more personalized, with density assessment serving as one important factor in risk stratification.
    “A tool such as TRACE4BDensity can help us advise women with dense breasts to have, after a negative mammogram, supplemental screening with ultrasound, MRI or contrast-enhanced mammography,” said study co-author Francesco Sardanelli, M.D., from the IRCCS Policlinico San Donato in San Donato, Italy.
    The researchers plan additional studies to better understand the full capabilities of the software.
    “We would like to further assess the AI tool TRACE4BDensity, particularly in countries where regulations on women density is not active, by evaluating the usefulness of such tool for radiologists and patients,” said study co-author Christian Salvatore, Ph.D., senior researcher, University School for Advanced Studies IUSS Pavia and co-founder and chief executive officer of DeepTrace Technologies.
    The study is titled “Development and Validation of an AI-driven Mammographic Breast Density Classification Tool Based on Radiologist Consensus.” Collaborating with Drs. Papa, Sardanelli and Salvatore were Veronica Magni, M.D., Matteo Interlenghi, M.Sc., Andrea Cozzi, M.D., Marco Alì, Ph.D., Alcide A. Azzena, M.D., Davide Capra, M.D., Serena Carriero, M.D., Gianmarco Della Pepa, M.D., Deborah Fazzini, M.D., Giuseppe Granata, M.D., Caterina B. Monti, M.D., Ph.D., Giulia Muscogiuri, M.D., Giuseppe Pellegrino, M.D., Simone Schiaffino, M.D., and Isabella Castiglioni, M.Sc., M.B.A.

  • Mathematical paradoxes demonstrate the limits of AI

    Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.
    Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realise when it’s making a mistake than to produce a correct result.
    Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
    The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain specific conditions. Their results are reported in the Proceedings of the National Academy of Sciences.
    Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.
    “Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority.”
    The paradox identified by the researchers traces back to two 20th-century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.
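    The computational side of this paradox is usually illustrated with Turing's halting problem, a standard textbook argument rather than anything specific to the PNAS paper. The Python sketch below assumes a hypothetical halts() oracle and shows why it cannot exist:

      # Classic diagonal argument (textbook illustration; not taken from the paper).
      def halts(f, x):
          """Hypothetical oracle claiming to decide whether f(x) ever halts."""
          raise NotImplementedError("no such algorithm can exist")

      def paradox(f):
          # Do the opposite of whatever the oracle predicts about f applied to itself.
          if halts(f, f):
              while True:      # loop forever exactly when the oracle says f(f) halts
                  pass
          return None          # halt exactly when the oracle says f(f) loops forever

      # Feeding paradox to itself is contradictory: paradox(paradox) halts if and only
      # if halts(paradox, paradox) says it does not, so no general halts() can exist.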

  • Public transport: AI assesses resilience of timetables

    A brief traffic jam, a stuck door, or many passengers getting on and off at a stop — even small delays in the timetables of trains and buses can lead to major problems. A new artificial intelligence (AI) could help design schedules that are less susceptible to such minor disruptions. It was developed by a team from the Martin Luther University Halle-Wittenberg (MLU), the Fraunhofer Institute for Industrial Mathematics ITWM and the University of Kaiserslautern. The study was published in “Transportation Research Part C: Emerging Technologies.”
    The team was looking for an efficient way to test how well timetables can compensate for minor, unavoidable disruptions and delays. In technical terms, this is called robustness. Until now, such timetable optimisations have required elaborate computer simulations that calculate the routes of a large number of passengers under different scenarios. A single simulation can easily take several minutes of computing time. However, many thousands of such simulations are needed to optimise timetables. “Our new method enables a timetable’s robustness to be very accurately estimated within milliseconds,” says Professor Matthias Müller-Hannemann from the Institute of Computer Science at MLU. The researchers from Halle and Kaiserslautern used numerous methods for evaluating timetables in order to train their artificial intelligence. The team tested the new AI using timetables for Göttingen and part of southern Lower Saxony and achieved very good results.
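    The underlying surrogate idea can be sketched as follows, using made-up timetable features and a stand-in for the simulated robustness score; the authors' actual features and learning model are not specified in this summary.

      # Illustrative surrogate model (hypothetical features; not the authors' exact method):
      # learn to predict a simulated robustness score directly from timetable properties,
      # so new timetable variants can be scored in milliseconds instead of minutes.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      # Assumed features per timetable: mean transfer buffer, mean dwell time,
      # share of tight transfers, terminal turnaround buffer (minutes / fractions).
      X = rng.uniform(0, 1, size=(5000, 4))
      # Stand-in for the expensive simulation output: a robustness score.
      y = 0.5 * X[:, 0] + 0.2 * X[:, 1] - 0.3 * X[:, 2] + 0.1 * rng.normal(size=5000)

      surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      candidate = [[0.4, 0.6, 0.1, 0.8]]            # a new timetable variant's features
      print(surrogate.predict(candidate))           # near-instant robustness estimate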
    “Delays are unavoidable. They happen, for example, when there is a traffic jam during rush hour, when a door of the train jams, or when a particularly large number of passengers get on or off at a stop,” Müller-Hannemann says. When transfers are tightly scheduled, even a few minutes of delay can lead to travellers missing their connections. “In the worst case, they miss the last connection of the day,” adds co-author Ralf Rückert. Another consequence is that vehicle rotations can be disrupted so that follow-on journeys begin with a delay and the problem continues to grow.
    There are limited ways to counteract such delays ahead of time: Travel times between stops and waiting times at stops could be more generously calculated, and larger time buffers could be planned at terminal stops and between subsequent trips. However, all this comes at the expense of economic efficiency. The new method could now help optimise timetables so that a very good balance can be achieved between passenger needs, such as fast connections and few transfers, timetable robustness against disruptions, and the external economic conditions of the transport companies.
    The study was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the framework of the research unit “Integrated Planning for Public Transport.”
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg.

  • BirdBot is energy-efficient thanks to nature as a model

    If a Tyrannosaurus rex living 66 million years ago had a leg structure similar to that of an ostrich running in the savanna today, then we can assume that bird legs have stood the test of time — a good example of evolutionary selection.
    Graceful, elegant, powerful — flightless birds like the ostrich are a mechanical wonder. Ostriches, some of which weigh over 100 kg, run through the savanna at up to 55 km/h. The ostrich’s outstanding locomotor performance is thought to be enabled by the animal’s leg structure. Unlike humans, birds fold their feet back when pulling their legs up towards their bodies. Why do the animals do this? Why is this foot movement pattern energy-efficient for walking and running? And can the bird’s leg structure, with all its bones, muscles, and tendons, be transferred to walking robots?
    Alexander Badri-Spröwitz has spent more than five years on these questions. At the Max Planck Institute for Intelligent Systems (MPI-IS), he leads the Dynamic Locomotion Group. His team works at the interface between biology and robotics in the field of biomechanics and neurocontrol. The dynamic locomotion of animals and robots is the group’s main focus.
    Together with his doctoral student Alborz Aghamaleki Sarvestani, Badri-Spröwitz has constructed a robot leg that, like its natural model, is energy-efficient: BirdBot needs fewer motors than other machines and could, theoretically, scale to large size. On March 16th, Badri-Spröwitz, Aghamaleki Sarvestani, the roboticist Metin Sitti, a director at MPI-IS, and biology professor Monica A. Daley of the University of California, Irvine, published their research in the journal Science Robotics.
    Compliant spring-tendon network made of muscles and tendons
    When walking, humans pull their feet up and bend their knees, but their feet and toes point forward almost unchanged. It is known that birds are different — in the swing phase, they fold their feet backward. But what is the function of this motion? Badri-Spröwitz and his team attribute this movement to a mechanical coupling. “It’s not the nervous system, it’s not electrical impulses, it’s not muscle activity,” Badri-Spröwitz explains. “We hypothesized a new function of the foot-leg coupling through a network of muscles and tendons that extends across multiple joints. These multi-joint muscle-tendon structures coordinate foot folding in the swing phase. In our robot, we have implemented the coupled mechanics in the leg and foot, which enables energy-efficient and robust robot walking. Our results demonstrating this mechanism in a robot lead us to believe that similar efficiency benefits also hold true for birds,” he explains.
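    A rough way to picture such a coupling is an inextensible tendon running over idealized pulleys at two joints, so that flexing the knee necessarily folds the foot. The sketch below uses assumed pulley radii and is a simplified textbook model, not the actual BirdBot mechanism.

      # Simplified two-joint pulley model of a multi-joint tendon (assumed radii;
      # illustration only, not the BirdBot design). An inextensible tendon over
      # pulleys of radius r_knee and r_ankle enforces:
      #     r_knee * d(theta_knee) + r_ankle * d(theta_ankle) = 0
      r_knee = 0.020    # metres, assumed effective pulley radius at the knee
      r_ankle = 0.010   # metres, assumed effective pulley radius at the ankle

      def foot_fold(delta_knee_deg):
          """Foot-fold angle forced by a given knee flexion (degrees)."""
          return -(r_knee / r_ankle) * delta_knee_deg

      for knee in (0, 10, 20, 30):                    # knee flexion during the swing phase
          print(knee, "deg knee flexion ->", foot_fold(knee), "deg foot fold")
      # The foot folds back automatically as the knee flexes; no motor command or
      # neural signal is needed for this part of the motion.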

  • Scientists devise new technique to increase chip yield from semiconductor wafer

    Scientists from the Nanyang Technological University, Singapore (NTU Singapore) and the Korea Institute of Machinery & Materials (KIMM) have developed a technique to create a highly uniform and scalable semiconductor wafer, paving the way to higher chip yield and more cost-efficient semiconductors.
    Semiconductor chips commonly found in smartphones and computers are difficult and complex to make, requiring highly advanced machines and special environments to manufacture.
    Chips are typically fabricated on silicon wafers, which are then diced into the small chips used in devices. However, the process is imperfect, and not all chips from the same wafer work or operate as desired. These defective chips are discarded, lowering semiconductor yield while increasing production cost.
    The ability to produce uniform wafers at the desired thickness is the most important factor in ensuring that every chip fabricated on the same wafer performs correctly.
    Nanotransfer-based printing — a process that uses a polymer mould to print metal onto a substrate through pressure, or ‘stamping’ — has gained traction in recent years as a promising technology for its simplicity, relative cost-effectiveness, and high throughput.
    However, the technique uses a chemical adhesive layer, which causes negative effects, such as surface defects and performance degradation when printed at scale, and also poses human health hazards. For these reasons, mass adoption of the technology, and the consequent application of such chips in devices, has been limited.

  • What's the prevailing opinion on social media? Look at the flocks, says researcher

    A University at Buffalo communication researcher has developed a framework for measuring the slippery concept of social media public opinion.
    These collective views on a topic or issue expressed on social media, distinct from the conclusions determined through survey-based public opinion polling, have never been easy to measure. But the “murmuration” framework developed and tested by Yini Zhang, PhD, an assistant professor of communication in the UB College of Arts and Sciences, and her collaborators addresses challenges, like identifying online demographics and accounting for opinion manipulation, that are characteristic of these digital battlegrounds of public discourse.
    Murmuration identifies meaningful groups of social media actors based on the “who-follows-whom” relationship. The actors attract like-minded followers to form “flocks,” which serve as the units of analysis. As opinions form and shift in response to external events, the flocks’ unfolding opinions move like the fluid murmuration of airborne starlings.
    The framework and the findings from an analysis of social network structure and opinion expression from over 193,000 Twitter accounts, which followed more than 1.3 million other accounts, suggest that flock membership can predict opinion and that the murmuration framework reveals distinct patterns of opinion intensity. The researchers studied Twitter because of the ability to see who is following whom, information that is not publicly accessible on other platforms.
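    As a toy illustration of grouping accounts by the who-follows-whom relationship, the sketch below runs generic community detection on a small made-up follower graph; the murmuration framework itself is more specific than this, and all account names are hypothetical.

      # Toy flock detection on a made-up follower graph (illustration only; the paper's
      # murmuration framework is more specific than generic community detection).
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Edges mean "follower -> followed account" (all names are hypothetical).
      follows = [
          ("u1", "newsA"), ("u1", "pundit1"), ("u2", "newsA"), ("u2", "pundit1"),
          ("u3", "newsB"), ("u3", "pundit2"), ("u4", "newsB"), ("u4", "pundit2"),
          ("u5", "newsA"), ("u5", "newsB"),
      ]
      G = nx.DiGraph(follows)

      # Accounts that share many of the same followers end up in the same "flock".
      flocks = greedy_modularity_communities(G.to_undirected())
      for i, flock in enumerate(flocks):
          print(f"flock {i}:", sorted(flock))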
    The results, published in the Journal of Computer-Mediated Communication, further support the echo chamber tendencies prevalent on social media, while adding important nuance to existing knowledge.
    “By identifying different flocks and examining the intensity, temporal pattern and content of their expression, we can gain deeper insights far beyond where liberals and conservatives stand on a certain issue,” says Zhang, an expert in social media and political communication. “These flocks are segments of the population, defined not by demographic variables of questionable salience, like white women aged 18-29, but by their online connections and response to events.
    “As such, we can observe opinion variations within an ideological camp and opinions of people that might not be typically assumed to have an opinion on certain issues. We see the flocks as naturally occurring, responding to things as they happen, in ways that take a conversational element into consideration.”
    Zhang says it’s important not to confuse public opinion, as measured by survey-based polling methods, and social media public opinion.
    “Arguably, social media public opinion is twice removed from the general public opinion measured by surveys,” says Zhang. “First, not everyone uses social media. Second, among those who do, only a subset of them actually express opinions on social media. They tend to be strongly opinionated and thus more willing to express their views publicly.”
    Murmuration offers insights that can complement information gathered through survey-based polling. It also moves away from mining social media for text from specific tweets. Murmuration takes full advantage of social media’s dynamic nature: when text is removed from its context, it becomes difficult to accurately answer questions about what led to the discussion, when it began, and how it evolved over time.
    “Murmuration can allow for research that makes better use of social media data to study public opinion as a form of social interaction and reveal underlying social dynamics,” says Zhang.
    Story Source:
    Materials provided by University at Buffalo. Original written by Bert Gambini.

  • Pivotal technique harnesses cutting-edge AI capabilities to model and map the natural environment

    Scientists have developed a pioneering new technique that harnesses the cutting-edge capabilities of AI to model and map the natural environment in intricate detail.
    A team of experts, including Charlie Kirkwood from the University of Exeter, has created a sophisticated new approach to modelling the Earth’s natural features with greater detail and accuracy.
    The new technique can recognise intricate features and aspects of the terrain far beyond the capabilities of more traditional methods and use these to generate enhanced-quality environmental maps.
    Crucially, the new system could also pave the way to new discoveries about the relationships within the natural environment that may help tackle some of the greatest climate and environmental issues of the 21st century.
    The study is published in the leading journal Mathematical Geosciences, as part of a special issue on geostatistics and machine learning.
    Modelling and mapping the environment is a lengthy, time-consuming and expensive process. Cost limits the number of observations that can be obtained, which means that creating comprehensive, spatially continuous maps depends upon filling in the gaps between these observations.
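    For context, the classical geostatistical way to fill those gaps is kriging-style interpolation. The sketch below fits a Gaussian process to synthetic observation points as a baseline illustration; it is not the deep-learning approach developed in the paper.

      # Baseline gap-filling illustration (synthetic data; not the paper's method):
      # interpolate a sparsely observed field over a grid with a Gaussian process.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(42)
      coords = rng.uniform(0, 10, size=(80, 2))              # 80 scattered sample locations
      values = (np.sin(coords[:, 0]) + 0.1 * coords[:, 1]
                + 0.05 * rng.normal(size=80))                # stand-in environmental variable

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.01))
      gp.fit(coords, values)

      # Predict on a regular grid to produce a continuous map, with uncertainty estimates.
      gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      mean, std = gp.predict(grid, return_std=True)
      print(mean.shape, std.shape)                           # (2500,) (2500,)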