More stories

  • Want to wait less at the bus stop? Beware real-time updates

    Smartphone apps that tell commuters when a bus will arrive at a stop don’t result in less time waiting than reliance on an official bus route schedule, a new study suggests.
    In fact, people who followed the suggestions of transit apps to time their arrival for when the bus pulls up to the stop were likely to miss the bus about three-fourths of the time, results showed.
    “Following what transit apps tell you about when to leave your home or office for the bus stop is a risky strategy,” said Luyu Liu, lead author of the study and a doctoral student in geography at The Ohio State University.
    “The app may tell you the bus will be five minutes late, but drivers can make up time after you start walking, and you end up missing the bus.”
    The best choice on average for bus commuters is to refer to the official schedule, or at least build in extra time when using the app’s suggestions, according to the researchers.
    Liu conducted the study with Harvey Miller, professor of geography and director of Ohio State’s Center for Urban and Regional Analysis. The study was published recently online in the journal Transportation Research Part A.

    “We’re not saying that real-time bus information is bad. It is reassuring to know that a bus is coming,” Miller said.
    “But if you’re going to use these apps, you have to know how to use them and realize it still won’t be better on average than following the schedule.”
    For the study, the researchers analyzed bus traffic for one year (May 2018 to May 2019) on one route of the Central Ohio Transit Authority (COTA), the public bus system in Columbus.
    Liu and Miller used the same real-time data that publicly available apps use to tell riders where buses are and when they are likely to reach individual stops. They compared the real-time data predictions of when buses would arrive at stops to when buses actually arrived for a popular bus route that traverses a large part of the city. The researchers then calculated the average time commuters would wait at a stop if they used different tactics to time their arrival, including just following the bus schedule.
    The absolute worst way to catch the bus was using what the researchers called the “greedy tactic” — the one used by many transit apps — in which commuters timed their arrival at the stop to when the app said the bus would pull up.

    The average wait using the greedy tactic was about 12½ minutes — about three times longer than simply following the schedule. That’s because riders using this tactic are at high risk of missing the bus, researchers found.
    The app tells riders when the bus will arrive based on where it is and how fast it is traveling when a commuter checks it, Miller said.
    But there are two problems with that method, he said. For one, drivers can make up lost time.
    “COTA wants to deliver on-time service, so bus operators understandably will try to get back on schedule,” Miller said.
    Plus, the apps don’t check the bus location often enough to get accurate real-time information.
    Slightly better was the “arbitrary tactic,” in which a person just walked up to a stop at a random time and caught the next bus that arrived. Commuters using this tactic would wait on average about 8½ minutes for the next bus.
    The second-best tactic was what the researchers called the “prudent tactic,” which was using the app to plan for arrival at the stop but adding some time as an “insurance buffer.” Here the average wait time was four minutes and 42 seconds, with a 10 percent risk of missing the bus.
    The prudent tactic waiting time was similar to the “schedule tactic,” which is just using the public schedule to determine when to arrive at the stop. These commuters waited an average of four minutes and 12 seconds, with only a 6 percent risk of missing the bus.
    There is some variation on waiting time within these averages, especially with the two tactics that use real-time information from apps. One of the most important factors is the length of a commuter’s walk to the bus stop.
    Those who have longer walks take more risks when they rely on real-time information. If the app tells commuters their bus is running late, a long walk gives the bus more time to speed up to get back on schedule.
    Another important factor is the length of time between buses arriving at a stop. A longer time between buses means more risk if you miss a bus, and results in more time waiting.
    While on average the schedule tactic worked best, there were minor exceptions.
    Results showed that it was generally better for work commuters to follow the schedule tactic in the morning when going to work and follow the prudent tactic using an app in the afternoon.
    But one thing was certain, the researchers said: It was never a good idea to be greedy and try to achieve no waiting at the bus stop.
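    The tactics above can be compared with a short simulation. The following is a minimal sketch, not the authors’ model: the headway, the delay-and-recovery behaviour, and the prudent tactic’s buffer are all made-up parameters, so the numbers it prints will not match the study’s figures; it only illustrates how the greedy tactic’s miss risk arises when drivers make up time.

    ```python
    import random

    random.seed(0)

    HEADWAY = 15 * 60   # assumed scheduled gap between buses, in seconds
    BUFFER = 5 * 60     # assumed "insurance buffer" for the prudent tactic, in seconds
    N_TRIPS = 20_000

    def simulate(tactic):
        total_wait, misses = 0.0, 0
        for _ in range(N_TRIPS):
            scheduled = 0.0
            app_delay = random.uniform(0, 8 * 60)   # delay shown when the commuter checks the app
            # Assume the driver makes up part of that delay on three out of four trips.
            recovered = random.uniform(0, app_delay) if random.random() < 0.75 else 0.0
            departure = scheduled + app_delay - recovered   # when the bus actually leaves

            if tactic == "schedule":    # arrive at the scheduled time
                arrival = scheduled
            elif tactic == "greedy":    # arrive exactly when the app predicted the bus would pull up
                arrival = scheduled + app_delay
            elif tactic == "prudent":   # app prediction minus an insurance buffer
                arrival = max(scheduled, scheduled + app_delay - BUFFER)
            else:                       # "arbitrary": show up at a random moment
                arrival = random.uniform(0, HEADWAY)

            if arrival > departure:     # missed it; wait for the next bus (assumed on time here)
                misses += 1
                departure = scheduled + HEADWAY
            total_wait += departure - arrival

        return total_wait / N_TRIPS / 60, misses / N_TRIPS

    for tactic in ("greedy", "arbitrary", "prudent", "schedule"):
        wait, miss = simulate(tactic)
        print(f"{tactic:>9}: average wait {wait:4.1f} min, missed {miss:4.0%} of buses")
    ```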
    Waiting time for buses is an important issue, Miller said. For one, long waiting times are among the top reasons people cite for not using public transportation.
    It is also a safety concern: waiting for long periods at stops, especially at night, or rushing across busy streets because you are late for a bus puts people at risk. And for many people, missing buses can jeopardize their jobs or important health care appointments, Miller said.
    Miller said the apps themselves could be more helpful by taking advantage of the data used in this study to make better recommendations.
    “These apps shouldn’t be pushing risky strategies on users for eliminating waiting time. They should be more sophisticated,” he said.

  • New deep learning models: Fewer neurons, more intelligence

    Artificial intelligence has arrived in our everyday lives — from search engines to self-driving cars. This has to do with the enormous computing power that has become available in recent years. But new results from AI research now show that simpler, smaller neural networks can be used to solve certain tasks even better, more efficiently, and more reliably than ever before.
    An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons. The team says the system has decisive advantages over previous deep learning models: It copes much better with noisy input, and, because of its simplicity, its mode of operation can be explained in detail. It does not have to be regarded as a complex “black box”; it can be understood by humans. This new deep learning model has now been published in the journal Nature Machine Intelligence.
    Learning from nature
    Similar to living brains, artificial neural networks consist of many individual cells. When a cell is active, it sends a signal to other cells. All signals received by the next cell are combined to decide whether this cell will become active as well. The way in which one cell influences the activity of the next determines the behavior of the system — these parameters are adjusted in an automatic learning process until the neural network can solve a specific task.
    “For years, we have been investigating what we can learn from nature to improve deep learning,” says Prof. Radu Grosu, head of the research group “Cyber-Physical Systems” at TU Wien. “The nematode C. elegans, for example, lives its life with an amazingly small number of neurons, and still shows interesting behavioral patterns. This is due to the efficient and harmonious way the nematode’s nervous system processes information.”
    “Nature shows us that there is still lots of room for improvement,” says Prof. Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “Therefore, our goal was to massively reduce complexity and enhance interpretability of neural network models.”
    “Inspired by nature, we developed new mathematical models of neurons and synapses,” says Prof. Thomas Henzinger, president of IST Austria.

    “The processing of the signals within the individual cells follows different mathematical principles than previous deep learning models,” says Dr. Ramin Hasani, postdoctoral associate at the Institute of Computer Engineering, TU Wien and MIT CSAIL. “Also, our networks are highly sparse — this means that not every cell is connected to every other cell. This also makes the network simpler.”
    Autonomous Lane Keeping
    To test the new ideas, the team chose a particularly important test task: self-driving cars staying in their lane. The neural network receives camera images of the road as input and must automatically decide whether to steer to the right or to the left.
    “Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving,” says Mathias Lechner, TU Wien alumnus and PhD student at IST Austria. “However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters.”
    Alexander Amini, a PhD student at MIT CSAIL, explains that the new system consists of two parts: The camera input is first processed by a so-called convolutional neural network, which perceives the visual data and extracts structural features from the incoming pixels. This network decides which parts of the camera image are interesting and important, and then passes signals to the crucial part of the network — a “control system” that steers the vehicle.
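    To make that two-part structure concrete, here is a minimal sketch in plain Keras. It is not the authors’ network: the frame size, layer sizes and the use of a standard recurrent layer in place of their LTC/NCP cells are assumptions for illustration only; the real implementation is in the keras-ncp repository linked at the end of this story.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers

    # A sequence of camera frames (the 66x200x3 frame size is an assumption).
    frames = tf.keras.Input(shape=(None, 66, 200, 3))

    # Part 1, perception: a convolutional feature extractor applied to every frame.
    perception = tf.keras.Sequential([
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 3, strides=2, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(32, activation="relu"),
    ])
    features = layers.TimeDistributed(perception)(frames)

    # Part 2, control: a deliberately tiny recurrent network (19 units, echoing the
    # 19-neuron NCP described below) maps the features to a steering command per frame.
    control = layers.SimpleRNN(19, return_sequences=True)(features)
    steering = layers.Dense(1, activation="tanh")(control)

    model = tf.keras.Model(frames, steering)
    # Imitation learning: regress the steering angles recorded from human drivers.
    model.compile(optimizer="adam", loss="mse")
    model.summary()
    ```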

    Both subsystems are stacked together and are trained simultaneously. Many hours of traffic videos of human driving in the greater Boston area were collected and fed into the network, together with information on how to steer the car in any given situation — until the system had learned to automatically connect images with the appropriate steering direction and could independently handle new situations.
    The control part of the system (called a neural circuit policy, or NCP), which translates the data from the perception module into a steering command, consists of only 19 neurons. Mathias Lechner explains that NCPs are up to three orders of magnitude smaller than what would have been possible with previous state-of-the-art models.
    Causality and Interpretability
    The new deep learning model was tested on a real autonomous vehicle. “Our model allows us to investigate what the network focuses its attention on while driving. Our networks focus on very specific parts of the camera picture: The curbside and the horizon. This behavior is highly desirable, and it is unique among artificial intelligence systems,” says Ramin Hasani. “Moreover, we saw that the role of every single cell in any driving decision can be identified. We can understand the function of individual cells and their behavior. Achieving this degree of interpretability is impossible for larger deep learning models.”
    Robustness
    “To test how robust NCPs are compared to previous deep models, we perturbed the input images and evaluated how well the agents can deal with the noise,” says Mathias Lechner. “While this became an insurmountable problem for other deep neural networks, our NCPs demonstrated strong resistance to input artifacts. This attribute is a direct consequence of the novel neural model and the architecture.”
    “Interpretability and robustness are the two major advantages of our new model,” says Ramin Hasani. “But there is more: Using our new methods, we can also reduce training time and make it possible to implement AI in relatively simple systems. Our NCPs enable imitation learning in a wide range of possible applications, from automated work in warehouses to robot locomotion. The new findings open up important new perspectives for the AI community: The principles of computation in biological nervous systems can become a great resource for creating high-performance interpretable AI — as an alternative to the black-box machine learning systems we have used so far.”
    Code Repository: https://github.com/mlech26l/keras-ncp
    Video: https://ist.ac.at/en/news/new-deep-learning-models/

  • Software spots and fixes hang bugs in seconds, rather than weeks

    Hang bugs — when software gets stuck, but doesn’t crash — can frustrate both users and programmers, taking weeks for companies to identify and fix. Now researchers from North Carolina State University have developed software that can spot and fix the problems in seconds.
    “Many of us have experience with hang bugs — think of a time when you were on a website and the wheel just kept spinning and spinning,” says Helen Gu, co-author of a paper on the work and a professor of computer science at NC State. “Because these bugs don’t crash the program, they’re hard to detect. But they can frustrate or drive away customers and hurt a company’s bottom line.”
    With that in mind, Gu and her collaborators developed an automated program, called HangFix, that can detect hang bugs, diagnose the relevant problem, and apply a patch that corrects the root cause of the error. Video of Gu discussing the program can be found here.
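    As a rough illustration of the detection half of that pipeline (this is not HangFix, and the hanging function below is hypothetical), a simple watchdog can flag a call that never returns and dump the stuck thread’s stack so the offending code can be located:

    ```python
    import sys
    import threading
    import time
    import traceback

    def slow_lookup():
        """A simulated hang bug: an endless retry loop that never crashes."""
        while True:
            time.sleep(0.1)

    def run_with_hang_detection(fn, timeout=2.0):
        worker = threading.Thread(target=fn, daemon=True)
        worker.start()
        worker.join(timeout)
        if worker.is_alive():  # still running after the timeout: treat it as a hang
            frame = sys._current_frames()[worker.ident]
            print(f"HANG: {fn.__name__} exceeded {timeout}s; stuck at:")
            traceback.print_stack(frame)
        else:
            print(f"{fn.__name__} finished normally")

    run_with_hang_detection(slow_lookup)
    ```

    Localizing the stuck code is only the first step; HangFix additionally diagnoses the root cause and generates a corrective patch automatically.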
    The researchers tested a prototype of HangFix against 42 real-world hang bugs in 10 commonly used cloud server applications. The bugs were drawn from a database of hang bugs that programmers discovered affecting various websites. HangFix fixed 40 of the bugs in seconds.
    “The remaining two bugs were identified and partially fixed, but required additional input from programmers who had relevant domain knowledge of the application,” Gu says.
    For comparison, it took weeks or months to detect, diagnose and fix those hang bugs when they were first discovered.
    “We’re optimistic that this tool will make hang bugs less common — and websites less frustrating for many users,” Gu says. “We are working to integrate HangFix into InsightFinder.” InsightFinder is the AI-based IT operations and analytics startup founded by Gu.
    The paper, “HangFix: Automatically Fixing Software Hang Bugs for Production Cloud Systems,” is being presented at the ACM Symposium on Cloud Computing (SoCC’20), being held online Oct. 19-21. The paper was co-authored by Jingzhu He, a Ph.D. student at NC State who is nearing graduation; Ting Dai, a Ph.D. graduate of NC State who is now at IBM Research; and Guoliang Jin, an assistant professor of computer science at NC State.
    The work was done with support from the National Science Foundation under grants 1513942 and 1149445.
    HangFix is the latest in a long line of tools Gu’s team has developed to address cloud computing challenges. Her 2011 paper, “CloudScale: Elastic Resource Scaling for Multi-tenant Cloud Systems,” was selected as the winner of the 2020 SoCC 10-Year Award at this year’s conference.

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • Using robotic assistance to make colonoscopy kinder and easier

    Scientists have made a breakthrough in their work to develop semi-autonomous colonoscopy, using a robot to guide a medical device into the body.
    The milestone brings closer the prospect of an intelligent robotic system being able to guide instruments to precise locations in the body to take biopsies or allow internal tissues to be examined.
    A doctor or nurse would still be on hand to make clinical decisions but the demanding task of manipulating the device is offloaded to a robotic system.
    The latest findings — ‘Enabling the future of colonoscopy with intelligent and autonomous magnetic manipulation’ — are the culmination of 12 years of research by an international team of scientists led by the University of Leeds.
    The research is published today (Monday, 12 October) in the scientific journal Nature Machine Intelligence. 
    Patient trials using the system could begin next year or in early 2022.

    Pietro Valdastri, Professor of Robotics and Autonomous Systems at Leeds, is supervising the research. He said: “Colonoscopy gives doctors a window into the world hidden deep inside the human body and it provides a vital role in the screening of diseases such as colorectal cancer. But the technology has remained relatively unchanged for decades.
    “What we have developed is a system that is easier for doctors or nurses to operate and is less painful for patients. It marks an important step in the move to make colonoscopy much more widely available — essential if colorectal cancer is to be identified early.”
    Because the system is easier to use, the scientists hope this can increase the number of providers who can perform the procedure and allow for greater patient access to colonoscopy.
    A colonoscopy is a procedure to examine the rectum and colon. Conventional colonoscopy is carried out using a semi-flexible tube which is inserted into the anus, a process some patients find so painful they require an anaesthetic.
    Magnetic flexible colonoscope
    The research team has developed a smaller, capsule-shaped device which is tethered to a narrow cable and is inserted into the anus and then guided into place — not by the doctor or nurse pushing the colonoscope but by a magnet on a robotic arm positioned over the patient.

    The robotic arm moves around the patient as it manoeuvres the capsule. The system is based on the principle that magnetic forces attract and repel.
    The magnet on the outside of the patient interacts with tiny magnets in the capsule inside the body, navigating it through the colon. The researchers say it will be less painful than having a conventional colonoscopy.
    Guiding the robotic arm can be done manually but it is a technique that is difficult to master. In response, the researchers have developed different levels of robotic assistance. This latest research evaluated how effective the different levels of robotic assistance were in aiding non-specialist staff to carry out the procedure.
    Levels of robotic assistance
    Direct robot control. This is where the operator has direct control of the robot via a joystick. In this case, there is no assistance.
    Intelligent endoscope teleoperation. The operator focuses on where they want the capsule to be located in the colon, leaving the robotic system to calculate the movements of the robotic arm necessary to get the capsule into place.
    Semi-autonomous navigation. The robotic system autonomously navigates the capsule through the colon, using computer vision — although this can be overridden by the operator.
    During a laboratory simulation, 10 non-expert staff were asked to get the capsule to a specific point in the colon within 20 minutes. They did that five times, using the three different levels of assistance.
    Using direct robot control, the participants had a 58% success rate. That increased to 96% using intelligent endoscope teleoperation — and 100% using semi-autonomous navigation.
    In the next stage of the experiment, two participants were asked to navigate a conventional colonoscope into the colon of two anaesthetised pigs — and then to repeat the task with the magnet-controlled robotic system using the different levels of assistance. A vet was in attendance to ensure the animals were not harmed.
    The participants were scored on the NASA Task Load Index, a measure of how taxing a task was, both physically and mentally.
    The NASA Task Load Index revealed that they found it easier to operate the colonoscope with robotic assistance. Frustration was a major factor both when operating the conventional colonoscope and when participants had direct control of the robot.
    James Martin, a PhD researcher from the University of Leeds who co-led the study, said: “Operating the robotic arm is challenging. It is not very intuitive and that has put a brake on the development of magnetic flexible colonoscopes.
    “But we have demonstrated for the first time that it is possible to offload that function to the robotic system, leaving the operator to think about the clinical task they are undertaking — and it is making a measurable difference in human performance.”
    The techniques developed to conduct colonoscopy examinations could be applied to other endoscopic devices, such as those used to inspect the upper digestive tract or lungs.
    Dr Bruno Scaglioni, a Postdoctoral Research Fellow at Leeds and co-leader of the study, added: “Robot-assisted colonoscopy has the potential to revolutionize the way the procedure is carried out. It means people conducting the examination do not need to be experts in manipulating the device.
    “That will hopefully make the technique more widely available, where it could be offered in clinics and health centres rather than hospitals.”

  • Liquid metals come to the rescue of semiconductors

    Moore’s law is the empirical observation that the number of transistors in integrated circuits (ICs) doubles every few years. However, Moore’s law has started to fail: transistors are now so small that current silicon-based technologies are unable to offer further opportunities for shrinking.
    One possibility for overcoming Moore’s law is to resort to two-dimensional semiconductors. These two-dimensional materials are so thin that they allow the free charge carriers, namely the electrons and holes that carry information in transistors, to propagate along an ultra-thin plane. This confinement of charge carriers can potentially make the semiconductor very easy to switch. It also creates directional pathways along which the charge carriers can move without scattering, leading to vanishingly small resistance in the transistors.
    This means that, in theory, two-dimensional materials could yield transistors that do not waste energy during their on/off switching. Theoretically, they could switch very fast and switch off to absolute zero resistance in their non-operational states. Sounds ideal, but life is not ideal! In reality, there are still many technological barriers to surpass before such perfect ultra-thin semiconductors can be created. One barrier with current technologies is that the deposited ultra-thin films are full of grain boundaries, which bounce the charge carriers back and thereby increase resistive losses.
    One of the most exciting ultra-thin semiconductors is molybdenum disulphide (MoS2), whose electronic properties have been under investigation for the past two decades. However, obtaining very large-scale two-dimensional MoS2 without any grain boundaries has proven to be a real challenge. With current large-scale deposition technologies, the grain-boundary-free MoS2 that is essential for making ICs has yet to be achieved with acceptable maturity. Now, however, researchers at the School of Chemical Engineering, University of New South Wales (UNSW) have developed a method to eliminate such grain boundaries based on a new deposition approach.
    “This unique capability was achieved with the help of gallium metal in its liquid state. Gallium is an amazing metal with a low melting point of only 29.8 °C. This means that at normal office temperature it is solid, while it turns into a liquid when placed in the palm of someone’s hand. It is a melted metal, so its surface is atomically smooth. It is also a conventional metal, which means that its surface provides a large number of free electrons for facilitating chemical reactions,” said Ms Yifang Wang, the first author of the paper.
    “By bringing the sources of molybdenum and sulphur near the surface of gallium liquid metal, we were able to realize chemical reactions that form the molybdenum-sulphur bonds to establish the desired MoS2. The formed two-dimensional material is templated onto an atomically smooth surface of gallium, so it is naturally nucleated and grain-boundary free. This means that with a second annealing step, we were able to obtain very large-area MoS2 with no grain boundaries. This is a very important step for scaling up this fascinating ultra-smooth semiconductor,” said Prof Kourosh Kalantar-Zadeh, the lead author of the work.
    The researchers at UNSW are now planning to expand their methods to creating other two-dimensional semiconductors and dielectric materials in order to create a number of materials that can be used as different parts of transistors.

    Story Source:
    Materials provided by ARC Centre of Excellence in Future Low-Energy Electronics Technologies. Note: Content may be edited for style and length.

  • New virtual reality software allows scientists to 'walk' inside cells

    Virtual reality software which allows researchers to ‘walk’ inside and analyse individual cells could be used to understand fundamental problems in biology and develop new treatments for disease.
    The software, called vLUME, was created by scientists at the University of Cambridge and 3D image analysis software company Lume VR Ltd. It allows super-resolution microscopy data to be visualised and analysed in virtual reality, and can be used to study everything from individual proteins to entire cells. Details are published in the journal Nature Methods.
    Super-resolution microscopy, which was awarded the Nobel Prize for Chemistry in 2014, makes it possible to obtain images at the nanoscale by using clever tricks of physics to get around the limits imposed by light diffraction. This has allowed researchers to observe molecular processes as they happen. However, a problem has been the lack of ways to visualise and analyse this data in three dimensions.
    “Biology occurs in 3D, but up until now it has been difficult to interact with the data on a 2D computer screen in an intuitive and immersive way,” said Dr Steven F. Lee from Cambridge’s Department of Chemistry, who led the research. “It wasn’t until we started seeing our data in virtual reality that everything clicked into place.”
    The vLUME project started when Lee and his group met with the Lume VR founders at a public engagement event at the Science Museum in London. While Lee’s group had expertise in super-resolution microscopy, the team from Lume specialised in spatial computing and data analysis, and together they were able to develop vLUME into a powerful new tool for exploring complex datasets in virtual reality.
    “vLUME is revolutionary imaging software that brings humans into the nanoscale,” said Alexandre Kitching, CEO of Lume. “It allows scientists to visualise, question and interact with 3D biological data, in real time all within a virtual reality environment, to find answers to biological questions faster. It’s a new tool for new discoveries.”
    Viewing data in this way can stimulate new initiatives and ideas. For example, Anoushka Handa — a PhD student from Lee’s group — used the software to image an immune cell taken from her own blood, and then stood inside her own cell in virtual reality. “It’s incredible — it gives you an entirely different perspective on your work,” she said.
    The software allows multiple datasets with millions of data points to be loaded in and finds patterns in the complex data using in-built clustering algorithms. These findings can then be shared with collaborators worldwide using image and video features in the software.
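    As a rough indication of what that kind of clustering involves (this is not vLUME’s own pipeline; the synthetic point cloud and the choice of DBSCAN from scikit-learn are illustrative assumptions), a few lines can group 3D localisations and separate them from background noise:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)

    # Fake localisations (in nanometres): three tight clusters plus uniform background noise.
    centres = ([0, 0, 0], [400, 100, 50], [150, 500, 300])
    clusters = [rng.normal(loc=c, scale=20, size=(500, 3)) for c in centres]
    background = rng.uniform(-200, 700, size=(300, 3))
    points = np.vstack(clusters + [background])

    labels = DBSCAN(eps=40, min_samples=10).fit_predict(points)   # -1 marks unclustered noise
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"{len(points)} localisations -> {n_clusters} clusters, "
          f"{np.sum(labels == -1)} background points")
    ```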
    “Data generated from super-resolution microscopy is extremely complex,” said Kitching. “For scientists, running analysis on this data can be very time consuming. With vLUME, we have managed to vastly reduce that wait time allowing for more rapid testing and analysis.”
    The team are mostly using vLUME with biological datasets, such as neurons, immune cells or cancer cells. For example, Lee’s group has been studying how antigen cells trigger an immune response in the body. “Through segmenting and viewing the data in vLUME, we’ve quickly been able to rule out certain hypotheses and propose new ones,” said Lee. “This software allows researchers to explore, analyse, segment and share their data in new ways. All you need is a VR headset.”

    Story Source:
    Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • Multi-state data storage leaving binary behind

    Electronic data is being produced at a breath-taking rate.
    The total amount of data stored in data centres around the globe is of the order of ten zettabytes (a zettabyte is a trillion gigabytes), and that amount is estimated to double every couple of years.
    With 8% of global electricity already being consumed by information and communication technology (ICT), low-energy data storage is a key priority.
    To date there is no clear winner in the race for a next-generation memory that is non-volatile, has great endurance, is highly energy efficient, low cost, high density, and allows fast access.
    A joint international team has comprehensively reviewed ‘multi-state memory’ data storage, which steps ‘beyond binary’ to store more data than just 0s and 1s.
    MULTI-STATE MEMORY: MORE THAN JUST ZEROES AND ONES
    Multi-state memory is an extremely promising technology for future data storage, with the ability to store data in more than a single bit (i.e., 0 or 1) allowing much higher storage density (the amount of data stored per unit area).

    This circumvents the plateauing of benefits historically offered by ‘Moore’s Law’, under which component size halved about every two years. In recent years, the long-predicted plateauing of Moore’s Law has been observed, with charge leakage and spiralling research and fabrication costs putting the nail in the Moore’s Law coffin.
    Non-volatile, multi-state memory (NMSM) offers energy efficiency, high density, non-volatility, fast access, and low cost.
    Storage density is dramatically enhanced without scaling down the dimensions of the memory cell, making memory devices more efficient and less expensive.
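    A back-of-the-envelope illustration of that density gain (the cell count here is arbitrary): a cell that can hold S distinguishable states carries log2(S) bits, so the same array of cells stores proportionally more data as states are added.

    ```python
    import math

    cells = 1_000_000_000  # a hypothetical array of one billion memory cells
    for states in (2, 4, 8, 16):
        bits_per_cell = math.log2(states)                 # binary cell: 1 bit; multi-state: more
        capacity_gib = cells * bits_per_cell / 8 / 2**30  # total capacity in GiB
        print(f"{states:>2} states/cell -> {bits_per_cell:.0f} bits/cell -> {capacity_gib:.2f} GiB")
    ```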
    NEUROMORPHIC COMPUTER MIMICKING THE HUMAN BRAIN
    Multi-state memory also enables neuromorphic computing, a proposed future technology that would mirror the structure of the human brain. This radically different, brain-inspired computing regime could potentially provide the economic impetus for adoption of a novel technology such as NMSM.
    NMSMs allow analog calculation, which could be vital to intelligent, neuromorphic networks, as well as potentially helping us finally unravel the working mechanism of the human brain itself.
    THE STUDY
    The paper reviews device architectures, working mechanisms, material innovation, challenges, and recent progress for leading NMSM candidates, including:
    Flash memory
    magnetic random-access memory (MRAM)
    resistive random-access memory (RRAM)
    ferroelectric random-access memory (FeRAM)
    phase-change memory (PCM)

  • New project to build nano-thermometers could revolutionize temperature imaging

    Cheaper refrigerators? Stronger hip implants? A better understanding of human disease? All of these could be possible and more, someday, thanks to an ambitious new project underway at the National Institute of Standards and Technology (NIST).
    NIST researchers are in the early stages of a massive undertaking to design and build a fleet of tiny ultra-sensitive thermometers. If they succeed, their system will be the first to make real-time measurements of temperature on the microscopic scale in an opaque 3D volume — which could include medical implants, refrigerators, and even the human body.
    The project is called Thermal Magnetic Imaging and Control (Thermal MagIC), and the researchers say it could revolutionize temperature measurements in many fields: biology, medicine, chemical synthesis, refrigeration, the automotive industry, plastic production — “pretty much anywhere temperature plays a critical role,” said NIST physicist Cindi Dennis. “And that’s everywhere.”
    The NIST team has now finished building its customized laboratory spaces for this unique project and has begun the first major phase of the experiment.
    Thermal MagIC will work by using nanometer-sized objects whose magnetic signals change with temperature. The objects would be incorporated into the liquids or solids being studied — the melted plastic that might be used as part of an artificial joint replacement, or the liquid coolant being recirculated through a refrigerator. A remote sensing system would then pick up these magnetic signals, meaning the system being studied would be free from wires or other bulky external objects.
    The final product could make temperature measurements that are 10 times more precise than state-of-the-art techniques, acquired in one-tenth the time in a volume 10,000 times smaller. This equates to measurements accurate to within 25 millikelvin (thousandths of a kelvin) in as little as a tenth of a second, in a volume just a hundred micrometers (millionths of a meter) on a side. The measurements would be “traceable” to the International System of Units (SI); in other words, the readings could be accurately related to the fundamental definition of the kelvin, the world’s basic unit of temperature.

    The system aims to measure temperatures over the range from 200 to 400 kelvin (K), which is about -99 to 260 degrees Fahrenheit (F). This would cover most potential applications — at least the ones the Thermal MagIC team envisions will be possible within the next 5 years. Dennis and her colleagues see potential for a much larger temperature range, stretching from 4 K to 600 K, which would encompass everything from supercooled superconductors to molten lead. But that is not a part of current development plans.
    “This is a big enough sea change that we expect that if we can develop it — and we have confidence that we can — other people will take it and really run with it and do things that we currently can’t imagine,” Dennis said.
    Potential applications are mostly in research and development, but Dennis said the increase in knowledge would likely trickle down to a variety of products, possibly including 3D printers, refrigerators, and medicines.
    What Is It Good For?
    Whether it’s the thermostat in your living room or a high-precision standard instrument that scientists use for laboratory measurements, most thermometers used today can only measure relatively big areas — on a macroscopic as opposed to microscopic level. These conventional thermometers are also intrusive, requiring sensors to penetrate the system being measured and to connect to a readout system by bulky wires.

    Infrared thermometers, such as the forehead instruments used at many doctors’ offices, are less intrusive. But they still only make macroscopic measurements and cannot see beneath surfaces.
    Thermal MagIC should let scientists get around both these limitations, Dennis said.
    Engineers could use Thermal MagIC to study, for the first time, how heat transfer occurs within different coolants on the microscale, which could aid their quest to find cheaper, less energy-intensive refrigeration systems.
    Doctors could use Thermal MagIC to study diseases, many of which are associated with temperature increases — a hallmark of inflammation — in specific parts of the body.
    And manufacturers could use the system to better control 3D printing machines that melt plastic to build custom objects such as medical implants and prostheses. Without the ability to measure temperature on the microscale, 3D printing developers are missing crucial information about what’s going on inside the plastic as it solidifies into an object. More knowledge could improve the strength and quality of 3D-printed materials someday, by giving engineers more control over the 3D printing process.
    Giving It OOMMF
    The first step in making this new thermometry system is creating nano-sized magnets that will give off strong magnetic signals in response to temperature changes. To keep particle concentrations as low as possible, the magnets will need to be 10 times more sensitive to temperature changes than any objects that currently exist.
    To get that kind of signal, Dennis said, researchers will likely need to use multiple magnetic materials in each nano-object. A core of one substance will be surrounded by other materials like the layers of an onion.
    The trouble is that there are practically endless combinations of properties that can be tweaked, including the materials’ composition, size, shape, the number and thickness of the layers, or even the number of materials. Going through all of these potential combinations and testing each one for its effect on the object’s temperature sensitivity could take multiple lifetimes to accomplish.
    To help them get there in months instead of decades, the team is turning to sophisticated software: the Object Oriented MicroMagnetic Framework (OOMMF), a widely used modeling program developed by NIST researchers Mike Donahue and Don Porter.
    The Thermal MagIC team will use this program to create a feedback loop. NIST chemists Thomas Moffat, Angela Hight Walker and Adam Biacchi will synthesize new nano-objects. Then Dennis and her team will characterize the objects’ properties. And finally, Donahue will help them feed that information into OOMMF, which will make predictions about what combinations of materials they should try next.
    “We have some very promising results from the magnetic nano-objects side of things, but we’re not quite there yet,” Dennis said.
    Each Dog Is a Voxel
    So how do they measure the signals given out by tiny concentrations of nano-thermometers inside a 3D object in response to temperature changes? They do it with a machine called a magnetic particle imager (MPI), which surrounds the sample and measures a magnetic signal coming off the nanoparticles.
    Effectively, they measure changes to the magnetic signal coming off one small volume of the sample, called a “voxel” — basically a 3D pixel — and then scan through the entire sample one voxel at a time.
    But it’s hard to focus a magnetic field, said NIST physicist Solomon Woods. So they achieve their goal in reverse.
    Consider a metaphor. Say you have a dog kennel, and you want to measure how loud each individual dog is barking. But you only have one microphone. If multiple dogs are barking at once, your mic will pick up all of that sound, but with only one mic you won’t be able to distinguish one dog’s bark from another’s.
    However, if you could quiet each dog somehow — perhaps by occupying its mouth with a bone — except for a single cocker spaniel in the corner, then your mic would still be picking up all the sounds in the room, but the only sound would be from the cocker spaniel.
    In theory, you could do this with each dog in sequence — first the cocker spaniel, then the mastiff next to it, then the labradoodle next in line — each time leaving just one dog bone-free.
    In this metaphor, each dog is a voxel.
    Basically, the researchers max out the ability of all but one small volume of their sample to respond to a magnetic field. (This is the equivalent of stuffing each dog’s mouth with a delicious bone.) Then, measuring the change in magnetic signal from the entire sample effectively lets you measure just that one little section.
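    A toy numerical sketch of that idea (not NIST’s instrument model; the 3x3x3 grid and the linear temperature response are invented): the detector only ever sees the summed signal, but saturating every voxel except one makes the temperature-dependent part of that sum come from the single free voxel, which can then be read out.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels = 27                                 # a 3x3x3 sample, flattened
    true_temps = rng.uniform(290, 310, n_voxels)  # hypothetical voxel temperatures, in kelvin

    def signal(temps_k, saturated):
        """Per-voxel magnetic response: temperature-dependent unless the voxel is saturated."""
        base = 1.0 - 0.002 * (temps_k - 300.0)    # made-up linear temperature dependence
        return np.where(saturated, 1.0, base)     # saturated voxels give a fixed response

    recovered = []
    for target in range(n_voxels):
        saturated = np.ones(n_voxels, dtype=bool)
        saturated[target] = False                 # leave exactly one voxel free to respond
        total = signal(true_temps, saturated).sum()                    # what the detector measures
        reference = signal(np.full(n_voxels, 300.0), saturated).sum()  # same scan at a known temperature
        recovered.append(300.0 + (reference - total) / 0.002)          # invert the per-voxel model

    print("max temperature error (K):", np.max(np.abs(np.array(recovered) - true_temps)))
    ```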
    MPI systems similar to this exist but are not sensitive enough to measure the kind of tiny magnetic signal that would come from a small change in temperature. The challenge for the NIST team is to boost the signal significantly.
    “Our instrumentation is very similar to MPI, but since we have to measure temperature, not just measure the presence of a nano-object, we essentially need to boost our signal-to-noise ratio over MPI by a thousand or 10,000 times,” Woods said.
    They plan to boost the signal using state-of-the-art technologies. For example, Woods may use superconducting quantum interference devices (SQUIDs), cryogenic sensors that measure extremely subtle changes in magnetic fields, or atomic magnetometers, which detect how energy levels of atoms are changed by an external magnetic field. Woods is working on which are best to use and how to integrate them into the detection system.
    The final part of the project is making sure the measurements are traceable to the SI, a project led by NIST physicist Wes Tew. That will involve measuring the nano-thermometers’ magnetic signals at different temperatures that are simultaneously being measured by standard instruments.
    Other key NIST team members include Thinh Bui, Eric Rus, Brianna Bosch Correa, Mark Henn, Eduardo Correa and Klaus Quelhas.
    Before finishing their new laboratory space, the researchers were able to complete some important work. In a paper published last month in the International Journal on Magnetic Particle Imaging, the group reported that they had found and tested a “promising” nanoparticle material made of iron and cobalt, with temperature sensitivities that varied in a controllable way depending on how the team prepared the material. Adding an appropriate shell material to encase this nanoparticle “core” would bring the team closer to creating a working temperature-sensitive nanoparticle for Thermal MagIC.
    In the past few weeks, the researchers have made further progress testing combinations of materials for the nanoparticles.
    “Despite the challenge of working during the pandemic, we have had some successes in our new labs,” Woods said. “These achievements include our first syntheses of multi-layer nanomagnetic systems for thermometry, and ultra-stable magnetic temperature measurements using techniques borrowed from atomic clock research.”