More stories

  • Quantum algorithm breakthrough

    Researchers led by City College of New York physicist Pouyan Ghaemi report the development of a quantum algorithm with the potential to study a class of many-electron quantum systems using quantum computers. Their paper, entitled “Creating and Manipulating a Laughlin-Type ν=1/3 Fractional Quantum Hall State on a Quantum Computer with Linear Depth Circuits,” appears in the December issue of PRX Quantum, a journal of the American Physical Society.
    “Quantum physics is the fundamental theory of nature which leads to formation of molecules and the resulting matter around us,” said Ghaemi, assistant professor in CCNY’s Division of Science. “It is already known that when we have a macroscopic number of quantum particles, such as electrons in the metal, which interact with each other, novel phenomena such as superconductivity emerge.”
    However, until now, according to Ghaemi, tools to study systems with large numbers of interacting quantum particles and their novel properties have been extremely limited.
    “Our research has developed a quantum algorithm which can be used to study a class of many-electron quantum systems using quantum computers. Our algorithm opens a new venue to use the new quantum devices to study problems which are quite challenging to study using classical computers. Our results are new and motivate many follow up studies,” added Ghaemi.
    On possible applications for this advancement, Ghaemi, who’s also affiliated with the Graduate Center, CUNY, noted: “Quantum computers have witnessed extensive developments during the last few years. Development of new quantum algorithms, regardless of their direct application, will contribute to realizing applications of quantum computers.
    “I believe the direct application of our results is to provide tools to improve quantum computing devices. Their direct real-life application would emerge when quantum computers can be used for daily life applications.”
    His collaborators included scientists from Western Washington University; the University of California, Santa Barbara; Google AI Quantum; and the University of Michigan, Ann Arbor.

    Story Source:
    Materials provided by City College of New York. Note: Content may be edited for style and length.

  • System brings deep learning to 'internet of things' devices

    Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. MIT researchers have developed a system that could bring deep learning neural networks to new — and much smaller — places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the “internet of things” (IoT).
    The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.
    The research will be presented at next month’s Conference on Neural Information Processing Systems. The lead author is Ji Lin, a PhD student in Song Han’s lab in MIT’s Department of Electrical Engineering and Computer Science. Co-authors include Han and Yujun Lin of MIT, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.
    The Internet of Things
    The IoT was born in the early 1980s. Grad students at Carnegie Mellon University, including Mike Kazar ’78, connected a Coca-Cola machine to the internet. The group’s motivation was simple: laziness. They wanted to use their computers to confirm the machine was stocked before trekking from their office to make a purchase. It was the world’s first internet-connected appliance. “This was pretty much treated as the punchline of a joke,” says Kazar, now a Microsoft engineer. “No one expected billions of devices on the internet.”
    Since that Coke machine, everyday objects have become increasingly networked into the growing IoT. That includes everything from wearable heart monitors to smart fridges that tell you when you’re low on milk. IoT devices often run on microcontrollers — simple computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. So pattern-recognition tasks like deep learning are difficult to run locally on IoT devices. For complex analysis, IoT-collected data is often sent to the cloud, making it vulnerable to hacking.

    “How do we deploy neural nets directly on these tiny devices? It’s a new research area that’s getting very hot,” says Han. “Companies like Google and ARM are all working in this direction.” Han is too.
    With MCUNet, Han’s group codesigned two components needed for “tiny deep learning” — the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component: TinyNAS, a neural architecture search algorithm.
    System-algorithm codesign
    Designing a deep network for microcontrollers isn’t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. While the method works, it’s not the most efficient. “It can work pretty well for GPUs or smartphones,” says Lin. “But it’s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”
    So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” says Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.” The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller — with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin.
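    MCUNet's own code is not reproduced in this article. The sketch below is only a schematic illustration of the idea Lin describes, written in Python with invented memory estimates and helper names: candidate networks that cannot fit a given microcontroller's flash and SRAM are discarded before accuracy is ever evaluated, so the search space itself is tailored to the device.

    ```python
    # Hypothetical sketch (not the MCUNet/TinyNAS code): prune an architecture
    # search space so that every remaining candidate fits the target
    # microcontroller's flash and SRAM budget before accuracy is ever measured.

    import random
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        depth: int        # number of layers
        width: int        # channels per layer
        resolution: int   # input image side length

        def flash_kb(self) -> float:
            # Rough stand-in for int8 weight storage (~depth * width^2 parameters).
            return self.depth * self.width ** 2 / 1024

        def peak_ram_kb(self) -> float:
            # Rough stand-in for the largest int8 activation map held in SRAM.
            return self.width * self.resolution ** 2 / 1024

    def sample_space(n):
        return [Candidate(depth=random.choice([8, 12, 16]),
                          width=random.choice([16, 24, 32, 48]),
                          resolution=random.choice([48, 64, 96, 128]))
                for _ in range(n)]

    def feasible_candidates(flash_limit_kb, ram_limit_kb):
        # Keep only networks that fit the device; a real system would then train
        # or estimate accuracy for these and deploy the best one.
        return [c for c in sample_space(1000)
                if c.flash_kb() <= flash_limit_kb and c.peak_ram_kb() <= ram_limit_kb]

    # Example: a microcontroller with 1 MB of flash and 320 kB of SRAM.
    fits = feasible_candidates(flash_limit_kb=1024, ram_limit_kb=320)
    print(f"{len(fits)} candidate networks fit this microcontroller")
    ```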

    To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.
    The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
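    TinyEngine itself is likewise not shown here. As a rough NumPy illustration of the in-place depth-wise convolution idea (all shapes and names below are invented for the example), each channel of a depth-wise convolution depends only on itself, so it can be computed into a single small scratch buffer and written back over its own slot; peak memory then stays near one activation map plus one channel rather than two full maps.

    ```python
    # Simplified illustration of in-place depth-wise convolution (not TinyEngine code).
    # Each channel is convolved independently into a per-channel scratch buffer and
    # then written back over its own slot in the activation tensor.

    import numpy as np

    def depthwise_conv_inplace(x, kernels):
        """x: activations of shape (C, H, W); kernels: shape (C, k, k); 'same' padding."""
        c, h, w = x.shape
        k = kernels.shape[1]
        pad = k // 2
        scratch = np.empty((h, w), dtype=x.dtype)   # only one extra channel of memory

        for ch in range(c):
            padded = np.pad(x[ch], pad)             # small per-channel working copy
            for i in range(h):
                for j in range(w):
                    scratch[i, j] = np.sum(padded[i:i + k, j:j + k] * kernels[ch])
            x[ch] = scratch                          # overwrite this channel in place
        return x

    # Example: 16 channels of 32x32 activations with 3x3 per-channel kernels.
    acts = np.random.rand(16, 32, 32).astype(np.float32)
    filt = np.random.rand(16, 3, 3).astype(np.float32)
    out = depthwise_conv_inplace(acts, filt)
    print(out.shape)  # (16, 32, 32)
    ```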
    MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images — the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. “Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”
    The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual “wake-word” tasks, where a user initiates an interaction with a computer using vocal cues (think: “Hey, Siri”) or simply by entering a room. The experiments highlight MCUNet’s adaptability to numerous applications.
    “Huge potential”
    The promising test results give Han hope that MCUNet will become the new industry standard for microcontrollers. “It has huge potential,” he says.
    The advance “extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,” says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.”
    MCUNet could also make IoT devices more secure. “A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”
    Analyzing data locally reduces the risk of personal information being stolen — including personal health data. Han envisions smart watches with MCUNet that don’t just sense users’ heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information. MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.
    Plus, MCUNet’s slim computing footprint translates into a slim carbon footprint. “Our big dream is for green AI,” says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy. “Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,” says Han.

  • Order from chaos: Seemingly random photonic crystals greatly improve laser scanning

    Scanning lasers — from barcode scanners at the supermarket to cameras on newer smartphones — are an indispensable part of our daily lives, relying on lasers and detectors for pinpoint precision.
    Distance and object recognition using LiDAR — a portmanteau of light and radar — is becoming increasingly common: reflected laser beams record the surrounding environment, providing crucial data for autonomous cars, agricultural machines, and factory robots.
    Current technology bounces the laser beams off of moving mirrors, a mechanical method that results in slower scanning speeds and inaccuracies, not to mention the large physical size and complexity of devices housing a laser and mirrors.
    Publishing in Nature Communications, a research team from Kyoto University’s Graduate School of Engineering describes a new beam-scanning device utilizing ‘photonic crystals’, eliminating the need for moving parts.
    Instead of arranging the lattice points of the crystals in an orderly array, the researchers found that varying the lattice points’ shapes and positions caused the laser beam to be emitted in unique directions.
    “What results is a lattice of photonic crystals that looks like a slab of Swiss cheese, where each crystal is calculated to emit the beam in a specific direction,” explains Susumu Noda, who led the team.

    “By eliminating mechanical mirrors, we’ve made a faster and more reliable beam-scanning device.”
    Photonic crystal lasers are a type of ‘semiconductor laser’ whose lattice points can be regarded as nanoscale antennae, which can be arranged to cause a laser beam to be emitted perpendicularly from the surface. But initially the beam would only go in a single direction on a two-dimensional plane; the team needed more area to be covered.
    Arranging the antennae positions cyclically resulted in a successful direction change, but a decrease in power output and deformed shape made this solution unviable.
    “Modulating the antennae positions caused light emitted from adjacent antennae to cancel each other out,” continues Noda, “leading us to try changing antenna sizes.”
    “Eventually, we discovered that adjusting both position and size resulted in a seemingly random photonic crystal, producing an accurate beam without power loss. We called this a ‘dually modulated photonic crystal’.”
    By organizing these crystals — each designed to emit a beam in a unique direction — in a matrix, the team was able to build a compact, switchable, two-dimensional beam scanner without the need for any mechanical parts.
    The scientists have successfully constructed a scanner that can generate beams in one hundred different directions: a resolution of 10×10. This has also been combined with a diverging laser beam, resulting in a new type of LiDAR with enhanced scope to detect objects.
    The team estimates that with further refinements, the resolution could be increased by a factor of 900: up to a 300×300 resolution range.
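    For context, the quoted factor of 900 is simply the ratio of addressable beam directions between the two resolutions (our arithmetic, not a figure stated by the authors):

    \[
    \frac{300 \times 300}{10 \times 10} = \frac{90\,000}{100} = 900.
    \]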
    “At first there was a great deal of interest in whether a structure that is seemingly so random could actually work,” concludes Noda. “We now believe that eventually we will be able to develop a LiDAR system small enough to hold on a fingertip.”

    Story Source:
    Materials provided by Kyoto University. Note: Content may be edited for style and length.

  • New green materials could power smart devices using ambient light

    We are using more and more smart devices, such as smartphones, smart speakers, and wearable health and wellness sensors, in our homes, offices, and public buildings. However, the batteries they use can deplete quickly and contain toxic, rare, and environmentally damaging chemicals, so researchers are looking for better ways to power these devices.
    One way to power them is to convert indoor light from ordinary bulbs into energy, much as solar panels (solar photovoltaics) harvest energy from sunlight. However, because the two light sources have different properties, the materials used for solar panels are not suitable for harvesting indoor light.
    Now, researchers from Imperial College London, Soochow University in China, and the University of Cambridge have discovered that new green materials currently being developed for next-generation solar panels could be useful for indoor light harvesting. They report their findings today in Advanced Energy Materials.
    Co-author Dr Robert Hoye, from the Department of Materials at Imperial, said: “By efficiently absorbing the light coming from lamps commonly found in homes and buildings, the materials we investigated can turn light into electricity with an efficiency already in the range of commercial technologies. We have also already identified several possible improvements, which would allow these materials to surpass the performance of current indoor photovoltaic technologies in the near future.”
    The team investigated ‘perovskite-inspired materials’, which were created to circumvent problems with materials called perovskites, which were developed for next-generation solar cells. Although perovskites are cheaper to make than traditional silicon-based solar panels and deliver similar efficiency, perovskites contain toxic lead substances. This drove the development of perovskite-inspired materials, which are instead based on safer elements like bismuth and antimony.
    Despite being more environmentally friendly, these perovskite-inspired materials are not as efficient at absorbing sunlight. However, the team found that the materials are much more effective at absorbing indoor light, with efficiencies that are promising for commercial applications. Crucially, the researchers demonstrated that the power provided by these materials under indoor illumination is already sufficient to operate electronic circuits.
    Co-author Professor Vincenzo Pecunia, from Soochow University, said: “Our discovery opens up a whole new direction in the search for green, easy-to-make materials to sustainably power our smart devices.
    “In addition to their eco-friendly nature, these materials could potentially be processed onto unconventional substrates such as plastics and fabric, which are incompatible with conventional technologies. Therefore, lead-free perovskite-inspired materials could soon enable battery-free devices for wearables, healthcare monitoring, smart homes, and smart cities.”

    Story Source:
    Materials provided by Imperial College London. Original written by Hayley Dunning. Note: Content may be edited for style and length.

  • Computer vision app allows easier monitoring of diabetes

    A computer vision technology developed by University of Cambridge engineers has now been turned into a free mobile phone app for regular monitoring of glucose levels in people with diabetes.
    The app uses computer vision techniques to read and record the glucose levels, time and date displayed on a typical glucose meter via the camera on a mobile phone. The technology, which doesn’t require an internet or Bluetooth connection, works for any type of glucose meter, in any orientation and in a variety of light levels. It also reduces waste by eliminating the need to replace high-quality non-Bluetooth meters, making it a cost-effective solution for the NHS.
    Working with UK glucose testing company GlucoRx, the Cambridge researchers have developed the technology into a free mobile phone app, called GlucoRx Vision, which is now available on the Apple App Store and Google Play Store.
    To use the app, users simply take a picture of their glucose meter and the results are automatically read and recorded, allowing much easier monitoring of blood glucose levels.
    In addition to the glucose meters which people with diabetes use on a daily basis, many other types of digital meters are used in the medical and industrial sectors. However, many of these meters still do not have wireless connectivity, so connecting them to phone tracking apps often requires manual input.
    “These meters work perfectly well, so we don’t want them sent to landfill just because they don’t have wireless connectivity,” said Dr James Charles from Cambridge’s Department of Engineering. “We wanted to find a way to retrofit them in an inexpensive and environmentally-friendly way using a mobile phone app.”
    In addition to his interest in solving the challenge from an engineering point of view, Charles also had a personal interest in the problem. He has type 1 diabetes and needs to take as many as ten glucose readings per day. Each reading is then manually entered into a tracking app to help determine how much insulin he needs to regulate his blood glucose levels.

    “From a purely selfish point of view, this was something I really wanted to develop,” he said.
    “We wanted something that was efficient, quick and easy to use,” said Professor Roberto Cipolla, also from the Department of Engineering. “Diabetes can affect eyesight or even lead to blindness, so we needed the app to be easy to use for those with reduced vision.”
    The computer vision technology behind the GlucoRx app is made up of two steps. First, the screen of the glucose meter is detected. The researchers used a single training image and augmented it with random backgrounds, particularly backgrounds with people. This helps ensure the system is robust when the user’s face is reflected in the phone’s screen.
    Second, a neural network called LeDigit detects each digit on the screen and reads it. The network is trained with computer-generated synthetic data, avoiding the need for labour-intensive labelling of data which is commonly needed to train a neural network.
    “Since the font on these meters is digital, it’s easy to train the neural network to recognise lots of different inputs and synthesise the data,” said Charles. “This makes it highly efficient to run on a mobile phone.”
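    The researchers' training code is not included in this article. The sketch below, in Python with every name and parameter invented for illustration, shows the general recipe Charles describes: render seven-segment-style digits onto noisy backgrounds so that each training image carries a correct label by construction, with no manual annotation.

    ```python
    # Illustrative sketch of synthetic training data for digit recognition
    # (not the LeDigit code): seven-segment digits are rendered with random
    # brightness and background noise, and the label is known by construction.

    import numpy as np

    # Which of the seven segments (top, top-left, top-right, middle,
    # bottom-left, bottom-right, bottom) are lit for each digit 0-9.
    SEGMENTS = {
        0: "1110111", 1: "0010010", 2: "1011101", 3: "1011011", 4: "0111010",
        5: "1101011", 6: "1101111", 7: "1010010", 8: "1111111", 9: "1111011",
    }

    def render_digit(d, size=32, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        img = rng.normal(0.1, 0.05, (size, size))      # noisy dark background
        t = size // 8                                   # segment thickness
        fg = rng.uniform(0.7, 1.0)                      # random foreground brightness
        boxes = [                                       # (row slice, column slice) per segment
            (slice(0, t), slice(t, size - t)),                                    # top
            (slice(t, size // 2), slice(0, t)),                                   # top-left
            (slice(t, size // 2), slice(size - t, size)),                         # top-right
            (slice(size // 2 - t // 2, size // 2 + t // 2), slice(t, size - t)),  # middle
            (slice(size // 2, size - t), slice(0, t)),                            # bottom-left
            (slice(size // 2, size - t), slice(size - t, size)),                  # bottom-right
            (slice(size - t, size), slice(t, size - t)),                          # bottom
        ]
        for lit, (rows, cols) in zip(SEGMENTS[d], boxes):
            if lit == "1":
                img[rows, cols] = fg
        return np.clip(img, 0.0, 1.0)

    def make_dataset(n, seed=0):
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, 10, n)
        images = np.stack([render_digit(int(d), rng=rng) for d in labels])
        return images, labels      # labels are correct by construction: no hand labelling

    images, labels = make_dataset(1000)
    print(images.shape, labels[:10])
    ```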
    “It doesn’t matter which orientation the meter is in — we tested it in all types of orientations, viewpoints and light levels,” said Cipolla, who is also a Fellow of Jesus College. “The app will vibrate when it’s read the information, so you get a clear signal when you’ve done it correctly. The system is accurate across a range of different types of meters, with read accuracies close to 100%.”

    In addition to blood glucose monitors, the researchers also tested their system on other types of digital meters, such as blood pressure monitors and kitchen and bathroom scales. The researchers also recently presented their results at the 31st British Machine Vision Conference.
    GlucoRx initially approached Cipolla’s team in 2018 to develop a cost-effective and environmentally-friendly solution to the problem of non-connected glucose meters, and once the technology had been shown to be sufficiently robust, the company worked with the Cambridge researchers to develop the app.
    “We have been working in partnership with Cambridge University on this unique solution, which will help change the management of diabetes for years to come,” said Chris Chapman, Chief Operating Officer of GlucoRx. “We will soon make this solution available to all of our more than 250,000 patients.”
    As for Charles, who has been using the app to track his glucose levels, he said it “makes the whole process easier. I’ve now forgotten what it was like to enter the values in manually, but I do know I wouldn’t want to go back to it. There are a few areas in the system which could still be made even better, but all in all I’m very happy with the outcome.”

  • Zinc-ion hybrid capacitors with ideal anions in the electrolyte show extra-long performance

    Metal-ion hybrid capacitors combine the properties of capacitors and batteries. One electrode uses the capacitive mechanism, the other the battery-type redox processes. Scientists have now scrutinized the role of anions in the electrolyte. The results, which have been published in the journal Angewandte Chemie, reveal the importance of sulfate anions. Sulfate-based electrolytes gave zinc-ion hybrid capacitors outstanding performance and extra-long operability.
    Capacitors can take up and release an enormous amount of charge in a short time, whereas batteries can store a lot of energy in a small volume. To combine both properties, scientists are investigating hybrid electrochemical cells, which contain both capacitor- and battery-type electrodes. Among these cells, researchers have identified metal-ion hybrid capacitors as especially promising devices. Here, the positive electrode includes pseudocapacitive properties, which means it can also store energy in the manner of a battery, by intercalation of the metal ions, while the negative electrode is made of a redox-active metal.
    However, their electrolyte has long been neglected, says Chunyi Zhi who is investigating battery materials together with his team at the City University of Hong Kong. The researchers believe the type of electrolyte anion affects the performance of the device. “Paying more attention to the introduction of appropriate anions can effectively improve the power and energy density of a capacitor,” they say.
    The researchers focused their attention on zinc-ion capacitors. This cell type consists of a zinc metal anode and a cathode made of titanium nitride nanofibers. The nanofibers are robust, and their porous surface allows the electrolyte to infiltrate. The scientists argue that the electrolyte anions, when attached to the titanium nitride surface, make the material more conductive. Moreover, the adsorbed anions may directly contribute to the charging process. The charging of the hybrid capacitor involves the extraction of the intercalated zinc ions.
    Zhi and his colleagues compared the effects of three electrolyte anions: sulfate, acetate, and chloride. They looked at both their binding to the electrode surface and the performance of the electrochemical cells. The result was clear.
    The scientists reported that the sulfate anions stood out among the three anions. They observed that cells based on a zinc sulfate electrolyte performed best, and that the sulfates bound more strongly to the titanium nitride surface than the other anions. Moreover, sulfate-treated electrodes showed the lowest self-discharge. The authors attributed the findings to the electronic effects of sulfate. Its electron-pulling nature provides tight binding to the surface atoms and prevents the electrode from self-discharging, the authors concluded.
    For a zinc-sulfate-based zinc-ion hybrid capacitor, the scientists reported high-performance operation for more than nine months. Moreover, these devices are flexible, which is especially useful for portable electronics. The scientists tested the device in an electronic watch and found excellent performance.

    Story Source:
    Materials provided by Wiley. Note: Content may be edited for style and length.

  • Ultra-fast polymer modulators that can take the heat

    Datacenters could benefit from lower cooling costs thanks in part to ultra-fast electro-optic modulators developed by researchers in Japan using a polymer that is stable even at temperatures that would boil water.
    Reported in the journal Nature Communications, the silicon-polymer hybrid modulators can transmit 200 gigabits of data per second at up to 110 °C and could enable optical data interconnections that are both extremely fast and reliable at high temperatures, reducing the need for cooling and expanding applications in harsh environments like rooftops and cars.
    Demand for high-speed data transmission such as for high-definition media streaming has exploded in recent years, and optical communications are central to many of the necessary data connections. A critical component is the modulator, which puts data on a beam of light passing through an electro-optic material that can change its optical properties in response to an electric field.
    Most modulators currently use inorganic semiconductors or crystals as the electro-optic material, but organic-based polymers have the advantages that they can be fabricated with excellent electro-optic properties at a low cost and operated at low voltages.
    “Polymers have great potential for use in modulators, but reliability issues still need to be overcome for many industry applications,” explains Shiyoshi Yokoyama, professor of Kyushu University’s Institute for Materials Chemistry and Engineering and leader of the research collaboration.
    One challenge is that parts of the molecules in the polymer layer must be organized through a process called poling to obtain good electro-optic properties, but this organization can be lost when the layer gets warm enough to begin softening — a point referred to as the glass transition temperature.

    However, if the modulators and other components can operate rapidly and reliably even at high temperatures, datacenters could run warmer, thereby reducing their energy usage — nearly 40% of which is currently estimated to go toward cooling.
    By incorporating appropriate chemical groups, the research team designed a polymer that exhibits superb electro-optic properties and a high glass transition temperature of 172 °C. Using this polymer, they achieved ultra-fast signaling at elevated temperatures in a silicon-polymer hybrid modulator based on a Mach-Zehnder interferometer configuration, which is less sensitive to temperature changes than some other architectures.
    In the modulators, composed of multiple layers including the polymer and silicon, an incoming laser beam is split into two arms of equal length. Applying an electric field across the electro-optic polymer in one of the arms changes the optical properties such that the light wave slightly shifts. When the two arms come back together, interference between the modified and unmodified beams changes the strength of the mixed output beam depending on the amount of phase shift, thereby encoding data in the light.
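    The paper's equations are not reproduced in this article, but the behaviour described above corresponds to the textbook relation for an ideal Mach-Zehnder modulator, in which the output intensity depends on the phase difference Δφ between the two arms:

    \[
    I_{\mathrm{out}} = \frac{I_{\mathrm{in}}}{2}\left(1 + \cos\Delta\phi\right) = I_{\mathrm{in}}\cos^{2}\!\frac{\Delta\phi}{2},
    \]

    so driving the electro-optic arm between Δφ = 0 and Δφ = π swings the output from fully on to fully off, which is how the applied voltage writes data onto the light.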
    Using a simple data signaling scheme of just on and off states, rates of over 100 Gbit/s were achieved, while a more complicated method using four signal levels could achieve a rate of 200 Gbit/s.
    This performance was maintained with negligible changes even when operating the devices over temperatures ranging from 25 °C to 110 °C and after subjecting the devices to 90 °C heat for 100 hours, demonstrating the robustness and stability of the modulators over an extraordinarily wide range of temperatures.
    “Stable operation even when the temperature fluctuates up to 110 °C is wonderful,” says Yokoyama. “This temperature range means operation is possible in controlled environments such as datacenters, even at higher than normal temperatures, as well as in many harsh environments where temperature is not well controlled.”
    The current devices are millimeter sized, making them relatively large compared to other designs, but the researchers are looking into ways to further reduce the footprint to allow dense arrays of such modulators in a small area.
    “This kind of performance shows just how promising polymers are for future telecommunications technologies,” Yokoyama states.

    Story Source:
    Materials provided by Kyushu University. Note: Content may be edited for style and length.

  • Key advance for printing circuitry on wearable fabrics

    Electronic shirts that keep the wearer comfortably warm or cool, as well as medical fabrics that deliver drugs, monitor the condition of a wound and perform other tasks, may one day be manufactured more efficiently thanks to a key advance by Oregon State University researchers.
    The breakthrough involves inkjet printing and materials with a crystal structure discovered nearly two centuries ago. The upshot is the ability to apply circuitry, with precision and at low processing temperatures, directly onto cloth — a promising potential solution to the longstanding tradeoff between performance and fabrication costs.
    “Much effort has gone into integrating sensors, displays, power sources and logic circuits into various fabrics for the creation of wearable, electronic textiles,” said Chih-Hung Chang, professor of chemical engineering at Oregon State. “One hurdle is that fabricating rigid devices on cloth, which has a surface that’s both porous and non-uniform, is tedious and expensive, requiring a lot of heat and energy, and is hard to scale up. And first putting the devices onto something solid, and then putting that solid substrate onto fabric, is problematic too — it limits the flexibility and wearability of the fabric and also can necessitate cumbersome changes to the fabric manufacturing process itself.”
    Chang and collaborators in the OSU College of Engineering and at Rutgers University tackled those challenges by coming up with a stable, printable ink, based on binary metal iodide salts, that thermally transforms into a dense compound of cesium, tin and iodine.
    The resulting film of Cs2SnI6 has a crystal structure that makes it a perovskite.
    Perovskites trace their roots to a long-ago discovery by a German mineralogist. In the Ural Mountains in 1839, Gustav Rose came upon an oxide of calcium and titanium with an intriguing crystal structure and named it in honor of Russian nobleman Lev Perovski.

    Perovskite now refers to a range of materials that share the crystal lattice of the original. Interest in them began to accelerate in 2009 after a Japanese scientist, Tsutomu Miyasaka, discovered that some perovskites are effective absorbers of light. Materials with a perovskite structure that are based on a metal and a halogen such as iodine are semiconductors, essential components of most electrical circuits.
    Thanks to the perovskite film, Chang’s team was able to print negative-temperature-coefficient thermistors directly onto woven polyester at temperatures as low as 120 degrees Celsius — just 20 degrees higher than the boiling point of water.
    A thermistor is a type of resistor, an electrical component that controls the amount of current entering a circuit. Thermistors are resistors whose resistance is temperature-dependent, and this research involved negative-temperature-coefficient, or NTC, thermistors — their resistance decreases as the temperature increases.
    “A change in resistance due to heat is generally not a good thing in a standard resistor, but the effect can be useful in many temperature detection circuits,” Chang said. “NTC thermistors can be used in virtually any type of equipment where temperature plays a role. Even small temperature changes can cause big changes in their resistance, which makes them ideal for accurate temperature measurement and control.”
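    The article does not give a characteristic equation for the printed thermistors. For orientation only, NTC thermistors are conventionally described by the standard B-parameter model (a general textbook relation, not a result of this study):

    \[
    R(T) = R_{0}\exp\!\left[B\left(\frac{1}{T} - \frac{1}{T_{0}}\right)\right],
    \]

    where R_0 is the resistance at a reference temperature T_0 (temperatures in kelvin) and B is a material constant, typically a few thousand kelvin. Because R falls steeply as T rises, small temperature changes produce large, easily measured changes in resistance, which is the property Chang highlights.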
    The research, which included Shujie Li and Alex Kosek of the OSU College of Engineering and Mohammad Naim Jahangir and Rajiv Malhotra of Rutgers University, demonstrates directly fabricating high-performance NTC thermistors onto fabrics at half the temperature used by current state-of-the-art manufacturers, Chang said.
    “In addition to requiring more energy, the higher temperatures create compatibility issues with many fabrics,” he said. “The simplicity of our ink, the process’ scalability and the thermistor performance are all promising for the future of wearable e-textiles.”
    The Walmart Manufacturing Innovation Foundation and National Science Foundation supported this study. Findings were published in Advanced Functional Materials.

    Story Source:
    Materials provided by Oregon State University. Original written by Steve Lundeberg. Note: Content may be edited for style and length.