More stories

  • Fully recyclable printed electronics developed

    Engineers at Duke University have developed the world’s first fully recyclable printed electronics. By demonstrating a crucial and relatively complex computer component — the transistor — created with three carbon-based inks, the researchers hope to inspire a new generation of recyclable electronics to help fight the growing global epidemic of electronic waste.
    The work appears online April 26 in the journal Nature Electronics.
    “Silicon-based computer components are probably never going away, and we don’t expect easily recyclable electronics like ours to replace the technology and devices that are already widely used,” said Aaron Franklin, the Addy Professor of Electrical and Computer Engineering at Duke. “But we hope that by creating new, fully recyclable, easily printed electronics and showing what they can do, they might become widely used in future applications.”
    As people worldwide adopt more electronics into their lives, there’s an ever-growing pile of discarded devices that either don’t work anymore or have been cast away in favor of a newer model. According to a United Nations estimate, less than a quarter of the millions of pounds of electronics thrown away each year is recycled. And the problem is only going to get worse as the world upgrades to 5G devices and the Internet of Things (IoT) continues to expand.
    Part of the problem is that electronic devices are difficult to recycle. Large plants employ hundreds of workers who hack at bulky devices. But while scraps of copper, aluminum and steel can be recycled, the silicon chips at the heart of the devices cannot.
    In the new study, Franklin and his laboratory demonstrate a completely recyclable, fully functional transistor made out of three carbon-based inks that can be easily printed onto paper or other flexible, environmentally friendly surfaces. Carbon nanotubes and graphene inks are used for the semiconductors and conductors, respectively. While these materials are not new to the world of printed electronics, Franklin says, the path to recyclability was opened with the development of a wood-derived insulating dielectric ink called nanocellulose.
    “Nanocellulose is biodegradable and has been used in applications like packaging for years,” said Franklin. “And while people have long known about its potential applications as an insulator in electronics, nobody has figured out how to use it in a printable ink before. That’s one of the keys to making these fully recyclable devices functional.”
    The researchers developed a method for suspending crystals of nanocellulose that were extracted from wood fibers that — with the sprinkling of a little table salt — yields an ink that performs admirably as an insulator in their printed transistors. Using the three inks in an aerosol jet printer at room temperature, the team shows that their all-carbon transistors perform well enough for use in a wide variety of applications, even six months after the initial printing.
    The team then demonstrates just how recyclable their design is. By submerging their devices in a series of baths, gently vibrating them with sound waves and centrifuging the resulting solution, the carbon nanotubes and graphene are sequentially recovered with an average yield of nearly 100%. Both materials can then be reused in the same printing process while losing very little of their performance viability. And because the nanocellulose is made from wood, it can simply be recycled along with the paper it was printed on.
    Compared to a resistor or capacitor, a transistor is a relatively complex computer component, used in applications such as power control, logic circuits and various sensors. Franklin explains that, by demonstrating a fully recyclable, multifunctional printed transistor first, he hopes to take a first step toward the technology being commercially pursued for simple devices. For example, Franklin says he could imagine the technology being used in a large building that needs thousands of simple environmental sensors to monitor its energy use, or in customized biosensing patches for tracking medical conditions.
    “Recyclable electronics like this aren’t going to go out and replace an entire half-trillion-dollar industry by any means, and we’re certainly nowhere near printing recyclable computer processors,” said Franklin. “But demonstrating these types of new materials and their functionality is hopefully a stepping stone in the right direction for a new type of electronics lifecycle.”
    This work was supported by the Department of Defense Congressionally Directed Medical Research Program (W81XWH-17-2-0045), the National Institutes of Health (1R01HL146849) and the Air Force Office of Scientific Research (FA9550-18-1-0222).
    Story Source:
    Materials provided by Duke University. Original written by Ken Kingery. Note: Content may be edited for style and length.

  • Simple robots, smart algorithms

    Anyone with children knows that while controlling one child can be hard, controlling many at once can be nearly impossible. Getting swarms of robots to work collectively can be equally challenging, unless researchers carefully choreograph their interactions — like planes in formation — using increasingly sophisticated components and algorithms. But what can be reliably accomplished when the robots on hand are simple, inconsistent, and lack sophisticated programming for coordinated behavior?
    A team of researchers led by Dana Randall, ADVANCE Professor of Computing, and Daniel Goldman, Dunn Family Professor of Physics, both at Georgia Institute of Technology, sought to show that even the simplest of robots can still accomplish tasks well beyond the capabilities of one, or even a few, of them. The goal of accomplishing these tasks with what the team dubbed “dumb robots” (essentially mobile granular particles) exceeded their expectations: the researchers were able to remove all sensors, communication, memory and computation, and instead accomplished a set of tasks by leveraging the robots’ physical characteristics, a trait the team terms “task embodiment.”
    The team’s BOBbots, or “behaving, organizing, buzzing bots” that were named for granular physics pioneer Bob Behringer, are “about as dumb as they get,” explains Randall. “Their cylindrical chassis have vibrating brushes underneath and loose magnets on their periphery, causing them to spend more time at locations with more neighbors.” The experimental platform was supplemented by precise computer simulations led by Georgia Tech physics student Shengkai Li, as a way to study aspects of the system inconvenient to study in the lab.
    Despite the simplicity of the BOBbots, the researchers discovered that, as the robots move and bump into each other, “compact aggregates form that are capable of collectively clearing debris that is too heavy for one alone to move,” according to Goldman. “While most people build increasingly complex and expensive robots to guarantee coordination, we wanted to see what complex tasks could be accomplished with very simple robots.”
    Their work, as reported April 23, 2021 in the journal Science Advances, was inspired by a theoretical model of particles moving around on a chessboard. A theoretical abstraction known as a self-organizing particle system was developed to rigorously study a mathematical model of the BOBbots. Using ideas from probability theory, statistical physics and stochastic algorithms, the researchers were able to prove that the theoretical model undergoes a phase change as the magnetic interactions increase — abruptly changing from dispersed to aggregating in large, compact clusters, similar to phase changes we see in common everyday systems, like water and ice.
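The aggregation dynamic described above can be sketched as a toy lattice simulation: particles take random unit steps, and a move is accepted with a bias lam raised to the change in the number of occupied neighbors, so larger lam favors contact and drives clustering. This is a minimal illustration in the spirit of the self-organizing particle model, not the authors’ code; the grid size, particle count, bias and step count are all arbitrary placeholders.

```python
import random

def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def contacts(occ, p):
    # number of occupied sites adjacent to p
    return sum(1 for q in neighbors(p) if q in occ)

def simulate(n_particles=30, size=12, lam=4.0, steps=20000, seed=1):
    rng = random.Random(seed)
    occ = set()
    while len(occ) < n_particles:
        occ.add((rng.randrange(size), rng.randrange(size)))
    for _ in range(steps):
        p = rng.choice(sorted(occ))      # pick a particle (sorted for determinism)
        q = rng.choice(neighbors(p))     # propose a unit step
        if q in occ or not (0 <= q[0] < size and 0 <= q[1] < size):
            continue                     # blocked or off the grid
        occ.remove(p)
        gain = contacts(occ, q) - contacts(occ, p)
        # bias acceptance toward moves that create more neighbor contacts;
        # lam > 1 plays the role of the magnetic attraction strength
        if rng.random() < min(1.0, lam ** gain):
            occ.add(q)
        else:
            occ.add(p)
    return occ
```

With lam well above 1 the particles tend to end up in a few compact clusters; with lam near 1 the walk is unbiased and they stay dispersed, mirroring the phase change the analysis proves.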
    “The rigorous analysis not only showed us how to build the BOBbots, but also revealed an inherent robustness of our algorithm that allowed some of the robots to be faulty or unpredictable,” notes Randall, who also serves as a professor of computer science and adjunct professor of mathematics at Georgia Tech.
    Story Source:
    Materials provided by Georgia Institute of Technology.

  • Toward new solar cells with active learning

    Scientists from the Theory Department of the Fritz Haber Institute in Berlin and the Technical University of Munich use machine learning to discover suitable molecular materials. To cope with the vast number of candidate molecules, the machine decides for itself which data it needs.
    How can I prepare myself for something I do not yet know? Scientists from the Fritz Haber Institute in Berlin and from the Technical University of Munich have addressed this almost philosophical question in the context of machine learning. Learning is no more than drawing on prior experience. In order to deal with a new situation, one needs to have dealt with roughly similar situations before. In machine learning, this correspondingly means that a learning algorithm needs to have been exposed to roughly similar data. But what can we do if there is a nearly infinite amount of possibilities so that it is simply impossible to generate data that covers all situations?
    This problem comes up a lot when dealing with an endless number of possible candidate molecules. Organic semiconductors enable important future technologies such as portable solar cells or rollable displays. For such applications, improved organic molecules — which make up these materials — need to be discovered. Tasks of this nature increasingly rely on machine-learning methods trained on data from computer simulations or experiments. The number of potentially possible small organic molecules is, however, estimated to be on the order of 10^33. This overwhelming number of possibilities makes it practically impossible to generate enough data to reflect such a large material diversity. In addition, many of those molecules are not even suitable for organic semiconductors. One is essentially looking for the proverbial needle in a haystack.
    In their work published recently in Nature Communications, the team led by Prof. Karsten Reuter, Director of the Theory Department at the Fritz Haber Institute, addressed this problem using so-called active learning. Instead of learning from existing data, the machine learning algorithm iteratively decides for itself which data it actually needs to learn about the problem. The scientists first carry out simulations on a few smaller molecules and obtain data related to the molecules’ electrical conductivity — a measure of their usefulness as possible solar cell materials. Based on this data, the algorithm decides whether small modifications to these molecules could already lead to useful properties or whether it is uncertain due to a lack of similar data. In both cases, it automatically requests new simulations, improves itself through the newly generated data, considers new molecules, and repeats this procedure. The scientists show how new and promising molecules can be identified efficiently in this way, while the algorithm continues its exploration of the vast molecular space: every week, new molecules are proposed that could usher in the next generation of solar cells, and the algorithm keeps improving.
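The loop described above can be sketched in a few lines. Here a toy one-dimensional objective stands in for the expensive conductivity simulation, a nearest-neighbor lookup serves as the surrogate model, and the distance to the closest labeled point serves as a crude uncertainty bonus that triggers new “simulations.” The objective, search space and scoring rule are all hypothetical stand-ins, not the published method.

```python
import random

def conductivity(x):
    # hypothetical stand-in for an expensive electronic-structure simulation;
    # peaks at x = 0.7 in this toy search space
    return -(x - 0.7) ** 2

def active_learning(n_rounds=20, seed=0):
    rng = random.Random(seed)
    candidates = [i / 100 for i in range(101)]   # toy "molecular space"
    data = [(x, conductivity(x)) for x in rng.sample(candidates, 3)]
    for _ in range(n_rounds):
        labeled = {x for x, _ in data}
        pool = [x for x in candidates if x not in labeled]

        def score(x):
            # predicted value from the nearest labeled point,
            # plus distance to it as an uncertainty bonus
            nx, ny = min(data, key=lambda d: abs(d[0] - x))
            return ny + abs(x - nx)

        x_new = max(pool, key=score)               # most promising or uncertain
        data.append((x_new, conductivity(x_new)))  # "request a new simulation"
    return data
```

Each round either exploits (refines near known good candidates) or explores (samples far from any label), which is exactly the trade-off the active-learning scheme automates.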
    Story Source:
    Materials provided by Fritz Haber Institute of the Max Planck Society.

  • Ankle exoskeleton enables faster walking

    Being unable to walk quickly can be frustrating and problematic, but it is a common issue, especially as people age. Noting the pervasiveness of slower-than-desired walking, engineers at Stanford University have tested how well a prototype exoskeleton system they have developed — which attaches around the shin and into a running shoe — increased the self-selected walking speed of people in an experimental setting.
    The exoskeleton is externally powered by motors and controlled by an algorithm. When the researchers optimized it for speed, participants walked, on average, 42 percent faster than when they were wearing normal shoes and no exoskeleton. The results of this study were published April 20 in IEEE Transactions on Neural Systems and Rehabilitation Engineering.
    “We were hoping that we could increase walking speed with exoskeleton assistance, but we were really surprised to find such a large improvement,” said Steve Collins, associate professor of mechanical engineering at Stanford and senior author of the paper. “Forty percent is huge.”
    For this initial set of experiments, the participants were young, healthy adults. Given their impressive results, the researchers plan to run future tests with older adults and to look at other ways the exoskeleton design can be improved. They also hope to eventually create an exoskeleton that can work outside the lab, though that goal is still a ways off.
    “My research mission is to understand the science of biomechanics and motor control behind human locomotion and apply that to enhance the physical performance of humans in daily life,” said Seungmoon Song, a postdoctoral fellow in mechanical engineering and lead author of the paper. “I think exoskeletons are very promising tools that could achieve that enhancement in physical quality of life.”
    Walking in the loop
    The ankle exoskeleton system tested in this research is an experimental emulator that serves as a testbed for trying out different designs. It has a frame that fastens around the upper shin and into an integrated running shoe that the participant wears. It is attached to large motors that sit beside the walking surface and pull a tether that runs up the length of the back of the exoskeleton. Controlled by an algorithm, the tether tugs the wearer’s heel upward, helping them point their toe down as they push off the ground.

  • Quantum steering for more precise measurements

    Quantum systems consisting of several particles can be used to measure magnetic or electric fields more precisely. A young physicist at the University of Basel has now proposed a new scheme for such measurements that uses a particular kind of correlation between quantum particles.
    In quantum information, the fictitious agents Alice and Bob are often used to illustrate complex communication tasks. In one such process, Alice can use entangled quantum particles such as photons to transmit or “teleport” a quantum state — unknown even to herself — to Bob, something that is not feasible using traditional communications.
    However, it has been unclear whether the team Alice-Bob can use similar quantum states for other things besides communication. A young physicist at the University of Basel has now shown how particular types of quantum states can be used to perform measurements with higher precision than quantum physics would ordinarily allow. The results have been published in the scientific journal Nature Communications.
    Quantum steering at a distance
    Together with researchers in Great Britain and France, Dr. Matteo Fadel, who works at the Physics Department of the University of Basel, has thought about how high-precision measurement tasks can be tackled with the help of so-called quantum steering.
    Quantum steering refers to the fact that, for certain two-particle quantum states, a measurement on the first particle allows more precise predictions about the outcomes of measurements on the second particle than quantum mechanics would permit from measurements on the second particle alone. It is just as if the measurement on the first particle had “steered” the state of the second one.
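One standard way to quantify this effect, the Reid criterion (a general steering witness, not something specific to this paper), uses inference variances: if Alice’s outcomes let her predict Bob’s position- and momentum-like observables with errors whose product beats the Heisenberg bound, the state is steerable:

```latex
\Delta_{\mathrm{inf}} X_B \,\Delta_{\mathrm{inf}} P_B < \tfrac{1}{2},
\qquad [X_B, P_B] = i \quad (\hbar = 1),
```

where \Delta_{\mathrm{inf}} X_B is the root-mean-square error of Alice’s best prediction of Bob’s X_B given her own measurement result.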

  • Machine learning model generates realistic seismic waveforms

    A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth.
    “To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma,” said Youzuo Lin, a computational scientist in Los Alamos National Laboratory’s Geophysics group and principal investigator of the project. “Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms.”
    Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets. In recent years, automatic detection methods based on machine learning have improved the accuracy and efficiency of data collection; however, the accuracy of those methods relies on access to a large amount of high-quality, labeled training data, often tens of thousands of records or more.
    To resolve this data dilemma, the research team developed SeismoGen based on a generative adversarial network (GAN), a type of deep generative model that can produce high-quality synthetic samples in multiple domains. In other words, deep generative models train machines to create new data that could pass as real.
    Once trained, the SeismoGen model is capable of producing realistic seismic waveforms of multiple labels. When applied to real Earth seismic datasets in Oklahoma, the team saw that data augmentation from SeismoGen-generated synthetic waveforms could be used to improve earthquake detection algorithms in instances when only small amounts of labeled training data are available.
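The augmentation step described above can be sketched as follows. A stub function stands in for the trained generator (the real SeismoGen is a conditional GAN; here an earthquake trace is faked as a decaying sinusoid and background as low-amplitude noise), and the small real labeled set is padded with synthetic, labeled waveforms before a detector is trained. All names and signal shapes are illustrative assumptions.

```python
import math
import random

def stub_generator(label, n, rng):
    # hypothetical stand-in for a trained SeismoGen-style generator:
    # a decaying sinusoid for the "earthquake" class, low-amplitude
    # noise for the background class (100-sample toy traces)
    out = []
    for _ in range(n):
        if label == "earthquake":
            w = [math.exp(-t / 20) * math.sin(t) + rng.gauss(0, 0.05)
                 for t in range(100)]
        else:
            w = [rng.gauss(0, 0.05) for _ in range(100)]
        out.append((w, label))
    return out

def augment(real_labeled, synth_per_class=50, seed=0):
    # pad a small labeled set with synthetic waveforms before training
    # a detector, mirroring the augmentation step described in the text
    rng = random.Random(seed)
    data = list(real_labeled)
    for label in ("earthquake", "noise"):
        data += stub_generator(label, synth_per_class, rng)
    return data
```

The point of the design is that labels come for free with the synthetic traces, which is exactly what a detector starved of labeled training data needs.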
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.

  • Artificial intelligence model predicts which key of the immune system opens the locks of coronavirus

    With an artificial intelligence (AI) method developed by researchers at Aalto University and the University of Helsinki, researchers can now link immune cells to their targets and, for example, determine which white blood cells recognize SARS-CoV-2. The tool has broad applications in understanding the function of the immune system in infections, autoimmune disorders, and cancer.
    The human immune defense is based on the ability of white blood cells to accurately identify disease-causing pathogens and to initiate a defense reaction against them. The immune defense is able to recall the pathogens it has encountered previously; this, for example, is what the effectiveness of vaccines is based on. The immune defense is thus the most accurate patient record system, carrying a history of all the pathogens an individual has faced. This information, however, has previously been difficult to obtain from patient samples.
    The learning immune system can be roughly divided into two parts, of which B cells are responsible for producing antibodies against pathogens, while T cells are responsible for destroying their targets. The measurement of antibodies by traditional laboratory methods is relatively simple, which is why antibodies already have several uses in healthcare.
    “Although it is known that the role of T cells in the defense response against for example viruses and cancer is essential, identifying the targets of T cells has been difficult despite extensive research,” says Satu Mustjoki, Professor of Translational Hematology.
    AI helps to identify new key-lock pairs
    T cells identify their targets on a key-and-lock principle, where the key is the T cell receptor on the surface of the T cell and the lock is the protein presented on the surface of an infected cell. An individual is estimated to carry more distinct T cell keys than there are stars in the Milky Way, making the mapping of T cell targets with laboratory techniques cumbersome.

  • Scientists glimpse signs of a puzzling state of matter in a superconductor

    Unconventional superconductors contain a number of exotic phases of matter that are thought to play a role, for better or worse, in their ability to conduct electricity with 100% efficiency at much higher temperatures than scientists had thought possible — although still far short of the temperatures that would allow their wide deployment in perfectly efficient power lines, maglev trains and so on.
    Now scientists at the Department of Energy’s SLAC National Accelerator Laboratory have glimpsed the signature of one of those phases, known as pair-density waves or PDW, and confirmed that it’s intertwined with another phase known as charge density wave (CDW) stripes — wavelike patterns of higher and lower electron density in the material.
    Observing and understanding PDW and its correlations with other phases may be essential for understanding how superconductivity emerges in these materials, allowing electrons to pair up and travel with no resistance, said Jun-Sik Lee, a SLAC staff scientist who led the research at the lab’s Stanford Synchrotron Radiation Lightsource (SSRL).
    Even indirect evidence of the PDW phase intertwined with charge stripes, he said, is an important step on the long road toward understanding the mechanism behind unconventional superconductivity, which has eluded scientists over more than 30 years of research.
    Lee added that the method his team used to make this observation, which involved dramatically increasing the sensitivity of a standard X-ray technique known as resonant soft X-ray scattering (RSXS) so it could see the extremely faint signals given off by these phenomena, could allow direct observation of both the PDW signature and its correlations with other phases in future experiments. That’s what they plan to work on next.
    The scientists described their findings today in Physical Review Letters.