More stories

  • E-waste recycling emits emerging synthetic antioxidants

    Manufacturers add synthetic antioxidants to plastics, rubbers and other polymers to make them last longer. However, the health effects of these compounds, and how readily they migrate into the environment, are largely unknown. Now, researchers reporting in ACS’ Environmental Science & Technology Letters have detected a broad range of emerging synthetic antioxidants, called hindered phenol and sulfur antioxidants, in dust from electronic waste (e-waste) recycling workshops, possibly posing risks for the workers inside.
    Previous studies revealed widespread environmental pollution and human exposure to a class of compounds called low-molecular-weight synthetic phenolic antioxidants. In lab experiments, some of these compounds were toxic to rodents or human cells. Recently, manufacturers introduced a class of high-molecular-weight synthetic phenolic antioxidants, also known as hindered phenol antioxidants (HPAs), with improved performance and slower migration from products. In addition to HPAs, compounds called sulfur antioxidants (SAs) are often added to rubber and plastic polymers as “helper” antioxidants. The toxicological effects and environmental occurrence of most of these new compounds are unknown. Therefore, Lixi Zeng and colleagues wanted to investigate the occurrence of emerging HPAs and SAs in dust from e-waste recycling centers — workshops where large amounts of discarded electronics, such as laptop computers, cell phones, tablets, wires and cables are dismantled and processed.
    In August 2020, the researchers collected 45 dust samples from three categories of e-waste recycling workshops in an industrial park in Yichun City, China: wire and cable dismantling, electronic plastic processing, and general e-waste dismantling. Then, they used liquid chromatography/tandem mass spectrometry to screen for 18 emerging HPAs and 6 emerging SAs. All 24 compounds were detected in the dust: 22 for the first time, and some at relatively high levels compared with other e-waste pollutants. Although dust concentrations of SAs were similar for the different categories of workshops, centers that dismantled wires and cables and processed electronic plastics had significantly higher levels of dust HPAs than those that dismantled general e-waste. Given the ubiquitous occurrence of emerging HPAs and SAs in e-waste dust, further research is needed on their environmental behaviors, fates, toxicities and risks, the researchers say.
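    The kind of per-category comparison described above can be sketched in a few lines. The concentrations and the detection limit below are invented placeholders, not the study's measurements; only the analysis shape (median concentration and detection frequency per workshop category) mirrors the text.

```python
from statistics import median

# Hypothetical dust concentrations (ng/g) of one HPA per workshop category.
# These numbers are illustrative only, NOT values from the study.
samples = {
    "wire_and_cable": [820.0, 1150.0, 990.0],
    "electronic_plastic": [760.0, 1040.0, 880.0],
    "general_dismantling": [210.0, 340.0, 280.0],
}

LIMIT = 1.0  # ng/g, an assumed method detection limit

def detection_frequency(values, limit=LIMIT):
    """Fraction of samples in which the compound was detected."""
    return sum(v >= limit for v in values) / len(values)

for category, values in samples.items():
    print(category, "median:", median(values),
          "detected:", f"{detection_frequency(values):.0%}")
```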
    The authors acknowledge funding from the National Natural Science Foundation of China, the Guangdong Special Support Program, the Guangdong (China) Innovative and Entrepreneurial Research Team Program, the Special Fund Project for Science and Technology Innovation Strategy of Guangdong Province and the Fundamental Research Funds for the Central Universities.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Creating the human-robotic dream team

    Using autonomous vehicle guidelines, a team of UBC Okanagan researchers has developed a system to improve interactions between people and robots.
    The way people interact safely with robots is at the forefront of today’s research related to automation and manufacturing, explains Debasita Mukherjee, a doctoral student and lead author of a recently published study. She is one of several researchers at UBC’s Advanced Control and Intelligent Systems Laboratory who are working to develop systems that allow humans and robots to interact safely and efficiently.
    “It is incredibly important for robots in manufacturing to perform their tasks in the safest and most efficient method possible,” Mukherjee says. “In order to make these automated machines as smart as possible, we are developing systems that perceive their environments and carry out tasks in a similar manner as their human partners.”
    To develop such systems, researchers are using artificial intelligence and machine learning to help guide the machines. Mechanical Engineering Professor Homayoun Najjaran says the process is not as straightforward as it seems.
    “Robots don’t think or feel, so they need systems that capture and analyze their environment enabling them to respond,” says Dr. Najjaran. “Often those responses need to be in hundredths of a second to ensure the safety of humans in their vicinity.”
    Traditionally, industrial robots have been fixed and programmed to operate at high speeds and perform tasks such as welding, painting, assembly, pick-and-place and material handling. Social robots, on the other hand, are built to assist people in service industries. They are typically mobile, lightweight and programmed to work in a variety of environments.

  • Cancer-spotting AI and human experts can be fooled by image-tampering attacks

    Artificial intelligence (AI) models that evaluate medical images have potential to speed up and improve accuracy of cancer diagnoses, but they may also be vulnerable to cyberattacks. In a new study, University of Pittsburgh researchers simulated an attack that falsified mammogram images, fooling both an AI breast cancer diagnosis model and human breast imaging radiologist experts.
    The study, published today in Nature Communications, brings attention to a potential safety issue for medical AI known as “adversarial attacks,” which seek to alter images or other inputs to make models arrive at incorrect conclusions.
    “What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” said senior author Shandong Wu, Ph.D., associate professor of radiology, biomedical informatics and bioengineering at Pitt. “By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
    AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have U.S. Food and Drug Administration (FDA) approval. According to Wu, these tools can rapidly screen mammogram images and identify those most likely to be cancerous, helping radiologists be more efficient and accurate.
    But such technologies are also at risk from cyberthreats, such as adversarial attacks. Potential motivations for such attacks include insurance fraud from health care providers looking to boost revenue or companies trying to adjust clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that change the AI’s decision, but are imperceptible to the human eye, to more sophisticated versions that target sensitive contents of the image, such as cancerous regions — making them more likely to fool a human.
    To understand how AI would behave under this more complex type of adversarial attack, Wu and his team used mammogram images to develop a model for detecting breast cancer. First, the researchers trained a deep learning algorithm to distinguish cancerous and benign cases with more than 80% accuracy. Next, they developed a so-called “generative adversarial network” (GAN) — a computer program that generates false images by inserting or removing cancerous regions from negative or positive images, respectively, and then they tested how the model classified these adversarial images.
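    The evaluation logic can be illustrated with a deliberately tiny stand-in: a toy "classifier" that flags an image as positive when it contains a bright region, and a toy "attack" that inserts or removes that region, mirroring the GAN's insert/remove idea. Nothing here is the study's actual model; it is a minimal sketch of how tampering flips predictions.

```python
def classify(image):
    """Toy classifier: positive (cancer) if any pixel is brighter than 0.8."""
    return any(pixel > 0.8 for row in image for pixel in row)

def attack(image, label_is_positive):
    """Toy tampering: remove the bright region from positives,
    insert one into negatives (stand-in for the GAN's edits)."""
    if label_is_positive:
        return [[min(pixel, 0.5) for pixel in row] for row in image]
    tampered = [row[:] for row in image]
    tampered[0][0] = 0.9
    return tampered

positive = [[0.1, 0.9], [0.2, 0.1]]   # contains a bright "lesion"
negative = [[0.1, 0.2], [0.3, 0.1]]

assert classify(positive) and not classify(negative)
assert not classify(attack(positive, True))   # lesion removed: cancer missed
assert classify(attack(negative, False))      # lesion inserted: false alarm
```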

  • First optical oscilloscope

    A team from UCF has developed the world’s first optical oscilloscope, an instrument that is able to measure the electric field of light. The device converts light oscillations into electrical signals, much like hospital monitors convert a patient’s heartbeat into electrical oscillations.
    Until now, reading the electric field of light has been a challenge because of the high speeds at which light waves oscillate. The most advanced techniques, which power our phone and internet communications, can currently clock electric fields at up to gigahertz frequencies — covering the radio frequency and microwave regions of the electromagnetic spectrum. Light waves oscillate at much higher rates, allowing a higher density of information to be transmitted. However, the current tools for measuring light fields could resolve only an average signal associated with a ‘pulse’ of light, and not the peaks and valleys within the pulse. Measuring those peaks and valleys within a single pulse is important because it is in that space that information can be packed and delivered.
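    The averaging problem described above can be demonstrated numerically: two few-cycle pulses with different internal peak-and-valley structure (different phases) produce nearly identical averaged intensities, so a slow detector cannot tell them apart. The pulse parameters below are arbitrary illustrative values, not the experiment's.

```python
import math

def field(t, phase):
    """A few-cycle pulse: Gaussian envelope times a fast oscillation."""
    envelope = math.exp(-t * t)
    return envelope * math.cos(10 * t + phase)

ts = [i * 0.01 - 3.0 for i in range(601)]  # sampling grid over the pulse

def averaged_intensity(phase):
    """What a slow detector sees: the field squared, averaged over the pulse."""
    return sum(field(t, phase) ** 2 for t in ts) / len(ts)

# The two pulses differ strongly at the pulse center...
print(field(0.0, 0.0), field(0.0, math.pi / 2))   # about 1.0 vs about 0.0
# ...yet their averaged signals are nearly indistinguishable:
print(averaged_intensity(0.0), averaged_intensity(math.pi / 2))
```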
    “Fiber optic communications have taken advantage of light to make things faster, but we are still functionally limited by the speed of the oscilloscope,” says Physics Associate Professor Michael Chini, who worked on the research at UCF. “Our optical oscilloscope may be able to increase that speed by a factor of about 10,000.”
    The team’s findings are published in this week’s Nature Photonics journal.
    The team developed the device and demonstrated its capability for real-time measurement of the electric fields of individual laser pulses in Chini’s lab at UCF. The next step for the team is to see how far they can push the speed limits of the technique.
    The lead author of the paper is UCF postdoctoral scholar Yangyang Liu. Other authors include physics alums Jonathan Nesper ’19 ’21MS, who earned his bachelor’s in math and master’s in physics; Shima Gholam-Mirzaei ’18MS ’20PhD; and John E. Beetar ’15 ’17MS ’20PhD.
    Gholam-Mirzaei is now a postdoctoral researcher at the Joint Attosecond Science Laboratory at the National Research Council of Canada and University of Ottawa, and Beetar is completing a postdoc at the University of California at Berkeley.
    Chini had the idea for the single-shot waveform measurement scheme and oversaw the research team. Liu led the experimental effort and performed most of the measurements and simulations. Beetar assisted with the measurements of the carrier-envelope phase dependence. Nesper and Gholam-Mirzaei assisted with the construction of the experimental setup and with the data collection. All authors contributed to the data analysis and wrote the journal article.
    The work was supported primarily through a grant from the Air Force Office of Scientific Research under Award No. FA9550-20-1-0284, while Gholam-Mirzaei was supported by the Army Research Office under Award No. W911NF-19-1-0211.
    Story Source:
    Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala. Note: Content may be edited for style and length.

  • Losing isn’t always bad: Gaining topology from loss

    Losing particles can lead to positive, robust effects.
    An international collaboration has demonstrated a novel topology arising from losses in hybrid light-matter particles, introducing a new avenue to induce the highly prized effects inherent to conventional topological materials, which can potentially revolutionise electronics.
    Led by Singapore’s Nanyang Technological University (NTU) and the Australian National University (ANU), the study represents the first experimental observation of a non-Hermitian topological invariant in a semiconductor in the strong light-matter coupling regime supporting formation of exciton-polaritons.
    Losing Is Not Always Losing
    Losses, such as friction or electrical resistance, are ubiquitous in nature, but are seen as detrimental to devices.
    In electronics, for example, resistance leads to heating and limits computing efficiency.

  • Technique speeds up thermal actuation for soft robotics

    Researchers from North Carolina State University have come up with a new design for thermal actuators, which can be used to create rapid movement in soft robotic devices.
    “Using thermal actuation is not new for soft robots, but the biggest challenge for soft thermal actuators was that they were relatively slow — and we’ve made them fast,” says Yong Zhu, corresponding author of the paper and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State.
    Actuators are the parts of a device — such as a soft robot — that create motion by converting energy into work.
    “What makes this new actuator design work is a structure with a bi-stable design,” says Shuang Wu, first author of the paper and a Ph.D. student at NC State. “Think of a snap hair clip. It’s stable until you apply a certain amount of energy (by bending it over), and then it snaps into a different shape — which is also stable.”
    In the case of the new thermal actuator, the material is bi-stable, but which shape the material prefers is dictated by temperature.
    Here’s how that works. The researchers layer two materials on top of each other, with silver nanowires in the middle. The two materials have different coefficients of thermal expansion, which means they expand at different rates as they heat up. In practical terms, that means the structure bends when you heat it.
    This layered material is then shaped into a design that gives it a default curvature in one direction — let’s say that it curves downward. When voltage is applied to the silver nanowires, the material heats up, making it bend in the other direction. Once you reach a certain temperature — the critical temperature — the material snaps into its new default shape, curving up rapidly. When the voltage is removed, the temperature goes back down. Once it cools past another critical temperature, the material snaps back to its previous default shape, curving down rapidly. Note that the two critical temperatures are different; the first one is higher. By applying current to the nanowires in a regular pattern, you can make the material snap back and forth.
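    The snap-through behavior above amounts to a hysteresis loop: the actuator's state depends not only on the current temperature but on which direction the temperature is moving. A minimal sketch, with the two critical temperatures invented for illustration (only the bi-stable logic mirrors the actuator):

```python
T_SNAP_UP = 80.0    # heating past this: snaps from "down" to "up" (assumed value)
T_SNAP_DOWN = 40.0  # cooling below this: snaps back to "down" (assumed value)

def simulate(temperatures, state="down"):
    """Track the bi-stable actuator state over a temperature trace."""
    states = []
    for t in temperatures:
        if state == "down" and t >= T_SNAP_UP:
            state = "up"      # rapid snap-through on heating
        elif state == "up" and t <= T_SNAP_DOWN:
            state = "down"    # rapid snap-back on cooling
        states.append(state)
    return states

# Heat from 20 to 90, then cool back to 30: at 50 degrees the actuator is
# "down" on the way up but "up" on the way down -- that gap is the hysteresis.
print(simulate([20, 50, 90, 70, 50, 30]))
```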
    To demonstrate the technique, the researchers created two prototypes. One of the prototypes emulates the snapping behavior of a Venus flytrap, while the other is a “crawler” capable of moving more than one body length per second.
    “Potential applications range from biomedical applications to prosthetic devices to high-end manufacturing,” Zhu says. “Any application in which you’d want to be able to move quickly, but also want to avoid rigid materials and conventional robotics.”
    Next steps include developing sensor and control mechanisms that could more fully automate the actuation process, allowing it to operate more efficiently than purely manual controls.
    “We’re also interested in exploring other possible materials, so that we could fine-tune the thermal and mechanical properties,” Zhu says. “This could allow us to tailor both actuator speed and force.”
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length. More

  • Gunfire or plastic bag popping? Trained computer can tell the difference

    According to the Gun Violence Archive, there have been 296 mass shootings in the United States this year. Sadly, 2021 is on pace to be America’s deadliest year of gun violence in the last two decades.
    Discerning between a dangerous audio event like a gun firing and a non-life-threatening event, such as a plastic bag bursting, can mean the difference between life and death. It can also determine whether or not to deploy public safety workers. Humans and computers alike often confuse the sound of a plastic bag popping with real gunshots.
    Over the past few years, there has been a degree of hesitation over the implementation of some of the well-known available acoustic gunshot detector systems since they can be costly and often unreliable.
    In an experimental study, researchers from Florida Atlantic University’s College of Engineering and Computer Science focused on addressing the reliability of these detection systems as it relates to the false positive rate. The ability to correctly discern sounds, even in the subtlest of scenarios, is what separates a well-trained model from an ineffective one.
    With the daunting task of accounting for all sounds that are similar to a gunshot, the researchers created a new dataset comprising audio recordings of plastic bag explosions collected across a variety of environments and conditions, such as plastic bag size and distance from the recording microphones. The audio clips ranged from 400 to 600 milliseconds in duration.
    Researchers also developed a classification algorithm based on a convolutional neural network (CNN), as a baseline, to illustrate the relevance of this data collection effort. The data was then used, together with a gunshot sound dataset, to train a classification model based on a CNN to differentiate life-threatening gunshot events from non-life-threatening plastic bag explosion events.
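    A full CNN is beyond a short sketch, but its core operation, a 1D convolution slid over an audio frame, can be shown directly. The kernel and signal values below are illustrative, not from the study's model: an onset-detecting kernel responds strongly to the sharp attack of an impulsive sound and weakly to a smoothly rising one.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as CNN layers compute it)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

# Toy waveforms: an impulsive sound (sharp onset, like a gunshot or bag pop)
# versus a smoothly rising one. Values are made up for illustration.
impulsive = [0.0, 0.0, 1.0, 0.9, 0.2, 0.1]
smooth = [0.0, 0.2, 0.4, 0.5, 0.4, 0.3]

# A difference kernel: large output where the signal jumps between samples.
onset_kernel = [-1.0, 1.0]

print(max(conv1d(impulsive, onset_kernel)))  # strong response to the sharp onset
print(max(conv1d(smooth, onset_kernel)))     # weak response to the gradual rise
```

    In a real CNN these kernel weights are learned from labeled gunshot and plastic-bag recordings rather than hand-picked, and many such filters are stacked with nonlinearities and pooling.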

  • How organic neuromorphic electronics can think and act

    The processor is the brain of a computer — an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived — for example, via the eyes, ears or sense of touch — is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.
    Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven, Stanford, Brescia, Oxford and KAUST.
    “We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.
    To achieve navigation of the robot through the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path through the maze toward the exit is indicated visually at each intersection. Initially, the robot often misinterprets the visual signs, makes the wrong turning decisions at the intersections and loses its way. When the robot follows a wrong, dead-end path, it is discouraged from repeating that decision by corrective stimuli: when the robot hits a wall, for example, an electrical signal from a touch sensor attached to the robot is applied directly to the organic circuit. With each subsequent run of the experiment, the robot gradually learns to make the right turning decisions at the intersections, i.e., to avoid receiving corrective stimuli, and after a few trials it finds its way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
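    The learning loop described above can be sketched in software: keep one adjustable turning preference per intersection and flip it whenever a corrective stimulus (a dead end) is received. The maze layout and encoding below are invented for illustration; in the actual work this adaptation happens in an organic neuromorphic circuit, not in code.

```python
CORRECT_TURNS = ["left", "right", "left"]  # hypothetical maze solution

def run_trial(preferences):
    """Follow current preferences; return the index of the first wrong turn
    (where the robot hits a dead end and gets a corrective stimulus), or None."""
    for i, (choice, correct) in enumerate(zip(preferences, CORRECT_TURNS)):
        if choice != correct:
            return i
    return None

def learn(preferences):
    """Repeat trials, flipping the decision at each failed intersection,
    until the robot traverses the maze without a corrective stimulus."""
    trials = 0
    while (mistake := run_trial(preferences)) is not None:
        preferences[mistake] = "left" if preferences[mistake] == "right" else "right"
        trials += 1
    return trials

prefs = ["right", "right", "right"]  # the robot starts with wrong habits
print(learn(prefs), prefs)           # trials needed, and the learned route
```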
    “We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” says Gkoupidenis.
    Story Source:
    Materials provided by Max Planck Institute for Polymer Research. Note: Content may be edited for style and length.