More stories

  • Soft semiconductors that stretch like human skin can detect ultra-low light levels

    Semiconductors are moving away from rigid substrates, which are cut or formed into thin discs or wafers, to more flexible plastic materials and even paper, thanks to new material and fabrication discoveries. The trend toward more flexible substrates has led to the fabrication of numerous devices, from light-emitting diodes to solar cells and transistors.
    Georgia Tech researchers have created a material that acts like a second skin and can stretch up to 200% of its original dimensions without significant loss of its electric current. The researchers say the soft, flexible photodetectors could enhance the utility of wearable medical sensors and implantable devices, among other applications. The research will be published on Dec. 15 in the journal Science Advances.
    Georgia Tech researchers from both mechanical and computing engineering labs collaborated over three years to demonstrate a new level of stretchability for a photodetector, a device made from a synthetic polymer and an elastomer that absorbs light to produce an electrical current.
    Photodetectors convert light signals into electrical ones and are commonly used in wearable electronics for health monitoring, such as rigid fingertip pulse oximeters.
    Stretchable like a Rubber Band
    Given that conventional flexible semiconductors break under a few percent of strain, the Georgia Tech findings are “an order-of-magnitude improvement,” said Olivier Pierron, professor in the George W. Woodruff School of Mechanical Engineering, whose lab measures the mechanical properties and reliability of flexible electronics under extreme conditions.

  • Exotic quantum particles — less magnetic field required

    Exotic quantum particles and phenomena are like the world’s most daring elite athletes. Like the free solo climbers who scale impossibly steep cliff faces without a rope or harness, only the most extreme conditions will entice them to show up. For exotic phenomena like superconductivity or particles that carry a fraction of the charge of an electron, that means extremely low temperatures or extremely high magnetic fields.
    But what if you could get these particles and phenomena to show up under less extreme conditions? Much has been made of the potential of room-temperature superconductivity, but generating exotic fractionally charged particles at low-to-zero magnetic field is equally important to the future of quantum materials and applications, including new types of quantum computing.
    Now, a team of researchers from Harvard University led by Amir Yacoby, Professor of Physics and of Applied Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Ashvin Vishwanath, Professor of Physics in the Department of Physics, in collaboration with Pablo Jarillo-Herrero at the Massachusetts Institute of Technology, have observed exotic fractional states at low magnetic field in twisted bilayer graphene for the first time.
    The research is published in Nature.
    “One of the holy grails in the field of condensed matter physics is getting exotic particles with low to zero magnetic field,” said Yacoby, senior author of the study. “There have been theoretical predictions that we should be able to see these bizarre particles with low to zero magnetic field, but no one has been able to observe it until now.”
    The researchers were interested in specific exotic quantum states known as fractional Chern insulators. Chern insulators are topological insulators, meaning they conduct electricity on their surface or edge, but not in the middle.
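    The integer that distinguishes a Chern insulator from an ordinary one, the Chern number, can be computed numerically for any gapped lattice model. As a generic illustration of what such a topological invariant means (using the standard Qi-Wu-Zhang toy model and the Fukui-Hatsugai link-variable method, not a model of the graphene system in the study):

```python
import numpy as np

# Numerical Chern number of a two-band toy model (Qi-Wu-Zhang), via the
# Fukui-Hatsugai link-variable method on a discretized Brillouin zone.
# Illustrative only; not the twisted bilayer graphene system of the study.
def hamiltonian(kx, ky, m=-1.0):
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], complex)
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (m + np.cos(kx) + np.cos(ky)) * sz)

N = 40
ks = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Lowest-band eigenvector at every point of the k-grid
u = np.empty((N, N, 2), complex)
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        u[i, j] = np.linalg.eigh(hamiltonian(kx, ky))[1][:, 0]

def link(a, b):
    z = np.vdot(a, b)        # overlap between neighboring eigenvectors
    return z / abs(z)        # keep only its U(1) phase

# Berry flux through each plaquette, summed over the whole zone
C = 0.0
for i in range(N):
    for j in range(N):
        ii, jj = (i + 1) % N, (j + 1) % N
        F = np.angle(link(u[i, j], u[ii, j]) * link(u[ii, j], u[ii, jj])
                     * link(u[ii, jj], u[i, jj]) * link(u[i, jj], u[i, j]))
        C += F
C /= 2 * np.pi
print(round(C))  # +1 or -1 (convention-dependent): a topologically nontrivial band
```

    The method is gauge invariant by construction: only products of overlaps around closed plaquettes enter, so the arbitrary phase of each eigenvector cancels. A fractional Chern insulator generalizes this picture, with interactions producing a Hall response that is a fraction of such an integer.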

  • Quantum theory needs complex numbers

    Physicists construct theories to describe nature. Let us explain it through an analogy with something that we can do in our everyday life, like going on a hike in the mountains. To avoid getting lost, we generally use a map. The map is a representation of the mountain, with its houses, rivers, paths, etc. By using it, it is rather easy to find our way to the top of the mountain. But the map is not the mountain. The map constitutes the theory we use to represent the mountain’s reality.
    Physical theories are expressed in terms of mathematical objects, such as equations, integrals or derivatives. Over the course of history, physical theories have evolved, making use of ever more elaborate mathematical concepts to describe ever more complicated phenomena. Introduced in the early 20th century to represent the microscopic world, quantum theory was a game changer. Among the many drastic changes it brought, it was the first theory phrased in terms of complex numbers.
    Invented by mathematicians centuries ago, complex numbers are made of a real and an imaginary part. It was Descartes, the famous philosopher considered the father of rational sciences, who coined the term “imaginary,” to contrast it strongly with what he called “real” numbers. Despite their fundamental role in mathematics, complex numbers were not expected to play a similar role in physics because of this imaginary part. And in fact, before quantum theory, Newton’s mechanics and Maxwell’s electromagnetism used real numbers to describe, say, how objects move and how electromagnetic fields propagate. The theories sometimes employ complex numbers to simplify some calculations, but their axioms only make use of real numbers.
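    The practical difference is easiest to see in interference, where quantum theory adds complex amplitudes before squaring them. A minimal numerical illustration (not taken from the paper):

```python
import numpy as np

# Two paths with equal magnitude but opposite phase. Quantum theory adds
# the complex amplitudes first and squares afterwards, so the relative
# phase produces interference that adding real, positive probabilities
# path by path cannot reproduce.
a = np.exp(1j * 0.0) / np.sqrt(2)     # amplitude of path 1
b = np.exp(1j * np.pi) / np.sqrt(2)   # amplitude of path 2, phase pi

p_interference = abs(a + b) ** 2        # add amplitudes, then square
p_no_phase = abs(a) ** 2 + abs(b) ** 2  # square each path separately

print(p_interference, p_no_phase)  # ~0.0 vs ~1.0: the phase matters
```

    The imaginary part is what carries the phase, which is why it is so hard to banish from the formalism.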
    Schrödinger’s bewilderment
    Quantum theory radically challenged this state of affairs because its building postulates were phrased in terms of complex numbers. The new theory, even if very useful for predicting the results of experiments (it perfectly explains the hydrogen atom’s energy levels, for instance), went against the intuition in favor of real numbers. Looking for a description of electrons, Schrödinger was the first to introduce complex numbers in quantum theory through his famous equation. However, he could not conceive that complex numbers could actually be necessary in physics at that fundamental level. It was as though he had found a map to represent the mountains, but this map was actually made of abstract and non-intuitive drawings. Such was his bewilderment that he wrote a letter to Lorentz on June 6, 1926, stating: “What is unpleasant here, and indeed directly to be objected to, is the use of complex numbers. ψ is surely fundamentally a real function.” Several decades later, in 1960, Prof. E.C.G. Stueckelberg, from the University of Geneva, demonstrated that all predictions of quantum theory for single-particle experiments could equally be derived using only real numbers. Since then, the consensus has been that complex numbers in quantum theory are only a convenient tool.
    However, in a recent study published in Nature, ICFO researchers Marc-Olivier Renou and ICREA Prof. Antonio Acín, in collaboration with Prof. Nicolas Gisin from the University of Geneva and the Schaffhausen Institute of Technology, Armin Tavakoli from the Vienna University of Technology, and David Trillo, Mirjam Weilenmann and Thinh P. Le, led by Prof. Miguel Navascués at the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences in Vienna, have proven that if the quantum postulates were phrased in terms of real numbers instead of complex ones, then some predictions about quantum networks would necessarily differ. Indeed, the team came up with a concrete experimental proposal, involving three parties connected by two sources of particles, for which the prediction of standard complex quantum theory cannot be reproduced by its real counterpart.
    Two sources and three nodes
    To do this, they thought of a specific scenario that involves two independent sources (S and R), placed between three measurement nodes (A, B and C) in an elementary quantum network. The source S emits two particles, say photons, one to A, and the second to B. The two photons are prepared in an entangled state, say in polarization. That is, they have correlated polarization in a way which is allowed by (both complex and real) quantum theory but impossible classically. The source R does exactly the same, emits two other photons prepared in an entangled state and sends them to B and to C, respectively. The key point in this study was to find the appropriate way to measure these four photons in the nodes A, B, C in order to obtain predictions which cannot be explained when quantum theory is restricted to real numbers.
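    The global state of such a network is easy to write down. A minimal sketch (illustrative only; the actual measurements that separate real from complex quantum theory are far subtler than this simple correlation check):

```python
import numpy as np

# Sketch of the network's global state: source S sends an entangled
# photon pair to nodes A and B, source R sends another pair to B and C.
# Because the sources are independent, the four-photon state is a
# tensor product of two Bell pairs.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)   # |Phi+> = (|00> + |11>)/sqrt(2)

state = np.kron(bell, bell)          # qubit order: A, B (from S), B (from R), C

# Polarization correlation <Z x Z> on the pair emitted by source S:
Z = np.diag([1.0, -1.0])
ZZII = np.kron(np.kron(Z, Z), np.eye(4))
corr = state @ ZZII @ state
print(corr)  # ~1.0: perfectly correlated polarizations, as entanglement demands
```

    The independence of the two sources is what makes the tensor-product structure, and hence the stronger-than-Bell constraints, possible.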
    As ICFO researcher Marc-Olivier Renou comments, “When we found this result, the challenge was to see if our thought experiment could be done with current technologies. After discussing with colleagues in Shenzhen, China, we found a way to adapt our protocol to make it feasible with their state-of-the-art devices. And, as expected, the experimental results match the predictions!” This remarkable experiment, realized in collaboration with Zheng-Da Li, Ya-Li Mao, Hu Chen, Lixin Feng, Sheng-Jun Yang and Jingyun Fan from the Southern University of Science and Technology, and Zizhu Wang from the University of Electronic Science and Technology, is published at the same time as the Nature paper in Physical Review Letters.
    The results published in Nature can be seen as a generalization of Bell’s theorem, which provides a quantum experiment that cannot be explained by any local physics formalism. Bell’s experiment involves a single quantum source S that emits two photons prepared in an entangled state, one to A and the other to B. Here, in contrast, one needs two independent sources; this assumed independence is crucial and was carefully enforced in the experiment.
    The study also shows how striking the predictions can be when the concept of a quantum network is combined with Bell’s ideas. The tools developed to obtain this first result will surely allow physicists to achieve a better understanding of quantum theory, and may one day trigger the realization of so-far-unfathomable applications for the quantum internet.

  • E-waste recycling emits emerging synthetic antioxidants

    Manufacturers add synthetic antioxidants to plastics, rubbers and other polymers to make them last longer. However, the health effects of these compounds, and how readily they migrate into the environment, are largely unknown. Now, researchers reporting in ACS’ Environmental Science & Technology Letters have detected a broad range of emerging synthetic antioxidants, called hindered phenol and sulfur antioxidants, in dust from electronic waste (e-waste) recycling workshops, possibly posing risks for the workers inside.
    Previous studies revealed widespread environmental pollution and human exposure to a class of compounds called low-molecular-weight synthetic phenolic antioxidants. In lab experiments, some of these compounds were toxic to rodents or human cells. Recently, manufacturers introduced a class of high-molecular-weight synthetic phenolic antioxidants, also known as hindered phenol antioxidants (HPAs), with improved performance and slower migration from products. In addition to HPAs, compounds called sulfur antioxidants (SAs) are often added to rubber and plastic polymers as “helper” antioxidants. The toxicological effects and environmental occurrence of most of these new compounds are unknown. Therefore, Lixi Zeng and colleagues wanted to investigate the occurrence of emerging HPAs and SAs in dust from e-waste recycling centers — workshops where large amounts of discarded electronics, such as laptop computers, cell phones, tablets, wires and cables, are dismantled and processed.
    In August 2020, the researchers collected 45 dust samples from three categories of e-waste recycling workshops in an industrial park in Yichun City, China: wire and cable dismantling, electronic plastic processing, and general e-waste dismantling. Then, they used liquid chromatography/tandem mass spectrometry to screen for 18 emerging HPAs and 6 emerging SAs. All 24 compounds were detected in the dust: 22 for the first time, and some at relatively high levels compared with other e-waste pollutants. Although dust concentrations of SAs were similar for the different categories of workshops, centers that dismantled wires and cables and processed electronic plastics had significantly higher levels of dust HPAs than those that dismantled general e-wastes. Given the ubiquitous occurrence of emerging HPAs and SAs in e-waste dust, further research is needed on their environmental behaviors, fates, toxicities and risks, the researchers say.
    The authors acknowledge funding from the National Natural Science Foundation of China, the Guangdong Special Support Program, the Guangdong (China) Innovative and Entrepreneurial Research Team Program, the Special Fund Project for Science and Technology Innovation Strategy of Guangdong Province and the Fundamental Research Funds for the Central Universities.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Creating the human-robotic dream team

    Using autonomous vehicle guidelines, a team of UBC Okanagan researchers has developed a system to improve interactions between people and robots.
    The way people interact safely with robots is at the forefront of today’s research related to automation and manufacturing, explains Debasita Mukherjee, a doctoral student and lead author of a recently published study. She is one of several researchers at UBC’s Advanced Control and Intelligent Systems Laboratory who are working to develop systems that allow humans and robots to interact safely and efficiently.
    “It is incredibly important for robots in manufacturing to perform their tasks in the safest and most efficient method possible,” Mukherjee says. “In order to make these automated machines as smart as possible, we are developing systems that perceive their environments and carry out tasks in a similar manner as their human partners.”
    To develop such systems, researchers are using artificial intelligence and machine learning to help guide the machines. Mechanical Engineering Professor Homayoun Najjaran says the process is not as straightforward as it seems.
    “Robots don’t think or feel, so they need systems that capture and analyze their environment enabling them to respond,” says Dr. Najjaran. “Often those responses need to be in hundredths of a second to ensure the safety of humans in their vicinity.”
    Traditionally, industrial robots have been fixed and programmed to operate at high speeds and perform tasks such as welding, painting, assembly, pick-and-place and material handling. Social robots, on the other hand, are built to assist people in service industries. They are typically mobile, lightweight and programmed to work in a variety of environments.

  • Cancer-spotting AI and human experts can be fooled by image-tampering attacks

    Artificial intelligence (AI) models that evaluate medical images have potential to speed up and improve accuracy of cancer diagnoses, but they may also be vulnerable to cyberattacks. In a new study, University of Pittsburgh researchers simulated an attack that falsified mammogram images, fooling both an AI breast cancer diagnosis model and human breast imaging radiologist experts.
    The study, published today in Nature Communications, brings attention to a potential safety issue for medical AI known as “adversarial attacks,” which seek to alter images or other inputs to make models arrive at incorrect conclusions.
    “What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” said senior author Shandong Wu, Ph.D., associate professor of radiology, biomedical informatics and bioengineering at Pitt. “By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
    AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have U.S. Food and Drug Administration (FDA) approval. According to Wu, these tools can rapidly screen mammogram images and identify those most likely to be cancerous, helping radiologists be more efficient and accurate.
    But such technologies are also at risk from cyberthreats, such as adversarial attacks. Potential motivations for such attacks include insurance fraud from health care providers looking to boost revenue or companies trying to adjust clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that change the AI’s decision, but are imperceptible to the human eye, to more sophisticated versions that target sensitive contents of the image, such as cancerous regions — making them more likely to fool a human.
    To understand how AI would behave under this more complex type of adversarial attack, Wu and his team used mammogram images to develop a model for detecting breast cancer. First, the researchers trained a deep learning algorithm to distinguish cancerous and benign cases with more than 80% accuracy. Next, they developed a so-called “generative adversarial network” (GAN), a computer program that generates false images by inserting or removing cancerous regions from negative or positive images, respectively. They then tested how the model classified these adversarial images.
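    The study’s attack relies on a GAN trained on mammograms, which cannot be reproduced here. As a much simpler illustration of the core idea of an adversarial perturbation, a small targeted input change that flips a model’s decision, here is a sketch against a toy linear classifier (the weights, data and step rule are all invented for illustration):

```python
import numpy as np

# Toy illustration of an adversarial perturbation: a linear "classifier"
# labels an input by the sign of w @ x. A small uniform per-pixel nudge
# in the worst-case direction sign(w) is guaranteed to flip the decision.
# All numbers here are invented; the study itself used a GAN on mammograms.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of the toy classifier
x = rng.normal(size=100)   # a toy "image" (flattened pixels)

score = w @ x              # sign(score) is the predicted class

# Smallest uniform step that pushes the score just past zero:
eps = (abs(score) + 1.0) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

assert np.sign(w @ x_adv) != np.sign(score)  # decision flipped
print(f"per-pixel change needed: {eps:.3f}")
```

    The per-pixel change is a small fraction of the pixel scale, yet the label flips; gradient-based attacks on deep networks exploit the same sensitivity far more subtly.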

  • First optical oscilloscope

    A team from UCF has developed the world’s first optical oscilloscope, an instrument that is able to measure the electric field of light. The device converts light oscillations into electrical signals, much like hospital monitors convert a patient’s heartbeat into electrical oscillations.
    Until now, reading the electric field of light has been a challenge because of the high speeds at which light waves oscillate. The most advanced techniques, which power our phone and internet communications, can currently clock electric fields at up to gigahertz frequencies — covering the radio frequency and microwave regions of the electromagnetic spectrum. Light waves oscillate at much higher rates, allowing a higher density of information to be transmitted. However, current tools for measuring light fields can resolve only an average signal associated with a ‘pulse’ of light, not the peaks and valleys within the pulse. Measuring those peaks and valleys within a single pulse is important because it is in that space that information can be packed and delivered.
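    The mismatch between electronic sampling rates and optical frequencies can be made concrete with back-of-the-envelope numbers (illustrative values, not figures from the paper):

```python
# An 800 nm near-infrared wave oscillates at hundreds of THz, while even
# a very fast electronic oscilloscope samples at roughly 100 GSa/s.
# Nyquist requires more than 2 samples per cycle to resolve a waveform.
c = 3e8                    # speed of light, m/s
wavelength = 800e-9        # typical near-infrared laser wavelength, m
f_light = c / wavelength   # optical carrier frequency, Hz
f_sample = 100e9           # fast electronic sampling rate, samples/s

samples_per_cycle = f_sample / f_light
print(f"{f_light / 1e12:.0f} THz carrier, {samples_per_cycle:.4f} samples per cycle")
# 375 THz carrier, 0.0003 samples per cycle: far below the Nyquist minimum
```

    Hence the need for an optical, rather than electronic, front end to trace the field directly.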
    “Fiber optic communications have taken advantage of light to make things faster, but we are still functionally limited by the speed of the oscilloscope,” says Physics Associate Professor Michael Chini, who worked on the research at UCF. “Our optical oscilloscope may be able to increase that speed by a factor of about 10,000.”
    The team’s findings are published in this week’s Nature Photonics journal.
    The team developed the device and demonstrated its capability for real-time measurement of the electric fields of individual laser pulses in Chini’s lab at UCF. The next step for the team is to see how far they can push the speed limits of the technique.
    The lead author of the paper is UCF postdoctoral scholar Yangyang Liu. Other authors include physics alums Jonathan Nesper ’19 ’21MS, who earned his bachelor’s in math and master’s in physics; Shima Gholam-Mirzaei ’18MS ’20PhD; and John E. Beetar ’15 ’17MS ’20PhD.
    Gholam-Mirzaei is now a postdoctoral researcher at the Joint Attosecond Science Laboratory at the National Research Council of Canada and University of Ottawa and Beetar is completing a postdoc at the University of California at Berkeley.
    Chini had the idea for the single-shot waveform measurement scheme and oversaw the research team. Liu led the experimental effort and performed most of the measurements and simulations. Beetar assisted with the measurements of the carrier-envelope phase dependence. Nesper and Gholam-Mirzaei assisted with the construction of the experimental setup and with the data collection. All authors contributed to the data analysis and wrote the journal article.
    The work was supported primarily through a grant from the Air Force Office of Scientific Research under Award No. FA9550-20-1-0284, while Gholam-Mirzaei was supported by the Army Research Office under Award No. W911NF-19-1-0211.
    Story Source:
    Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala. Note: Content may be edited for style and length.

  • Losing isn’t always bad: Gaining topology from loss

    Losing particles can lead to positive, robust effects.
    An international collaboration has demonstrated a novel topology arising from losses in hybrid light-matter particles, introducing a new avenue to induce the highly prized effects inherent to conventional topological materials, which could potentially revolutionise electronics.
    Led by Singapore’s Nanyang Technological University (NTU) and the Australian National University (ANU), the study represents the first experimental observation of a non-Hermitian topological invariant in a semiconductor in the strong light-matter coupling regime supporting formation of exciton-polaritons.
    Losing Is Not Always Losing
    Losses, such as friction or electrical resistance, are ubiquitous in nature, but are seen as detrimental to devices.
    In electronics, for example, resistance leads to heating and limits computing efficiency.