More stories

  • A mathematical model may give more efficient climate talks

    Achieving consensus among countries in global climate negotiations is a long and complicated process. Researchers at Linköping University have developed a mathematical model that describes the achievement of the 2015 Paris Agreement and that may contribute to more efficient negotiations when striving for unanimity.
    Global climate targets have been in focus this autumn as world leaders met at COP26 in Glasgow. The intention was that countries would negotiate how to work together to keep the global temperature rise below two degrees Celsius, and preferably below 1.5 degrees.
    Climate agreements need unanimity, and achieving unanimity takes time, commitment, and a good organisational structure. The Paris Agreement in 2015, for example, was the result of complex diplomatic negotiations that took more than a decade to complete. This time aspect is something that researchers at Linköping University have examined in depth.
    “Our model investigates how the negotiations should be organised in order to achieve unanimity, and what factors can slow down or speed up convergence,” says Claudio Altafini, professor in the Division for Automatic Control, Department of Electrical Engineering, at Linköping University.
    The results have been published in the scientific journal Science Advances.
    Based on observed data
    The model is dynamical and based on observed data from the documents and minutes of nearly 300 UN-sponsored climate conferences held in the period 2001-2015, leading up to the Paris Agreement. The documents have enabled the LiU researchers to investigate, among other things, the pattern according to which countries participated and expressed their opinions over the sequence of meetings.
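    The paper's actual model is fitted to the negotiation data, but the basic mechanism it studies, opinions converging toward unanimity under repeated interaction, can be illustrated with a textbook DeGroot-style averaging model. The sketch below is purely illustrative; the influence weights, update rule and convergence threshold are assumptions, not the LiU model.

```python
import numpy as np

# Illustrative DeGroot-style consensus dynamics: each "country" repeatedly
# moves its position toward a weighted average of the positions it listens to.
# This is a standard textbook model, not the model fitted in the paper.

rng = np.random.default_rng(0)
n = 5                                    # number of negotiating parties
positions = rng.uniform(0, 1, size=n)    # initial negotiating positions

# Row-stochastic influence matrix: W[i, j] is the weight party i gives to party j.
W = rng.uniform(0, 1, size=(n, n))
W = W / W.sum(axis=1, keepdims=True)

for meeting in range(100):               # one update per meeting
    positions = W @ positions
    if positions.max() - positions.min() < 1e-3:
        print(f"Near-unanimity after {meeting + 1} meetings: {positions.round(3)}")
        break
```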

  • Soft semiconductors that stretch like human skin can detect ultra-low light levels

    Semiconductors are moving away from rigid substrates, which are cut or formed into thin discs or wafers, toward more flexible plastic materials and even paper, thanks to new material and fabrication discoveries. The trend toward more flexible substrates has led to the fabrication of numerous devices, from light-emitting diodes to solar cells and transistors.
    Georgia Tech researchers have created a material that acts like a second skin layer and can stretch by up to 200% of its original dimensions without significant loss of electrical current. The researchers say the soft, flexible photodetectors could enhance the utility of medical wearable sensors and implantable devices, among other applications. The research will be published on Dec. 15 in the journal Science Advances.
    Georgia Tech researchers from both mechanical and computing engineering labs collaborated over three years to demonstrate a new level of stretchability for a photodetector, a device made from a synthetic polymer and an elastomer that absorbs light to produce an electrical current.
    Photodetectors today are used as wearables for health monitoring, such as rigid fingertip pulse oximeters. They convert light signals into electrical ones and are commonly used in wearable electronics.
    Stretchable like a Rubber Band
    Given that conventional flexible semiconductors break at strains of just a few percent, the Georgia Tech findings are “an order-of-magnitude improvement,” said Olivier Pierron, professor in the George W. Woodruff School of Mechanical Engineering, whose lab measures the mechanical properties and reliability of flexible electronics under extreme conditions.

  • Exotic quantum particles — less magnetic field required

    Exotic quantum particles and phenomena are like the world’s most daring elite athletes. Like the free solo climbers who scale impossibly steep cliff faces without a rope or harness, only the most extreme conditions will entice them to show up. For exotic phenomena like superconductivity or particles that carry a fraction of the charge of an electron, that means extremely low temperatures or extremely high magnetic fields.
    But what if you could get these particles and phenomena to show up under less extreme conditions? Much has been made of the potential of room-temperature superconductivity, but generating exotic fractionally charged particles at low-to-zero magnetic field is equally important to the future of quantum materials and applications, including new types of quantum computing.
    Now, a team of researchers from Harvard University led by Amir Yacoby, Professor of Physics and of Applied Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Ashvin Vishwanath, Professor of Physics in the Department of Physics, in collaboration with Pablo Jarillo-Herrero at the Massachusetts Institute of Technology, have observed exotic fractional states at low magnetic field in twisted bilayer graphene for the first time.
    The research is published in Nature.
    “One of the holy grails in the field of condensed matter physics is getting exotic particles with low to zero magnetic field,” said Yacoby, senior author of the study. “There have been theoretical predictions that we should be able to see these bizarre particles with low to zero magnetic field, but no one has been able to observe it until now.”
    The researchers were interested in specific exotic quantum states known as fractional Chern insulators. Chern insulators are topological insulators, meaning they conduct electricity on their surface or edge, but not in the middle.

  • Quantum theory needs complex numbers

    Physicists construct theories to describe nature. Let us explain it through an analogy with something that we can do in our everyday life, like going on a hike in the mountains. To avoid getting lost, we generally use a map. The map is a representation of the mountain, with its houses, rivers, paths, etc. By using it, it is rather easy to find our way to the top of the mountain. But the map is not the mountain. The map constitutes the theory we use to represent the mountain’s reality.
    Physical theories are expressed in terms of mathematical objects, such as equations, integrals or derivatives. Over the course of history, physical theories have evolved, making use of ever more elaborate mathematical concepts to describe more complicated physical phenomena. Quantum theory, introduced in the early 20th century to describe the microscopic world, was a game changer. Among the many drastic changes it brought, it was the first theory phrased in terms of complex numbers.
    Invented by mathematicians centuries ago, complex numbers are made of a real and an imaginary part. It was Descartes, the famous philosopher regarded as the father of rational sciences, who coined the term “imaginary,” to contrast it sharply with what he called “real” numbers. Despite their fundamental role in mathematics, complex numbers were not expected to play a similar role in physics because of this imaginary part. And in fact, before quantum theory, Newton’s mechanics and Maxwell’s electromagnetism used real numbers to describe, say, how objects move and how electromagnetic fields propagate. These theories sometimes employ complex numbers to simplify some calculations, but their axioms make use of real numbers only.
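    For reference (standard mathematics, not part of the original article), a complex number combines a real part and an imaginary part:

```latex
z = a + b\,i, \qquad i^2 = -1, \qquad a, b \in \mathbb{R}.
```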
    Schrödinger’s bewilderment
    Quantum theory radically challenged this state of affairs because its founding postulates were phrased in terms of complex numbers. The new theory, although very useful for predicting the results of experiments and, for instance, perfectly explaining the energy levels of the hydrogen atom, went against the intuition in favor of real numbers. Looking for a description of electrons, Schrödinger was the first to introduce complex numbers into quantum theory through his famous equation. However, he could not conceive that complex numbers could actually be necessary in physics at that fundamental level. It was as though he had found a map to represent the mountains, but this map was actually made of abstract and non-intuitive drawings. Such was his bewilderment that he wrote a letter to Lorentz on June 6, 1926, stating: “What is unpleasant here, and indeed directly to be objected to, is the use of complex numbers. ψ is surely fundamentally a real function.” Several decades later, in 1960, Prof. E.C.G. Stueckelberg, from the University of Geneva, demonstrated that all predictions of quantum theory for single-particle experiments could equally be derived using only real numbers. Since then, the consensus has been that complex numbers in quantum theory are only a convenient tool.
    However, in a recent study published in Nature, ICFO researchers Marc-Olivier Renou and ICREA Prof. Antonio Acín, in collaboration with Prof. Nicolas Gisin from the University of Geneva and the Schaffhausen Institute of Technology, Armin Tavakoli from the Vienna University of Technology, and David Trillo, Mirjam Weilenmann and Thinh P. Le, led by Prof. Miguel Navascués, from the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences in Vienna, have proven that if the quantum postulates were phrased in terms of real numbers instead of complex ones, then some predictions about quantum networks would necessarily differ. Indeed, the team came up with a concrete experimental proposal, involving three parties connected by two sources of particles, for which the predictions of standard complex quantum theory cannot be reproduced by its real counterpart.
    Two sources and three nodes
    To do this, they thought of a specific scenario involving two independent sources (S and R) placed between three measurement nodes (A, B and C) in an elementary quantum network. The source S emits two particles, say photons, one to A and the second to B. The two photons are prepared in an entangled state, say in polarization: their polarizations are correlated in a way that is allowed by (both complex and real) quantum theory but impossible classically. The source R does exactly the same: it emits two other photons prepared in an entangled state and sends them to B and C, respectively. The key point of the study was to find the appropriate way to measure these four photons at the nodes A, B and C in order to obtain predictions that cannot be explained when quantum theory is restricted to real numbers.
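    For concreteness, one minimal way to write the global state of such a network, assuming for illustration that each source emits a maximally entangled polarization pair (the paper specifies its own states and measurement settings), is:

```latex
|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),
\qquad
|\Psi_{\mathrm{network}}\rangle = |\Phi^+\rangle_{A B_1} \otimes |\Phi^+\rangle_{B_2 C},
```

    where B_1 and B_2 denote the two photons arriving at node B.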
    As ICFO researcher Marc-Olivier Renou comments, “When we found this result, the challenge was to see if our thought experiment could be done with current technologies. After discussing with colleagues from Shenzhen, China, we found a way to adapt our protocol to make it feasible with their state-of-the-art devices. And, as expected, the experimental results match the predictions!” This remarkable experiment, realized in collaboration with Zheng-Da Li, Ya-Li Mao, Hu Chen, Lixin Feng, Sheng-Jun Yang and Jingyun Fan from the Southern University of Science and Technology, and Zizhu Wang from the University of Electronic Science and Technology, is published in Physical Review Letters at the same time as the Nature paper.
    The results published in Nature can be seen as a generalization of Bell’s theorem, which provides a quantum experiment that cannot be explained by any local physics formalism. Bell’s experiment involves one quantum source S that emits two entangled photons, one to A and the second to B. Here, in contrast, one needs two independent sources; this assumed independence is crucial and was carefully designed into the experiment.
    The study also shows how striking the predictions can become when the concept of a quantum network is combined with Bell’s ideas. The tools developed to obtain this first result will allow physicists to achieve a better understanding of quantum theory, and may one day enable the realization of as-yet unimagined applications for the quantum internet.

  • Quantum physics requires imaginary numbers to explain reality

    Imaginary numbers might seem like unicorns and goblins — interesting but irrelevant to reality. 

    But for describing matter at its roots, imaginary numbers turn out to be essential. They seem to be woven into the fabric of quantum mechanics, the math describing the realm of molecules, atoms and subatomic particles. A theory obeying the rules of quantum physics needs imaginary numbers to describe the real world, two new experiments suggest.

    Imaginary numbers result from taking the square root of a negative number. They often pop up in equations as a mathematical tool to make calculations easier. But everything we can actually measure about the world is described by real numbers, the normal, nonimaginary figures we’re used to (SN: 5/8/18). That’s true in quantum physics too. Although imaginary numbers appear in the inner workings of the theory, all possible measurements generate real numbers.

    Quantum theory’s prominent use of complex numbers — sums of imaginary and real numbers — was disconcerting to its founders, including physicist Erwin Schrödinger. “From the early days of quantum theory, complex numbers were treated more as a mathematical convenience than a fundamental building block,” says physicist Jingyun Fan of the Southern University of Science and Technology in Shenzhen, China.

    Some physicists have attempted to build quantum theory using real numbers only, avoiding the imaginary realm with versions called “real quantum mechanics.” But without an experimental test of such theories, the question remained whether imaginary numbers were truly necessary in quantum physics, or just a useful computational tool.

    A type of experiment known as a Bell test resolved a different quantum quandary, proving that quantum mechanics really requires strange quantum linkages between particles called entanglement (SN: 8/28/15). “We started thinking about whether an experiment of this sort could also refute real quantum mechanics,” says theoretical physicist Miguel Navascués of the Institute for Quantum Optics and Quantum Information Vienna. He and colleagues laid out a plan for an experiment in a paper posted online at arXiv.org in January 2021 and published December 15 in Nature.

    In this plan, researchers would send pairs of entangled particles from two different sources to three different people, named according to conventional physics lingo as Alice, Bob and Charlie. Alice receives one particle, and can measure it using various settings that she chooses. Charlie does the same. Bob receives two particles and performs a special type of measurement to entangle the particles that Alice and Charlie receive. A real quantum theory, with no imaginary numbers, would predict different results than standard quantum physics, allowing the experiment to distinguish which one is correct.
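
    A minimal numerical sketch of the entanglement-swapping step at Bob’s node follows, assuming ideal maximally entangled photon pairs and an ideal Bell-state measurement. These are illustrative assumptions; the actual experiments use specific measurement settings chosen to separate real from complex quantum theory.

```python
import numpy as np

# Two independent sources each emit a maximally entangled qubit pair,
# |phi+>_{A,B1} and |phi+>_{B2,C}. Bob performs a Bell-state measurement
# on his two qubits (B1, B2), which swaps entanglement onto A and C.

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)

# Joint state of the four qubits, index order (A, B1, B2, C).
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)

# Bell basis for Bob's two qubits (B1, B2).
bell_basis = {
    "phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}

for name, bell in bell_basis.items():
    proj = bell.reshape(2, 2)
    # Project B1, B2 onto this Bell state; what remains is the conditional
    # (unnormalized) state shared by Alice and Charlie.
    ac = np.einsum("abcd,bc->ad", state, proj.conj())
    prob = np.sum(np.abs(ac) ** 2)
    ac = ac / np.sqrt(prob)
    print(f"Bob gets {name} with p={prob:.2f}; A-C state:\n{np.round(ac, 3)}")
```

    Each of Bob’s four possible outcomes occurs with probability 1/4 and leaves Alice’s and Charlie’s photons in a corresponding entangled state, even though those two photons never interacted directly.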

    Fan and colleagues performed such an experiment using photons, or particles of light, they report in a paper to be published in Physical Review Letters. By studying how Alice, Charlie and Bob’s results compare across many measurements, Fan, Navascués and colleagues show that the data could be described only by a quantum theory with complex numbers.

    Another team of physicists conducted an experiment based on the same concept using a quantum computer made with superconductors, materials which conduct electricity without resistance. Those researchers, too, found that quantum physics requires complex numbers, they report in another paper to be published in Physical Review Letters. “We are curious about why complex numbers are necessary and play a fundamental role in quantum mechanics,” says quantum physicist Chao-Yang Lu of the University of Science and Technology of China in Hefei, a coauthor of the study.

    But the results don’t rule out all theories that eschew imaginary numbers, notes theoretical physicist Jerry Finkelstein of Lawrence Berkeley National Laboratory in California, who was not involved with the new studies. The study eliminated certain theories based on real numbers, namely those that still follow the conventions of quantum mechanics. It’s still possible to explain the results without imaginary numbers by using a theory that breaks standard quantum rules. But those theories run into other conceptual issues, making them “ugly,” he says. But “if you’re willing to put up with the ugliness, then you can have a real quantum theory.”

    Despite the caveat, other physicists agree that the quandaries raised by the new findings are compelling. “I find it intriguing when you ask questions about why is quantum mechanics the way it is,” says physicist Krister Shalm of the National Institute of Standards and Technology in Boulder, Colo. Asking whether quantum theory could be simpler or if it contains anything unnecessary, “these are very interesting and thought-provoking questions.”

  • E-waste recycling emits emerging synthetic antioxidants

    Manufacturers add synthetic antioxidants to plastics, rubbers and other polymers to make them last longer. However, the health effects of these compounds, and how readily they migrate into the environment, are largely unknown. Now, researchers reporting in ACS’ Environmental Science & Technology Letters have detected a broad range of emerging synthetic antioxidants, called hindered phenol and sulfur antioxidants, in dust from electronic waste (e-waste) recycling workshops, possibly posing risks for the workers inside.
    Previous studies revealed widespread environmental pollution and human exposure to a class of compounds called low-molecular weight synthetic phenolic antioxidants. In lab experiments, some of these compounds were toxic to rodents or human cells. Recently, manufacturers introduced a class of high-molecular weight synthetic phenolic antioxidants, also known as hindered phenol antioxidants (HPAs), with improved performance and slower migration from products. In addition to HPAs, compounds called sulfur antioxidants (SAs) are often added to rubber and plastic polymers as “helper” antioxidants. The toxicological effects and environmental occurrence of most of these new compounds are unknown. Therefore, Lixi Zeng and colleagues wanted to investigate the occurrence of emerging HPAs and SAs in dust from e-waste recycling centers — workshops where large amounts of discarded electronics, such as laptop computers, cell phones, tablets, wires and cables are dismantled and processed.
    In August 2020, the researchers collected 45 dust samples from three categories of e-waste recycling workshops in an industrial park in Yichun City, China: wire and cable dismantling, electronic plastic processing, and general e-waste dismantling. Then, they used liquid chromatography/tandem mass spectrometry to screen for 18 emerging HPAs and 6 emerging SAs. All 24 compounds were detected in the dust: 22 for the first time, and some at relatively high levels compared with other e-waste pollutants. Although dust concentrations of SAs were similar across the different categories of workshops, centers that dismantled wires and cables or processed electronic plastics had significantly higher levels of HPAs in their dust than those that dismantled general e-waste. Given the ubiquitous occurrence of emerging HPAs and SAs in e-waste dust, further research is needed on their environmental behaviors, fates, toxicities and risks, the researchers say.
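    As a rough illustration of the comparison step described above (hypothetical column names and invented numbers, not the study’s data), grouping dust concentrations by workshop category might look like this:

```python
import pandas as pd

# Hypothetical example data: total HPA concentration (ng/g) in dust samples
# from the three workshop categories described above. Values are invented
# purely to show the grouping/comparison step, not real measurements.
samples = pd.DataFrame({
    "workshop": ["wire_cable", "wire_cable", "plastic", "plastic",
                 "general", "general"],
    "hpa_total_ng_g": [5200.0, 4800.0, 6100.0, 5700.0, 1900.0, 2300.0],
})

# Median concentration per workshop category.
print(samples.groupby("workshop")["hpa_total_ng_g"].median())
```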
    The authors acknowledge funding from the National Natural Science Foundation of China, the Guangdong Special Support Program, the Guangdong (China) Innovative and Entrepreneurial Research Team Program, the Special Fund Project for Science and Technology Innovation Strategy of Guangdong Province and the Fundamental Research Funds for the Central Universities.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Creating the human-robotic dream team

    Using autonomous vehicle guidelines, a team of UBC Okanagan researchers has developed a system to improve interactions between people and robots.
    The way people interact safely with robots is at the forefront of today’s research related to automation and manufacturing, explains Debasita Mukherjee, a doctoral student and lead author of a recently published study. She is one of several researchers at UBC’s Advanced Control and Intelligent Systems Laboratory who are working to develop systems that allow humans and robots to interact safely and efficiently.
    “It is incredibly important for robots in manufacturing to perform their tasks in the safest and most efficient method possible,” Mukherjee says. “In order to make these automated machines as smart as possible, we are developing systems that perceive their environments and carry out tasks in a similar manner as their human partners.”
    To develop such systems, researchers are using artificial intelligence and machine learning to help guide the machines. Mechanical Engineering Professor Homayoun Najjaran says the process is not as straightforward as it seems.
    “Robots don’t think or feel, so they need systems that capture and analyze their environment enabling them to respond,” says Dr. Najjaran. “Often those responses need to be in hundredths of a second to ensure the safety of humans in their vicinity.”
    Traditionally, industrial robots have been fixed and programmed to operate at high speeds and perform tasks such as welding, painting, assembly, pick-and-place and material handling. Social robots, on the other hand, are built to assist people in service industries. They are typically mobile, lightweight and programmed to work in a variety of environments.

  • Cancer-spotting AI and human experts can be fooled by image-tampering attacks

    Artificial intelligence (AI) models that evaluate medical images have the potential to speed up and improve the accuracy of cancer diagnoses, but they may also be vulnerable to cyberattacks. In a new study, University of Pittsburgh researchers simulated an attack that falsified mammogram images, fooling both an AI breast cancer diagnosis model and expert human breast imaging radiologists.
    The study, published today in Nature Communications, brings attention to a potential safety issue for medical AI known as “adversarial attacks,” which seek to alter images or other inputs to make models arrive at incorrect conclusions.
    “What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” said senior author Shandong Wu, Ph.D., associate professor of radiology, biomedical informatics and bioengineering at Pitt. “By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
    AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have U.S. Food and Drug Administration (FDA) approval. According to Wu, these tools can rapidly screen mammogram images and identify those most likely to be cancerous, helping radiologists be more efficient and accurate.
    But such technologies are also at risk from cyberthreats, such as adversarial attacks. Potential motivations for such attacks include insurance fraud from health care providers looking to boost revenue or companies trying to adjust clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that change the AI’s decision, but are imperceptible to the human eye, to more sophisticated versions that target sensitive contents of the image, such as cancerous regions — making them more likely to fool a human.
    To understand how AI would behave under this more complex type of adversarial attack, Wu and his team used mammogram images to develop a model for detecting breast cancer. First, the researchers trained a deep learning algorithm to distinguish cancerous and benign cases with more than 80% accuracy. Next, they developed a so-called “generative adversarial network” (GAN), a computer program that generates false images by inserting cancerous regions into negative images or removing them from positive ones. They then tested how the model classified these adversarial images.
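    A minimal sketch of that evaluation loop follows. The helper names are hypothetical placeholders, not the Pitt team’s code: “classifier” stands for the trained diagnosis model and “gan” for the image-tampering network.

```python
# Minimal sketch of the evaluation step (hypothetical helper names; the actual
# Pitt model, GAN and data pipeline are not reproduced here).

def evaluate_attack(classifier, gan, images, labels):
    """Compare the classifier's output on original vs. GAN-tampered mammograms.

    classifier(image)        -> probability that the image is cancerous
    gan(image, target_label) -> tampered image pushed toward target_label
                                (1: insert a cancerous region, 0: remove it)
    labels                   -> ground-truth diagnoses (0 benign, 1 cancerous)
    """
    flipped = 0
    for image, label in zip(images, labels):
        clean_pred = classifier(image) >= 0.5
        # The GAN tampers the image toward the opposite of the true diagnosis.
        adversarial = gan(image, target_label=1 - label)
        adv_pred = classifier(adversarial) >= 0.5
        if clean_pred != adv_pred:
            flipped += 1
    # Fraction of cases where the tampering changed the model's decision.
    return flipped / len(images)
```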