More stories


    Topological superconductors: Fertile ground for elusive Majorana ('angel') particle

    A new, multi-node FLEET review investigates the search for Majorana fermions in iron-based superconductors.
    The elusive Majorana fermion, or ‘angel particle’, proposed by Ettore Majorana in 1937, simultaneously behaves like a particle and an antiparticle, and surprisingly remains stable rather than annihilating itself.
    Majorana fermions promise information and communications technology with zero resistance, addressing the rising energy consumption of modern electronics (already 8% of global electricity consumption), and promising a sustainable future for computing.
    Additionally, it is the presence of Majorana zero-energy modes in topological superconductors that has made these exotic quantum materials the leading candidates for realizing topological quantum computing.
    The existence of Majorana fermions in condensed-matter systems will help in FLEET’s search for future low-energy electronic technologies.
    The angel particle: both matter and antimatter
    Fundamental particles such as electrons, protons, neutrons, quarks and neutrinos (called fermions) each have their own distinct antiparticle. An antiparticle has the same mass as its ordinary partner, but opposite electric charge and magnetic moment.


    Nanostructured surfaces for future quantum computer chips

    Quantum computers are one of the key future technologies of the 21st century. Researchers at Paderborn University, working under Professor Thomas Zentgraf and in cooperation with colleagues from the Australian National University and Singapore University of Technology and Design, have developed a new technology for manipulating light that can be used as a basis for future optical quantum computers. The results have now been published in the journal Nature Photonics.
    New optical elements for manipulating light will allow for more advanced applications in modern information technology, particularly in quantum computers. However, a major remaining challenge is achieving non-reciprocal light propagation through nanostructured surfaces, i.e. surfaces that have been patterned at a tiny scale. Professor Thomas Zentgraf, head of the working group for ultrafast nanophotonics at Paderborn University, explains, “In reciprocal propagation, light can take the same path forward and backward through a structure; non-reciprocal propagation, by contrast, is comparable to a one-way street where light can only spread out in one direction.” Non-reciprocity is a special characteristic in optics that causes light to experience different material properties when its direction is reversed. One example would be a window made of glass that is transparent from one side and lets light through, but acts as a mirror on the other side and reflects the light, a behaviour known as duality. “In the field of photonics, such a duality can be very helpful in developing innovative optical elements for manipulating light,” says Zentgraf.
    In a current collaboration between his working group at Paderborn University and researchers at the Australian National University and Singapore University of Technology and Design, non-reciprocal light propagation was combined with a frequency conversion of laser light, in other words a change in the frequency and thus also the colour of the light. “We used the frequency conversion in the specially designed structures, with dimensions in the range of a few hundred nanometres, to convert infrared light — which is invisible to the human eye — into visible light,” explains Dr. Sergey Kruk, Marie Curie Fellow in Zentgraf’s group. The experiments show that this conversion process takes place only in one illumination direction for the nanostructured surface, while it is completely suppressed in the opposite illumination direction. This duality in the frequency conversion characteristics was used to code images into an otherwise transparent surface. “We arranged the various nanostructures in such a way that they produce a different image depending on whether the sample surface is illuminated from the front or the back,” says Zentgraf, adding, “The images only became visible when we used infrared laser light for the illumination.”
    In their first experiments, the intensity of the frequency-converted light within the visible range was still very small. The next step, therefore, is to further improve efficiency so that less infrared light is needed for the frequency conversion. In future optically integrated circuits, the direction control for the frequency conversion could be used to switch light directly with a different light, or to produce specific photon conditions for quantum-optical calculations directly on a small chip. “Maybe we will see an application in future optical quantum computers where the directed production of individual photons using frequency conversion plays an important role,” says Zentgraf.
    Story Source:
    Materials provided by Universität Paderborn. Note: Content may be edited for style and length.


    Tiny fish-shaped robot 'swims' around picking up microplastics

    Microplastics are found nearly everywhere on Earth and can be harmful to animals if they’re ingested. But it’s hard to remove such tiny particles from the environment, especially once they settle into nooks and crannies at the bottom of waterways. Now, researchers in ACS’ Nano Letters have created a light-activated fish robot that “swims” around quickly, picking up and removing microplastics from the environment.
    Because microplastics can fall into cracks and crevices, they’ve been hard to remove from aquatic environments. One solution that’s been proposed is using small, flexible and self-propelled robots to reach these pollutants and clean them up. But the traditional materials used for soft robots are hydrogels and elastomers, and they can be damaged easily in aquatic environments. Another material called mother-of-pearl, also known as nacre, is strong and flexible, and is found on the inside surface of clam shells. Nacre layers have a microscopic gradient, going from one side with lots of calcium carbonate mineral-polymer composites to the other side with mostly a silk protein filler. Inspired by this natural substance, Xinxing Zhang and colleagues wanted to try a similar type of gradient structure to create a durable and bendable material for soft robots.
    The researchers linked β-cyclodextrin molecules to sulfonated graphene, creating composite nanosheets. Then solutions of the nanosheets were incorporated at different concentrations into polyurethane latex mixtures. A layer-by-layer assembly method created an ordered concentration gradient of the nanocomposites through the material, from which the team formed a tiny fish robot 15 mm (about half an inch) long. Rapidly turning a near-infrared laser on and off at the fish’s tail caused the tail to flap, propelling the robot forward. The robot could move at 2.67 body lengths per second, a speed faster than previously reported for other soft swimming robots and about the same as that of active phytoplankton moving in water. The researchers showed that the swimming fish robot could repeatedly adsorb nearby polystyrene microplastics and transport them elsewhere. The material could also heal itself after being cut, while maintaining its ability to adsorb microplastics. Because of the robot’s durability and speed, the researchers say it could be used to monitor microplastics and other pollutants in harsh aquatic environments.
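As a quick sanity check, the reported relative speed can be converted into an absolute one using only the two figures quoted above (a back-of-the-envelope calculation, not from the paper):

```python
# Convert the fish robot's speed from body lengths per second into an
# absolute speed, using the figures quoted in the story.
body_length_mm = 15.0    # robot length: 15 mm
relative_speed = 2.67    # body lengths per second

absolute_speed_mm_s = body_length_mm * relative_speed
print(f"{absolute_speed_mm_s:.2f} mm/s")  # 40.05 mm/s
```

So the robot covers roughly 4 cm per second, fast for a light-driven soft swimmer of this size.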
    The authors acknowledge funding from a National Key Research and Development Program of China Grant, National Natural Science Foundation of China Grants and the Sichuan Provincial Natural Science Fund for Distinguished Young Scholars.
    Story Source:
    Materials provided by American Chemical Society.


    Technology helps self-driving cars learn from own 'memories'

    Researchers at Cornell University have developed a way to help autonomous vehicles create “memories” of previous experiences and use them in future navigation, especially during adverse weather conditions when the car cannot safely rely on its sensors.
    Cars using artificial neural networks have no memory of the past and are in a constant state of seeing the world for the first time — no matter how many times they’ve driven down a particular road before.
    The researchers have produced three concurrent papers with the goal of overcoming this limitation. Two are being presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), held June 19-24 in New Orleans.
    “The fundamental question is, can we learn from repeated traversals?” said senior author Kilian Weinberger, professor of computer science. “For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So, the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly.”
    Spearheaded by doctoral student Carlos Diaz-Ruiz, the group compiled a dataset by driving a car equipped with LiDAR (Light Detection and Ranging) sensors repeatedly along a 15-kilometer loop in and around Ithaca, 40 times over an 18-month period. The traversals capture varying environments (highway, urban, campus), weather conditions (sunny, rainy, snowy) and times of day. The resulting dataset comprises more than 600,000 scenes.
    “It deliberately exposes one of the key challenges in self-driving cars: poor weather conditions,” said Diaz-Ruiz. “If the street is covered by snow, humans can rely on memories, but without memories a neural network is heavily disadvantaged.”
    HINDSIGHT is an approach that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptions, which the group has dubbed SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map, like a “memory” stored in a human brain.
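The idea of a spatially quantized feature map can be illustrated with a toy sketch (entirely hypothetical code; the cell size, function names and string descriptor stand in for the paper's learned SQuaSH features):

```python
# Toy illustration of a spatially quantized "memory" map: descriptors
# computed for passing objects are binned onto a coarse world grid so a
# later traversal of the same road can look them up by location.
from collections import defaultdict

CELL_SIZE_M = 5.0  # assumed grid resolution (hypothetical)

def cell_key(x, y):
    """Quantize a world coordinate (metres) to a grid cell."""
    return (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M))

spatial_map = defaultdict(list)

def store_descriptor(x, y, descriptor):
    """Remember a compressed object descriptor at its world location."""
    spatial_map[cell_key(x, y)].append(descriptor)

def recall_descriptors(x, y):
    """On a later traversal, fetch what was previously seen near here."""
    return spatial_map[cell_key(x, y)]

store_descriptor(12.3, 40.7, "tree-like-object")
print(recall_descriptors(14.0, 41.0))  # same 5 m cell -> ['tree-like-object']
```

On the second pass, the car queries its current position and retrieves descriptors from the first pass, even if fog or snow degrades the live sensor view.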


    Quantum sensor can detect electromagnetic signals of any frequency

    Quantum sensors, which detect the most minute variations in magnetic or electrical fields, have enabled precision measurements in materials science and fundamental physics. But these sensors have only been capable of detecting a few specific frequencies of these fields, limiting their usefulness. Now, researchers at MIT have developed a method to enable such sensors to detect any arbitrary frequency, with no loss of their ability to measure nanometer-scale features.
    The new method, for which the team has already applied for patent protection, is described in the journal Physical Review X, in a paper by graduate student Guoqing Wang, professor of nuclear science and engineering and of physics Paola Cappellaro, and four others at MIT and Lincoln Laboratory.
    Quantum sensors can take many forms; they’re essentially systems in which some particles are in such a delicately balanced state that they are affected by even tiny variations in the fields they are exposed to. These can take the form of neutral atoms, trapped ions, and solid-state spins, and research using such sensors has grown rapidly. For example, physicists use them to investigate exotic states of matter, including so-called time crystals and topological phases, while other researchers use them to characterize practical devices such as experimental quantum memory or computation devices. But many other phenomena of interest span a much broader frequency range than today’s quantum sensors can detect.
    The new system the team devised, which they call a quantum mixer, injects a second frequency into the detector using a beam of microwaves. This converts the frequency of the field being studied into a different frequency — the difference between the original frequency and that of the added signal — which is tuned to the specific frequency that the detector is most sensitive to. This simple process enables the detector to home in on any desired frequency at all, with no loss in the nanoscale spatial resolution of the sensor.
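The frequency arithmetic described above can be sketched in a few lines (a toy calculation under our own naming, not the team's code): the injected tone is chosen so that the difference frequency lands exactly on the detector's sensitive frequency.

```python
# Toy sketch of the quantum-mixer arithmetic: pick the injected microwave
# tone so that the difference frequency |signal - tone| equals the
# frequency the qubit detector is most sensitive to.
def required_tone(signal_hz, detector_hz):
    """One valid choice of tone: signal + detector (so the difference
    frequency is exactly the detector frequency)."""
    return signal_hz + detector_hz

signal = 150e6      # 150 MHz field to be sensed (from the story)
detector = 2.2e9    # 2.2 GHz qubit transition (from the story)
tone = required_tone(signal, detector)
mixed = abs(signal - tone)       # difference frequency after mixing
print(f"tone = {tone/1e9:.2f} GHz, mixed = {mixed/1e9:.1f} GHz")
```

The mixed-down (here, mixed-up) component now sits at 2.2 GHz, where the qubit responds, even though the original 150 MHz signal is far outside the detector's native band.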
    In their experiments, the team used a specific device based on an array of nitrogen-vacancy centers in diamond, a widely used quantum sensing system, and successfully demonstrated detection of a signal with a frequency of 150 megahertz using a qubit detector with a frequency of 2.2 gigahertz, a detection that would be impossible without the quantum mixer. They then analysed the process in detail by deriving a theoretical framework, based on Floquet theory, and testing the numerical predictions of that theory in a series of experiments.
    While their tests used this specific system, Wang says, “the same principle can be also applied to any kind of sensors or quantum devices.” The system would be self-contained, with the detector and the source of the second frequency all packaged in a single device.
    Wang says that this system could be used, for example, to characterize in detail the performance of a microwave antenna. “It can characterize the distribution of the field [generated by the antenna] with nanoscale resolution, so it’s very promising in that direction,” he says.
    There are other ways of altering the frequency sensitivity of some quantum sensors, but these require the use of large devices and strong magnetic fields that blur out the fine details and make it impossible to achieve the very high resolution that the new system offers. In such systems today, Wang says, “you need to use a strong magnetic field to tune the sensor, but that magnetic field can potentially break the quantum material properties, which can influence the phenomena that you want to measure.”
    The system may open up new applications in biomedical fields, according to Cappellaro, because it can make accessible a range of frequencies of electrical or magnetic activity at the level of a single cell. It would be very difficult to get useful resolution of such signals using current quantum sensing systems, she says. It may be possible using this system to detect output signals from a single neuron in response to some stimulus, for example, which typically include a great deal of noise, making such signals hard to isolate.
    The system could also be used to characterize in detail the behavior of exotic materials such as 2D materials that are being intensely studied for their electromagnetic, optical, and physical properties.
    In ongoing work, the team is exploring the possibility of finding ways to expand the system to be able to probe a range of frequencies at once, rather than the present system’s single frequency targeting. They will also be continuing to define the system’s capabilities using more powerful quantum sensing devices at Lincoln Laboratory, where some members of the research team are based.
    The team included Yi-Xiang Liu at MIT and Jennifer Schloss, Scott Alsid and Danielle Braje at Lincoln Laboratory. The work was supported by the Defense Advanced Research Projects Agency (DARPA) and Q-Diamond.


    How the brain interprets motion while in motion

    Imagine you’re sitting on a train. You look out the window and see another train on an adjacent track that appears to be moving. But has your train stopped while the other train is moving, or are you moving while the other train is stopped?
    The same sensory experience — viewing a train — can yield two very different perceptions, leading you to feel either a sensation of yourself in motion or a sensation of being stationary while an object moves around you.
    Human brains are constantly faced with such ambiguous sensory inputs. In order to resolve the ambiguity and correctly perceive the world, our brains employ a process known as causal inference.
    Causal inference is a key to learning, reasoning, and decision making, but researchers currently know little about the neurons involved in the process.
    In a new paper published in the journal eLife, researchers at the University of Rochester, including Greg DeAngelis, the George Eastman Professor of Brain and Cognitive Sciences, and his colleagues at Sungkyunkwan University and New York University, describe a novel neural mechanism involved in causal inference that helps the brain detect object motion during self-motion.
    The research offers new insights into how the brain interprets sensory information and may have applications in designing artificial intelligence devices and developing treatments and therapies to treat brain disorders.
    “While much has been learned previously about how the brain processes visual motion, most laboratory studies of neurons have ignored the complexities introduced by self-motion,” DeAngelis says. “Under natural conditions, identifying how objects move in the world is much more challenging for the brain.”
    Now imagine a still, crouching lion waiting to spot prey; it is easy for the lion to spot a moving gazelle. Just like the still lion, when an observer is stationary, it is easy for her to detect when objects move in the world, because motion in the world directly maps to motion on the retina. However, when the observer is also moving, her eyes are taking in motion everywhere on her retina as she moves relative to objects in the scene. This causes a complex pattern of motion that makes it more difficult for the brain to detect when an object is moving in the world and when it is stationary; in this case, the brain has to distinguish between image motion that results from the observer herself versus image motion of other objects around the self.
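The decomposition described above, retinal motion as the sum of self-motion flow and independent object motion, can be illustrated with a toy NumPy sketch (our illustration, not the paper's neural model):

```python
# Toy model of the causal-inference problem: the flow on the retina is the
# ego-motion flow plus any object motion, so subtracting the flow predicted
# from self-motion exposes independently moving objects.
import numpy as np

ego_flow = np.array([[1.0, 0.0]] * 4)   # flow caused by the observer moving
object_motion = np.array([[0, 0], [0, 0], [0.8, 0.3], [0, 0]], dtype=float)
observed_flow = ego_flow + object_motion  # what the retina actually receives

residual = observed_flow - ego_flow       # remove the self-motion component
moving = np.linalg.norm(residual, axis=1) > 0.1
print(moving)  # only patch 2 is flagged as a moving object
```

The appeal of a local mechanism, as DeAngelis notes below, is that this subtraction can run in parallel at each patch of the visual field.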
    The researchers discovered a type of neuron in the brain that has a particular combination of response properties, which makes the neuron well-suited to contribute to the task of distinguishing between self-motion and the motion of other objects.
    “Although the brain probably uses multiple tricks to solve this problem, this new mechanism has the advantage that it can be performed in parallel at each local region of the visual field, and thus may be faster to implement than more global processes,” DeAngelis says. “This mechanism might also be applicable to autonomous vehicles, which also need to rapidly detect moving objects.”
    The research may additionally have important applications in developing treatments and therapies for neural disorders such as autism and schizophrenia, conditions in which causal inference is thought to be impaired.
    “While the project is basic science focused on understanding the fundamental mechanisms of causal inference, this knowledge should eventually be applicable to the treatment of these disorders,” DeAngelis says.
    Story Source:
    Materials provided by University of Rochester. Original written by Lindsey Valich.


    Robotic lightning bugs take flight

    Fireflies that light up dusky backyards on warm summer evenings use their luminescence for communication — to attract a mate, ward off predators, or lure prey.
    These glimmering bugs also sparked the inspiration of scientists at MIT. Taking a cue from nature, they built electroluminescent soft artificial muscles for flying, insect-scale robots. The tiny artificial muscles that control the robots’ wings emit colored light during flight.
    This electroluminescence could enable the robots to communicate with each other. If sent on a search-and-rescue mission into a collapsed building, for instance, a robot that finds survivors could use lights to signal others and call for help.
    The ability to emit light also brings these microscale robots, which weigh barely more than a paper clip, one step closer to flying on their own outside the lab. These robots are so lightweight that they can’t carry sensors, so researchers must track them using bulky infrared cameras that don’t work well outdoors. Now, they’ve shown that they can track the robots precisely using the light they emit and just three smartphone cameras.
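Locating a glowing robot from several cameras is, at heart, a ray-triangulation problem. A standard least-squares sketch (illustrative code under assumed geometry, not MIT's tracking pipeline) looks like this:

```python
# Least-squares triangulation: each camera that sees the glowing robot
# contributes a ray (origin + t * direction); we solve for the 3D point
# minimizing the total squared distance to all rays.
import numpy as np

def triangulate(origins, directions):
    """Return the point closest (in least squares) to every ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three hypothetical cameras all looking at a robot glowing at (1, 2, 3).
target = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0, 0]), np.array([5.0, 0, 0]), np.array([0.0, 5, 0])]
dirs = [target - o for o in origins]
print(np.round(triangulate(origins, dirs), 6))  # -> [1. 2. 3.]
```

With three well-separated views the system is overdetermined, which is what lets a few ordinary smartphone cameras replace a bulky motion-capture rig.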
    “If you think of large-scale robots, they can communicate using a lot of different tools — Bluetooth, wireless, all those sorts of things. But for a tiny, power-constrained robot, we are forced to think about new modes of communication. This is a major step toward flying these robots in outdoor environments where we don’t have a well-tuned, state-of-the-art motion tracking system,” says Kevin Chen, who is the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper.
    He and his collaborators accomplished this by embedding minuscule electroluminescent particles into the artificial muscles. The process adds just 2.5 percent more weight without impacting the flight performance of the robot.


    Robots turn racist and sexist with flawed AI, study finds

    A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about people’s jobs after a glance at their faces.
    The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT).
    “The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots but people and organizations have decided it’s OK to create these products without addressing the issues.”
    Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane have demonstrated race and gender gaps in facial recognition products, as well as in CLIP, a neural network that compares images to captions.
    Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.
    The robot was tasked with putting objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.