More stories

  • Engineers devise a recipe for improving any autonomous robotic system

    Autonomous robots have come a long way since the fastidious Roomba. In recent years, artificially intelligent systems have been deployed in self-driving cars, last-mile food delivery, restaurant service, patient screening, hospital cleaning, meal prep, building security, and warehouse packing.
    Each of these robotic systems is a product of an ad hoc design process specific to that particular system. In designing an autonomous robot, engineers must run countless trial-and-error simulations, often informed by intuition. These simulations are tailored to a particular robot’s components and tasks, in order to tune and optimize its performance. In some respects, designing an autonomous robot today is like baking a cake from scratch, with no recipe or prepared mix to ensure a successful outcome.
    Now, MIT engineers have developed a general design tool for roboticists to use as a sort of automated recipe for success. The team has devised an optimization code that can be applied to simulations of virtually any autonomous robotic system and can be used to automatically identify how and where to tweak a system to improve a robot’s performance.
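    The article does not spell out the algorithm, but the general shape of such a tool can be sketched: treat the simulation as a function from design parameters to a performance cost, and descend the gradient of that cost. Below is a minimal Python sketch of this idea, with a toy simulator and hypothetical parameter names standing in for a real robotic system; it is an illustration of the concept, not the team's code.

```python
import numpy as np

def simulate_cost(gains):
    """Toy stand-in for a robot simulation: a two-gain controller drives
    the robot toward a goal, and the cost is the final squared error."""
    pos = np.zeros(2)
    goal = np.array([1.0, 1.0])
    for _ in range(50):
        pos = pos + 0.1 * gains * (goal - pos)
    return float(np.sum((pos - goal) ** 2))

def numerical_grad(f, x, eps=1e-5):
    """Central-difference gradient, standing in for the automatic
    differentiation a real differentiable-simulation tool would provide."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

gains = np.array([0.2, 0.2])    # initial design parameters
for _ in range(200):            # optimization loop: tweak the design
    gains -= 0.5 * numerical_grad(simulate_cost, gains)

print(simulate_cost(gains))     # cost shrinks as the design improves
```

    The point of the sketch is only that the same optimization loop applies to any simulator that maps design parameters to a score, which is what makes such a tool general-purpose.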
    The team showed that the tool was able to quickly improve the performance of two very different autonomous systems: one in which a robot navigated a path between two obstacles, and another in which a pair of robots worked together to move a heavy box.
    The researchers hope the new general-purpose optimizer can help to speed up the development of a wide range of autonomous systems, from walking robots and self-driving vehicles, to soft and dexterous robots, and teams of collaborative robots.
    The team, composed of Charles Dawson, an MIT graduate student, and Chuchu Fan, assistant professor in MIT’s Department of Aeronautics and Astronautics, will present its findings later this month at the annual Robotics: Science and Systems conference in New York.

  • Optical microphone sees sound like never before

    A camera system developed by Carnegie Mellon University researchers can see sound vibrations with such precision and detail that it can reconstruct the music of a single instrument in a band or orchestra.
    Even the most high-powered and directed microphones can’t eliminate nearby sounds, ambient noise and the effects of room acoustics when they capture audio. The novel system developed in the School of Computer Science’s Robotics Institute (RI) uses two cameras and a laser to sense high-speed, low-amplitude surface vibrations. These vibrations can be used to reconstruct sound, capturing isolated audio without interference or a microphone.
    “We’ve invented a new way to see sound,” said Mark Sheinin, a post-doctoral research associate at the Illumination and Imaging Laboratory (ILIM) in the RI. “It’s a new type of camera system, a new imaging device, that is able to see something invisible to the naked eye.”
    The team completed several successful demos of their system’s effectiveness in sensing vibrations and the quality of the sound reconstruction. They captured isolated audio of separate guitars playing at the same time and of individual speakers playing different music simultaneously. They analyzed the vibrations of a tuning fork, and used the vibrations of a bag of Doritos placed near a speaker to capture the sound the speaker was playing. This demo pays tribute to prior work by MIT researchers, who developed one of the first visual microphones in 2014.
    The CMU system dramatically improves upon past attempts to capture sound using computer vision. The team’s work uses ordinary cameras that cost a fraction of the high-speed versions employed in past research while producing a higher quality recording. The dual-camera system can capture vibrations from objects in motion, such as the movements of a guitar while a musician plays it, and simultaneously sense individual sounds from multiple points.
    “We’ve made the optical microphone much more practical and usable,” said Srinivasa Narasimhan, a professor in the RI and head of the ILIM. “We’ve made the quality better while bringing the cost down.”
    The system works by analyzing the differences in speckle patterns from images captured with a rolling shutter and a global shutter. An algorithm computes the difference in the speckle patterns from the two video streams and converts those differences into vibrations to reconstruct the sound.
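    As a rough illustration of the reconstruction step described above (not the CMU implementation, which additionally exploits the rolling shutter to sample vibrations at audio rates), one can estimate the frame-to-frame shift of a speckle pattern by cross-correlation and read the shift time series out as a waveform:

```python
import numpy as np

def speckle_shift(reference, frame):
    """Estimate the (dy, dx) shift between two speckle images from the
    peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped FFT indices to signed shifts
    return [p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape)]

def frames_to_waveform(frames):
    """Per-frame vertical speckle displacement, read out as a sound signal."""
    reference = frames[0]
    return np.array([speckle_shift(reference, f)[0] for f in frames[1:]], float)

# tiny demo: a shifted copy of a random speckle image is detected as a shift
rng = np.random.default_rng(0)
speckle = rng.random((64, 64))
shifted = np.roll(speckle, 3, axis=0)
print(speckle_shift(speckle, shifted))  # [3, 0]
```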

  • Topological superconductors: Fertile ground for elusive Majorana ('angel') particle

    A new, multi-node FLEET review investigates the search for Majorana fermions in iron-based superconductors.
    The elusive Majorana fermion, the ‘angel particle’ proposed by Ettore Majorana in 1937, behaves simultaneously like a particle and an antiparticle, and surprisingly remains stable rather than annihilating itself.
    Majorana fermions promise information and communications technology with zero resistance, addressing the rising energy consumption of modern electronics (already 8% of global electricity consumption) and offering a sustainable future for computing.
    Additionally, it is the presence of Majorana zero-energy modes in topological superconductors that has made these exotic quantum materials the main candidates for realizing topological quantum computing.
    The existence of Majorana fermions in condensed-matter systems will help in FLEET’s search for future low-energy electronic technologies.
    The angel particle: both matter and antimatter
    Fundamental particles such as electrons, protons, neutrons, quarks and neutrinos (called fermions) each have their own distinct antiparticle. An antiparticle has the same mass as its ordinary partner, but opposite electric charge and magnetic moment.

  • Nanostructured surfaces for future quantum computer chips

    Quantum computers are one of the key future technologies of the 21st century. Researchers at Paderborn University, working under Professor Thomas Zentgraf and in cooperation with colleagues from the Australian National University and Singapore University of Technology and Design, have developed a new technology for manipulating light that can be used as a basis for future optical quantum computers. The results have now been published in the journal Nature Photonics.
    New optical elements for manipulating light will allow for more advanced applications in modern information technology, particularly in quantum computers. However, a major remaining challenge is achieving non-reciprocal light propagation through nanostructured surfaces, that is, surfaces patterned at the nanometre scale. Professor Thomas Zentgraf, head of the working group for ultrafast nanophotonics at Paderborn University, explains, “In reciprocal propagation, light can take the same path forward and backward through a structure; however, non-reciprocal propagation is comparable to a one-way street where it can only spread out in one direction.” Non-reciprocity is a special characteristic in optics that causes light to experience different material properties when its direction of travel is reversed. One example would be a glass window that is transparent from one side and lets light through, but acts as a mirror on the other side and reflects the light. This behaviour is known as duality. “In the field of photonics, such a duality can be very helpful in developing innovative optical elements for manipulating light,” says Zentgraf.
    In a current collaboration between his working group at Paderborn University and researchers at the Australian National University and Singapore University of Technology and Design, non-reciprocal light propagation was combined with a frequency conversion of laser light, in other words a change in the frequency and thus also the colour of the light. “We used the frequency conversion in the specially designed structures, with dimensions in the range of a few hundred nanometres, to convert infrared light — which is invisible to the human eye — into visible light,” explains Dr. Sergey Kruk, Marie Curie Fellow in Zentgraf’s group. The experiments show that this conversion process takes place only in one illumination direction for the nanostructured surface, while it is completely suppressed in the opposite illumination direction. This duality in the frequency conversion characteristics was used to code images into an otherwise transparent surface. “We arranged the various nanostructures in such a way that they produce a different image depending on whether the sample surface is illuminated from the front or the back,” says Zentgraf, adding, “The images only became visible when we used infrared laser light for the illumination.”
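    A toy model, with every detail assumed rather than taken from the paper, captures the image-encoding idea: if each nanostructure converts infrared to visible light for only one illumination direction, a single surface can hold two different hidden images:

```python
import numpy as np

rng = np.random.default_rng(0)

# 0 = meta-atom converts infrared only under front illumination,
# 1 = converts only under back illumination (all values assumed)
pattern = rng.integers(0, 2, size=(8, 8))

def converted_image(pattern, direction):
    """Visible-light image produced by the frequency conversion when the
    surface is illuminated with infrared from one side; conversion is
    taken to be fully suppressed in the blocked direction."""
    target = 0 if direction == "front" else 1
    return (pattern == target).astype(float)

front = converted_image(pattern, "front")
back = converted_image(pattern, "back")
print(np.array_equal(front, back))  # False: two different hidden images
```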
    In their first experiments, the intensity of the frequency-converted light within the visible range was still very small. The next step, therefore, is to further improve efficiency so that less infrared light is needed for the frequency conversion. In future optically integrated circuits, the direction control for the frequency conversion could be used to switch light directly with a different light, or to produce specific photon conditions for quantum-optical calculations directly on a small chip. “Maybe we will see an application in future optical quantum computers where the directed production of individual photons using frequency conversion plays an important role,” says Zentgraf.
    Story Source:
    Materials provided by Universität Paderborn.

  • Tiny fish-shaped robot 'swims' around picking up microplastics

    Microplastics are found nearly everywhere on Earth and can be harmful to animals if they’re ingested. But it’s hard to remove such tiny particles from the environment, especially once they settle into nooks and crannies at the bottom of waterways. Now, researchers reporting in ACS’ Nano Letters have created a light-activated fish robot that “swims” around quickly, picking up and removing microplastics from the environment.
    Because microplastics can fall into cracks and crevices, they’ve been hard to remove from aquatic environments. One solution that’s been proposed is using small, flexible and self-propelled robots to reach these pollutants and clean them up. But the traditional materials used for soft robots are hydrogels and elastomers, and they can be damaged easily in aquatic environments. Another material called mother-of-pearl, also known as nacre, is strong and flexible, and is found on the inside surface of clam shells. Nacre layers have a microscopic gradient, going from one side with lots of calcium carbonate mineral-polymer composites to the other side with mostly a silk protein filler. Inspired by this natural substance, Xinxing Zhang and colleagues wanted to try a similar type of gradient structure to create a durable and bendable material for soft robots.
    The researchers linked β-cyclodextrin molecules to sulfonated graphene, creating composite nanosheets. Solutions of the nanosheets were then incorporated at different concentrations into polyurethane latex mixtures. A layer-by-layer assembly method created an ordered concentration gradient of the nanocomposites through the material, from which the team formed a tiny fish robot 15 mm (about half an inch) long. Rapidly turning a near-infrared laser on and off at the fish’s tail caused the tail to flap, propelling the robot forward. The robot could move at 2.67 body lengths per second — a speed faster than previously reported for other soft swimming robots, and about the same as active phytoplankton moving in water. The researchers showed that the swimming fish robot could repeatedly adsorb nearby polystyrene microplastics and transport them elsewhere. The material could also heal itself after being cut, while maintaining its ability to adsorb microplastics. Because of the durability and speed of the fish robot, the researchers say it could be used for monitoring microplastics and other pollutants in harsh aquatic environments.
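    For scale, the reported speed translates to roughly 40 mm of travel per second:

```python
body_length_mm = 15      # reported robot length
speed_bl_per_s = 2.67    # reported speed in body lengths per second
print(body_length_mm * speed_bl_per_s)  # about 40 mm per second
```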
    The authors acknowledge funding from a National Key Research and Development Program of China Grant, National Natural Science Foundation of China Grants and the Sichuan Provincial Natural Science Fund for Distinguished Young Scholars.
    Story Source:
    Materials provided by American Chemical Society.

  • Technology helps self-driving cars learn from own 'memories'

    Researchers at Cornell University have developed a way to help autonomous vehicles create “memories” of previous experiences and use them in future navigation, especially during adverse weather conditions when the car cannot safely rely on its sensors.
    Cars using artificial neural networks have no memory of the past and are in a constant state of seeing the world for the first time — no matter how many times they’ve driven down a particular road before.
    The researchers have produced three concurrent papers with the goal of overcoming this limitation. Two are being presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), held June 19-24 in New Orleans.
    “The fundamental question is, can we learn from repeated traversals?” said senior author Kilian Weinberger, professor of computer science. “For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So, the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly.”
    Spearheaded by doctoral student Carlos Diaz-Ruiz, the group compiled a dataset by driving a car equipped with LiDAR (Light Detection and Ranging) sensors along a 15-kilometer loop in and around Ithaca 40 times over an 18-month period. The traversals capture varying environments (highway, urban, campus), weather conditions (sunny, rainy, snowy) and times of day. The resulting dataset has more than 600,000 scenes.
    “It deliberately exposes one of the key challenges in self-driving cars: poor weather conditions,” said Diaz-Ruiz. “If the street is covered by snow, humans can rely on memories, but without memories a neural network is heavily disadvantaged.”
    HINDSIGHT is an approach that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptions, which the group has dubbed SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map, like a “memory” stored in a human brain.
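    The papers define SQuaSH precisely; purely as an illustration of the "features stored on a virtual map" idea, a sketch with a hypothetical cell size and stand-in compression might look like:

```python
import numpy as np

CELL_M = 1.0   # map grid resolution in meters (assumed)
memory = {}    # (cell_x, cell_y) -> list of compressed descriptors

def compress(descriptor, keep=16):
    """Stand-in compression: keep the first `keep` feature dimensions."""
    return descriptor[:keep]

def store(xy, descriptor):
    """Quantize a world position to a grid cell and remember the feature."""
    cell = (int(xy[0] // CELL_M), int(xy[1] // CELL_M))
    memory.setdefault(cell, []).append(compress(descriptor))

def recall(xy):
    """Features remembered at this location from past traversals."""
    cell = (int(xy[0] // CELL_M), int(xy[1] // CELL_M))
    return memory.get(cell, [])

store((12.3, 4.7), np.random.rand(128))   # first traversal sees an object
print(len(recall((12.0, 4.5))))           # 1: recalled on a later traversal
```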

  • Quantum sensor can detect electromagnetic signals of any frequency

    Quantum sensors, which detect the most minute variations in magnetic or electrical fields, have enabled precision measurements in materials science and fundamental physics. But these sensors have only been capable of detecting a few specific frequencies of these fields, limiting their usefulness. Now, researchers at MIT have developed a method to enable such sensors to detect any arbitrary frequency, with no loss of their ability to measure nanometer-scale features.
    The new method, for which the team has already applied for patent protection, is described in the journal Physical Review X, in a paper by graduate student Guoqing Wang, professor of nuclear science and engineering and of physics Paola Cappellaro, and four others at MIT and Lincoln Laboratory.
    Quantum sensors can take many forms; they’re essentially systems in which some particles are in such a delicately balanced state that they are affected by even tiny variations in the fields they are exposed to. These can take the form of neutral atoms, trapped ions, and solid-state spins, and research using such sensors has grown rapidly. For example, physicists use them to investigate exotic states of matter, including so-called time crystals and topological phases, while other researchers use them to characterize practical devices such as experimental quantum memory or computation devices. But many other phenomena of interest span a much broader frequency range than today’s quantum sensors can detect.
    The new system the team devised, which they call a quantum mixer, injects a second frequency into the detector using a beam of microwaves. This converts the frequency of the field being studied into a different frequency — the difference between the original frequency and that of the added signal — which is tuned to the specific frequency that the detector is most sensitive to. This simple process enables the detector to home in on any desired frequency at all, with no loss in the nanoscale spatial resolution of the sensor.
    In their experiments, the team used a specific device based on an array of nitrogen-vacancy centers in diamond, a widely used quantum sensing system, and successfully demonstrated detection of a signal with a frequency of 150 megahertz, using a qubit detector with a frequency of 2.2 gigahertz — a detection that would be impossible without the quantum mixer. They then did detailed analyses of the process by deriving a theoretical framework, based on Floquet theory, and testing the numerical predictions of that theory in a series of experiments.
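    The numbers reported above pin down the arithmetic of the mixing step: the injected tone must be chosen so that the difference frequency lands on the qubit's sensitive frequency. One consistent choice follows (the actual bias frequency is not given in the article):

```python
f_signal_ghz = 0.150    # frequency of the field being detected (150 MHz)
f_detector_ghz = 2.200  # frequency the qubit detector is most sensitive to

# Choose the bias so that |f_signal - f_bias| equals the detector frequency;
# the sum-side choice shown here is one of two valid options.
f_bias_ghz = f_detector_ghz + f_signal_ghz
print(f_bias_ghz)                       # 2.35 GHz injected microwave tone
print(abs(f_signal_ghz - f_bias_ghz))   # ~2.20 GHz, on the detector's frequency
```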
    While their tests used this specific system, Wang says, “the same principle can be also applied to any kind of sensors or quantum devices.” The system would be self-contained, with the detector and the source of the second frequency all packaged in a single device.
    Wang says that this system could be used, for example, to characterize in detail the performance of a microwave antenna. “It can characterize the distribution of the field [generated by the antenna] with nanoscale resolution, so it’s very promising in that direction,” he says.
    There are other ways of altering the frequency sensitivity of some quantum sensors, but these require the use of large devices and strong magnetic fields that blur out the fine details and make it impossible to achieve the very high resolution that the new system offers. In such systems today, Wang says, “you need to use a strong magnetic field to tune the sensor, but that magnetic field can potentially break the quantum material properties, which can influence the phenomena that you want to measure.”
    The system may open up new applications in biomedical fields, according to Cappellaro, because it can make accessible a range of frequencies of electrical or magnetic activity at the level of a single cell. It would be very difficult to get useful resolution of such signals using current quantum sensing systems, she says. For example, it may be possible to use this system to detect output signals from a single neuron in response to some stimulus; such signals typically include a great deal of noise, which makes them hard to isolate.
    The system could also be used to characterize in detail the behavior of exotic materials such as 2D materials that are being intensely studied for their electromagnetic, optical, and physical properties.
    In ongoing work, the team is exploring the possibility of finding ways to expand the system to be able to probe a range of frequencies at once, rather than the present system’s single frequency targeting. They will also be continuing to define the system’s capabilities using more powerful quantum sensing devices at Lincoln Laboratory, where some members of the research team are based.
    The team included Yi-Xiang Liu at MIT and Jennifer Schloss, Scott Alsid and Danielle Braje at Lincoln Laboratory. The work was supported by the Defense Advanced Research Projects Agency (DARPA) and Q-Diamond.

  • How the brain interprets motion while in motion

    Imagine you’re sitting on a train. You look out the window and see another train on an adjacent track that appears to be moving. But, has your train stopped while the other train is moving, or are you moving while the other train is stopped?
    The same sensory experience — viewing a train — can yield two very different perceptions, leading you to feel either a sensation of yourself in motion or a sensation of being stationary while an object moves around you.
    Human brains are constantly faced with such ambiguous sensory inputs. In order to resolve the ambiguity and correctly perceive the world, our brains employ a process known as causal inference.
    Causal inference is a key to learning, reasoning, and decision making, but researchers currently know little about the neurons involved in the process.
    In a new paper published in the journal eLife, researchers at the University of Rochester, including Greg DeAngelis, the George Eastman Professor of Brain and Cognitive Sciences, and his colleagues at Sungkyunkwan University and New York University, describe a novel neural mechanism involved in causal inference that helps the brain detect object motion during self-motion.
    The research offers new insights into how the brain interprets sensory information and may have applications in designing artificial intelligence devices and developing treatments and therapies to treat brain disorders.
    “While much has been learned previously about how the brain processes visual motion, most laboratory studies of neurons have ignored the complexities introduced by self-motion,” DeAngelis says. “Under natural conditions, identifying how objects move in the world is much more challenging for the brain.”
    Now imagine a still, crouching lion waiting to spot prey; it is easy for the lion to spot a moving gazelle. Just like the still lion, when an observer is stationary, it is easy for her to detect when objects move in the world, because motion in the world directly maps to motion on the retina. However, when the observer is also moving, her eyes are taking in motion everywhere on her retina as she moves relative to objects in the scene. This causes a complex pattern of motion that makes it more difficult for the brain to detect when an object is moving in the world and when it is stationary; in this case, the brain has to distinguish between image motion that results from the observer herself versus image motion of other objects around the self.
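    A compact way to see the computation this implies (a toy illustration, not the paper's neural model): subtract the optic flow that self-motion alone would produce from the observed flow, and attribute large residuals to objects moving in the world:

```python
import numpy as np

H, W = 4, 4
# optic flow that the observer's own motion alone would produce at each
# retinal location (values assumed for illustration)
ego_flow = np.full((H, W, 2), [1.0, 0.0])   # uniform rightward image motion
observed = ego_flow.copy()
observed[2, 3] += [0.0, 2.0]                # one patch also moves in the world

# attribute flow that self-motion cannot explain to a moving object
residual = np.linalg.norm(observed - ego_flow, axis=-1)
print(np.argwhere(residual > 0.5))          # [[2 3]]: the moving object
```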
    The researchers discovered a type of neuron in the brain that has a particular combination of response properties, which makes the neuron well-suited to contribute to the task of distinguishing between self-motion and the motion of other objects.
    “Although the brain probably uses multiple tricks to solve this problem, this new mechanism has the advantage that it can be performed in parallel at each local region of the visual field, and thus may be faster to implement than more global processes,” DeAngelis says. “This mechanism might also be applicable to autonomous vehicles, which also need to rapidly detect moving objects.”
    The research may additionally have important applications in developing treatments and therapies for neural disorders such as autism and schizophrenia, conditions in which causal inference is thought to be impaired.
    “While the project is basic science focused on understanding the fundamental mechanisms of causal inference, this knowledge should eventually be applicable to the treatment of these disorders,” DeAngelis says.
    Story Source:
    Materials provided by University of Rochester. Original written by Lindsey Valich.