More stories

  • Physicists arrange atoms in extremely close proximity

    Proximity is key for many quantum phenomena, as interactions between atoms are stronger when the particles are close. In many quantum simulators, scientists arrange atoms as close together as possible to explore exotic states of matter and build new quantum materials.
    They typically do this by cooling the atoms to a standstill, then using laser light to position the particles as close as 500 nanometers apart — a limit that is set by the wavelength of light. Now, MIT physicists have developed a technique that allows them to arrange atoms in much closer proximity, down to a mere 50 nanometers. For context, a red blood cell is about 7,000 nanometers wide.
    The physicists demonstrated the new approach in experiments with dysprosium, which is the most magnetic atom in nature. They used the new approach to manipulate two layers of dysprosium atoms, and positioned the layers precisely 50 nanometers apart. At this extreme proximity, the magnetic interactions were 1,000 times stronger than if the layers were separated by 500 nanometers.
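    The factor of 1,000 is what the standard scaling of magnetic dipole-dipole interactions predicts: their strength falls off as the cube of the separation, so a tenfold reduction in distance yields a thousandfold enhancement.
    ```latex
    % Dipole-dipole interactions scale as the inverse cube of the separation r,
    % so moving from 500 nm down to 50 nm boosts them a thousandfold:
    E_{\mathrm{dd}} \propto \frac{1}{r^{3}},
    \qquad
    \frac{E_{\mathrm{dd}}(50\,\mathrm{nm})}{E_{\mathrm{dd}}(500\,\mathrm{nm})}
    = \left(\frac{500\,\mathrm{nm}}{50\,\mathrm{nm}}\right)^{3} = 1000.
    ```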
    What’s more, the scientists were able to measure two new effects caused by the atoms’ proximity. Their enhanced magnetic forces caused “thermalization,” or the transfer of heat from one layer to another, as well as synchronized oscillations between layers. These effects petered out as the layers were spaced farther apart.
    “We have gone from positioning atoms from 500 nanometers to 50 nanometers apart, and there is a lot you can do with this,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. “At 50 nanometers, the behavior of atoms is so much different that we’re really entering a new regime here.”
    Ketterle and his colleagues say the new approach can be applied to many other atoms to study quantum phenomena. For their part, the group plans to use the technique to manipulate atoms into configurations that could generate the first purely magnetic quantum gate — a key building block for a new type of quantum computer.
    The team has published their results today in the journal Science. The study’s co-authors include lead author and physics graduate student Li Du, along with Pierre Barral, Michael Cantara, Julius de Hond, and Yu-Kun Lu — all members of the MIT-Harvard Center for Ultracold Atoms, the Department of Physics, and the Research Laboratory of Electronics at MIT.

    Peaks and valleys
    To manipulate and arrange atoms, physicists typically first cool a cloud of atoms to temperatures approaching absolute zero, then use a system of laser beams to corral the atoms into an optical trap.
    Laser light is an electromagnetic wave with a specific wavelength (the distance between maxima of the electric field) and frequency. The wavelength sets the smallest pattern into which light can be shaped, typically 500 nanometers, the so-called optical resolution limit. Since atoms are attracted to laser light of certain frequencies, they gather at the points of peak laser intensity. For this reason, existing techniques have been limited in how closely they can position atomic particles, and could not be used to explore phenomena that occur at much shorter distances.
    “Conventional techniques stop at 500 nanometers, limited not by the atoms but by the wavelength of light,” Ketterle explains. “We have found now a new trick with light where we can break through that limit.”
    The team’s new approach, like current techniques, starts by cooling a cloud of atoms — in this case, to about 1 microkelvin, just a hair above absolute zero — at which point the atoms come to a near-standstill. Physicists can then use lasers to move the frozen particles into desired configurations.
    Du and his collaborators then worked with two laser beams, each with a different frequency, or color, and a different circular polarization, or direction of the laser’s electric field. When the two beams travel through a super-cooled cloud of atoms, the atoms orient their spins in opposite directions, following one or the other laser’s polarization. The result is that the beams produce two groups of the same atoms, distinguished only by their opposite spins.

    Each laser beam formed a standing wave, a periodic pattern of electric field intensity with a spatial period of 500 nanometers. Due to their different polarizations, each standing wave attracted and corralled one of two groups of atoms, depending on their spin. The lasers could be overlaid and tuned such that the distance between their respective peaks is as small as 50 nanometers, meaning that the atoms gravitating to each respective laser’s peaks would be separated by the same 50 nanometers.
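    As a purely illustrative numerical sketch (not the team’s actual optical configuration), two standing waves with the same 500-nanometer period but a 50-nanometer relative shift produce two interleaved sets of intensity peaks, one set per spin state:
    ```python
    import numpy as np
    from scipy.signal import find_peaks

    d = 500.0     # spatial period of each standing wave, in nanometers
    shift = 50.0  # relative displacement between the two patterns, in nanometers

    x = np.linspace(0.0, 1000.0, 20001)          # position axis, 0.05 nm steps
    I_up = np.cos(np.pi * x / d) ** 2            # pattern trapping one spin state
    I_dn = np.cos(np.pi * (x - shift) / d) ** 2  # shifted pattern for the other spin

    peaks_up = x[find_peaks(I_up)[0]]            # interior maximum near 500 nm
    peaks_dn = x[find_peaks(I_dn)[0]]            # interior maxima near 50 and 550 nm

    # Nearest distance between peaks of the two patterns: the 50 nm layer spacing.
    print(min(abs(b - a) for a in peaks_up for b in peaks_dn))
    ```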
    But in order for this to happen, the lasers would have to be extremely stable and immune to all external noise, such as from shaking or even breathing on the experiment. The team realized they could stabilize both lasers by directing them through an optical fiber, which served to lock the light beams in place in relation to each other.
    “The idea of sending both beams through the optical fiber meant the whole machine could shake violently, but the two laser beams stayed absolutely stable with respect to each other,” Du says.
    Magnetic forces at close range
    As a first test of their new technique, the team used atoms of dysprosium — a rare-earth metal that is one of the strongest magnetic elements in the periodic table, particularly at ultracold temperatures. However, at the scale of atoms, the element’s magnetic interactions are relatively weak at distances of even 500 nanometers. As with common refrigerator magnets, the magnetic attraction between atoms increases with proximity, and the scientists suspected that if their new technique could space dysprosium atoms as close as 50 nanometers apart, they might observe the emergence of otherwise weak interactions between the magnetic atoms.
    “We could suddenly have magnetic interactions, which used to be almost negligible but now are really strong,” Ketterle says.
    The team applied their technique to dysprosium, first super-cooling the atoms, then passing two lasers through to split the atoms into two spin groups, or layers. They then directed the lasers through an optical fiber to stabilize them, and found that indeed, the two layers of dysprosium atoms gravitated to their respective laser peaks, which in effect separated the layers of atoms by 50 nanometers — the closest distance that any ultracold atom experiment has been able to achieve.
    At this extremely close proximity, the atoms’ natural magnetic interactions were significantly enhanced, and were 1,000 times stronger than if they were positioned 500 nanometers apart. The team observed that these interactions resulted in two novel quantum phenomena: collective oscillation, in which one layer’s vibrations caused the other layer to vibrate in sync; and thermalization, in which one layer transferred heat to the other, purely through magnetic fluctuations in the atoms.
    “Until now, heat between atoms could only be exchanged when they were in the same physical space and could collide,” Du notes. “Now we have seen atomic layers, separated by vacuum, and they exchange heat via fluctuating magnetic fields.”
    The team’s results introduce a new technique that can be used to position many types of atoms in close proximity. They also show that atoms placed close enough together can exhibit interesting quantum phenomena that could be harnessed to build new quantum materials and, potentially, magnetically driven atomic systems for quantum computers.
    “We are really bringing super-resolution methods to the field, and it will become a general tool for doing quantum simulations,” Ketterle says. “There are many variants possible, which we are working on.”
    This research was funded, in part, by the National Science Foundation and the Department of Defense.

  • Robots invited to help make wind turbine blades

    Researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) have successfully used robots to help manufacture wind turbine blades, eliminating difficult working conditions for humans and potentially improving the consistency of the product.
    Although robots have been used by the wind energy industry to paint and polish blades, automation has not been widely adopted. Research at the laboratory demonstrates the ability of a robot to trim, grind, and sand blades. Those necessary steps occur after the two sides of the blade are made using a mold and then bonded together.
    “I would consider it a success,” said Hunter Huth, a robotics engineer at NREL and lead author of a newly published paper detailing the work. “Not everything operated as well as we wanted it to, but we learned all the lessons we think we need to make it meet or exceed our expectations.”
    The paper, “Toolpath Generation for Automated Wind Turbine Blade Finishing Operations,” appears in the journal Wind Energy. The coauthors, all from NREL, are Casey Nichols, Scott Lambert, Petr Sindler, Derek Berry, David Barnes, Ryan Beach, and David Snowberg.
    The post-molding operations to manufacture wind turbine blades require workers to perch on scaffolding and wear protective suits including respiratory gear. Automation, the researchers noted, will boost employee safety and well-being and help manufacturers retain skilled labor.
    “This work is critical to enable significant U.S.-based blade manufacturing for the domestic wind turbine market,” said Daniel Laird, director of the National Wind Technology Center at NREL. “Though it may not be obvious, automating some of the labor in blade manufacture can lead to more U.S. jobs because it improves the economics of domestic blades versus imported blades.”
    “The motive of this research was to develop automation methods that could be used to make domestically manufactured blades cost competitive globally,” Huth said. “Currently offshore blades are not produced in the U.S. due to high labor rates. The finishing process is very labor intensive and has a high job-turnover rate due to the harsh nature of the work. By automating the finishing process, domestic offshore blade manufacturing can become more economically viable.”
    The research was conducted at the Composites Manufacturing Education and Technology (CoMET) facility at NREL’s Flatirons Campus. The robot worked on a 5-meter-long blade segment. Wind turbine blades are considerably longer, but because they bend and deflect under their own weight, a robot would have to be programmed to work on the bigger blades section by section.

    The researchers used a series of scans to create a 3D representation of the position of the blade and to precisely identify the front and rear sections of the airfoil — the special shape of the blade that helps air flow smoothly over it. From there, the team programmed the robot to perform a series of tasks, after which it was judged on accuracy and speed. The researchers found areas for improvement, particularly when it came to grinding: the robot ground down too much in some parts of the blade and not enough in others.
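    The paper gives the actual toolpath mathematics; purely as a hypothetical illustration of one ingredient such a pipeline needs, the sketch below resamples an ordered polyline of scanned surface points into evenly spaced waypoints for a finishing pass (the function name, points, and spacing are invented for the example):
    ```python
    import numpy as np

    def evenly_spaced_waypoints(points, spacing):
        """Resample an ordered polyline of 3D scan points into waypoints
        separated by a fixed arc length (all units in meters)."""
        points = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
        s = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
        targets = np.arange(0.0, s[-1], spacing)               # desired stations
        # Linearly interpolate each coordinate as a function of arc length.
        return np.column_stack([np.interp(targets, s, points[:, k]) for k in range(3)])

    # Toy example: a coarsely scanned 5-meter edge, resampled every 10 cm.
    scan = [(0.0, 0.00, 0.0), (1.2, 0.10, 0.0), (2.7, 0.15, 0.1), (5.0, 0.10, 0.2)]
    waypoints = evenly_spaced_waypoints(scan, spacing=0.1)
    print(len(waypoints), waypoints[0], waypoints[-1])
    ```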
    “As we’ve gone through this research, we’ve been moving the goal posts for what this system needs to do to be effective,” Huth said.
    The robot’s performance was not compared against that of a human doing the same tasks.
    Huth said an automated system would provide consistency in blade manufacturing that is not possible when humans are doing all the work. He also said a robot would be able to use “tougher, more aggressive abrasives” than a human could tolerate.

  • Scientists test for quantum nature of gravity

    Einstein’s theory of general relativity explains gravity as a curvature of space and time. The most familiar manifestation of this is the Earth’s gravity, which keeps us on the ground and explains why balls fall to the floor and individuals have weight when stepping on a scale.
    In the field of high-energy physics, on the other hand, scientists study tiny invisible objects that obey the laws of quantum mechanics — characterized by random fluctuations that create uncertainty in the positions and energies of particles like electrons, protons and neutrons. Understanding the randomness of quantum mechanics is required to explain the behavior of matter and light on a subatomic scale.
    For decades, scientists have been trying to unite those two fields of study to achieve a quantum description of gravity. This would combine the physics of curvature associated with general relativity with the mysterious random fluctuations associated with quantum mechanics.
    A new study in Nature Physics from physicists at The University of Texas at Arlington reports on a deep new probe into the interface between these two theories, using ultra-high-energy neutrinos detected by a particle detector embedded deep in the Antarctic ice at the South Pole.
    “The challenge of unifying quantum mechanics with the theory of gravitation remains one of the most pressing unsolved problems in physics,” said co-author Benjamin Jones, associate professor of physics. “If the gravitational field behaves in a similar way to the other fields in nature, its curvature should exhibit random quantum fluctuations.”
    Jones and UTA graduate students Akshima Negi and Grant Parker were part of an international IceCube Collaboration team that included more than 300 scientists from around the U.S., as well as Australia, Belgium, Canada, Denmark, Germany, Italy, Japan, New Zealand, Korea, Sweden, Switzerland, Taiwan and the United Kingdom.
    To search for signatures of quantum gravity, the team monitored neutrinos — unusual but abundant subatomic particles that are electrically neutral and nearly massless — with thousands of sensors distributed through a cubic kilometer of Antarctic ice near the South Pole. The team was able to study more than 300,000 neutrinos, looking for whether these ultra-high-energy particles, as they travel long distances across the Earth, are perturbed by the random quantum fluctuations in spacetime that would be expected if gravity were quantum mechanical.
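    In a simplified two-flavor picture (an illustration, not the collaboration’s full analysis), such fluctuations would act as a decoherence term that damps the usual flavor-oscillation pattern over the propagation distance L:
    ```latex
    % Two-flavor survival probability with decoherence of strength \gamma:
    % for \gamma = 0 the standard oscillation formula is recovered, while
    % quantum-gravity-induced fluctuations would make \gamma > 0.
    P_{\nu_\mu \to \nu_\mu}(L)
    = 1 - \frac{1}{2}\sin^{2}(2\theta)
    \left[ 1 - e^{-\gamma L}\cos\!\left(\frac{\Delta m^{2} L}{2E}\right) \right]
    ```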
    “We searched for those fluctuations by studying the flavors of neutrinos detected by the IceCube Observatory,” Negi said. “Our work resulted in a measurement that was far more sensitive than previous ones (over a million times more, for some of the models), but it did not find evidence of the expected quantum gravitational effects.”
    This non-observation of a quantum geometry of spacetime is a powerful statement about the still-unknown physics that operate at the interface of quantum physics and general relativity.
    “This analysis represents the final chapter in UTA’s nearly decade-long contribution to the IceCube Observatory,” said Jones. “My group is now pursuing new experiments that aim to understand the origin and value of the neutrino’s mass using atomic, molecular and optical physics techniques.”

  • Random robots are more reliable

    Northwestern University engineers have developed a new artificial intelligence (AI) algorithm designed specifically for smart robotics. By helping robots rapidly and reliably learn complex skills, the new method could significantly improve the practicality — and safety — of robots for a range of applications, including self-driving cars, delivery drones, household assistants and automation.
    The new algorithm, called Maximum Diffusion Reinforcement Learning (MaxDiff RL), owes its success to its ability to encourage robots to explore their environments as randomly as possible in order to gain a diverse set of experiences. This “designed randomness” improves the quality of data that robots collect regarding their own surroundings. And, by using higher-quality data, simulated robots demonstrated faster and more efficient learning, improving their overall reliability and performance.
    When tested against other AI platforms, simulated robots using Northwestern’s new algorithm consistently outperformed state-of-the-art models. The new algorithm works so well, in fact, that robots learned new tasks and then successfully performed them within a single attempt — getting it right the first time. This contrasts starkly with current AI models, which enable slower learning through trial and error.
    The research will be published on Thursday (May 2) in the journal Nature Machine Intelligence.
    “Other AI frameworks can be somewhat unreliable,” said Northwestern’s Thomas Berrueta, who led the study. “Sometimes they will totally nail a task, but, other times, they will fail completely. With our framework, as long as the robot is capable of solving the task at all, every time you turn on your robot you can expect it to do exactly what it’s been asked to do. This makes it easier to interpret robot successes and failures, which is crucial in a world increasingly dependent on AI.”
    Berrueta is a Presidential Fellow at Northwestern and a Ph.D. candidate in mechanical engineering at the McCormick School of Engineering. Robotics expert Todd Murphey, a professor of mechanical engineering at McCormick and Berrueta’s adviser, is the paper’s senior author. Berrueta and Murphey co-authored the paper with Allison Pinosky, also a Ph.D. candidate in Murphey’s lab.
    The disembodied disconnect
    To train machine-learning algorithms, researchers and developers use large quantities of data, which humans carefully filter and curate. AI learns from this training data, using trial and error until it reaches optimal results. While this process works well for disembodied systems like ChatGPT and Google Gemini (formerly Bard), it does not work for embodied AI systems like robots. Robots, instead, collect data by themselves — without the luxury of human curators.

    “Traditional algorithms are not compatible with robotics in two distinct ways,” Murphey said. “First, disembodied systems can take advantage of a world where physical laws do not apply. Second, individual failures have no consequences. For computer science applications, the only thing that matters is that it succeeds most of the time. In robotics, one failure could be catastrophic.”
    To address this disconnect, Berrueta, Murphey and Pinosky aimed to develop a novel algorithm that ensures robots collect high-quality data on the go. At its core, MaxDiff RL commands robots to move more randomly in order to collect thorough, diverse data about their environments. By learning through self-curated random experiences, robots acquire the skills necessary to accomplish useful tasks.
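    The published objective is defined rigorously in the paper; as a loose, hypothetical sketch of the general idea of rewarding diverse experience (not the authors’ algorithm), one can add a novelty bonus based on the distance from a new state to previously visited states:
    ```python
    import numpy as np

    def novelty_bonus(visited, state, k=5):
        """Mean distance from `state` to its k nearest previously visited
        states; larger values mean the state is more novel."""
        if len(visited) < k:
            return 1.0
        dists = np.linalg.norm(np.asarray(visited) - state, axis=1)
        return float(np.sort(dists)[:k].mean())

    # Toy rollout: the agent is paid for approaching a goal *and* for spreading
    # its visited states out, which keeps the collected data diverse.
    rng = np.random.default_rng(0)
    goal = np.array([3.0, 0.0])
    visited, state = [], np.zeros(2)
    for t in range(1000):
        action = 0.1 * rng.normal(size=2)  # stand-in for a learned policy
        state = state + action             # trivial point-mass dynamics
        reward = -np.linalg.norm(state - goal) + 0.5 * novelty_bonus(visited, state)
        visited.append(state.copy())
    ```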
    Getting it right the first time
    To test the new algorithm, the researchers compared it against current, state-of-the-art models. Using computer simulations, the researchers asked simulated robots to perform a series of standard tasks. Across the board, robots using MaxDiff RL learned faster than the other models. They also correctly performed tasks much more consistently and reliably than others.
    Perhaps even more impressive: robots using the MaxDiff RL method often succeeded at correctly performing a task in a single attempt, even when they started with no prior knowledge.
    “Our robots were faster and more agile — capable of effectively generalizing what they learned and applying it to new situations,” Berrueta said. “For real-world applications where robots can’t afford endless time for trial and error, this is a huge benefit.”
    Because MaxDiff RL is a general algorithm, it can be used for a variety of applications. The researchers hope it addresses foundational issues holding back the field, ultimately paving the way for reliable decision-making in smart robotics.

    “This doesn’t have to be used only for robotic vehicles that move around,” Pinosky said. “It also could be used for stationary robots — such as a robotic arm in a kitchen that learns how to load the dishwasher. As tasks and physical environments become more complicated, the role of embodiment becomes even more crucial to consider during the learning process. This is an important step toward real systems that do more complicated, more interesting tasks.”
    The study, “Maximum diffusion reinforcement learning,” was supported by the U.S. Army Research Office (grant number W911NF-19-1-0233) and the U.S. Office of Naval Research (grant number N00014-21-1-2706).

  • Significant new discovery in teleportation research — Noise can improve the quality of quantum teleportation

    In teleportation, the state of a quantum particle, or qubit, is transferred from one location to another without sending the particle itself. This transfer requires quantum resources, such as entanglement between an additional pair of qubits. In an ideal case, the transfer and teleportation of the qubit state can be done perfectly. However, real-world systems are vulnerable to noise and disturbances — and this reduces and limits the quality of the teleportation.
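    For reference, the textbook protocol behind this: writing the joint state of the input qubit (1) and an entangled pair (2, 3) in the Bell basis shows that, after the sender measures qubits 1 and 2, the receiver’s qubit is one known correction away from the input state:
    ```latex
    % |\psi> is the unknown qubit state; |\Phi^+> is the shared Bell pair.
    % Each measurement outcome tells the receiver which Pauli correction to apply.
    |\psi\rangle_{1}\,|\Phi^{+}\rangle_{23}
    = \tfrac{1}{2}\Big[
       |\Phi^{+}\rangle_{12}\,|\psi\rangle_{3}
     + |\Phi^{-}\rangle_{12}\,Z|\psi\rangle_{3}
     + |\Psi^{+}\rangle_{12}\,X|\psi\rangle_{3}
     + |\Psi^{-}\rangle_{12}\,XZ|\psi\rangle_{3}
    \Big]
    ```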
    Researchers from the University of Turku, Finland, and the University of Science and Technology of China, Hefei, have now proposed a theoretical idea and carried out corresponding experiments to overcome this problem. In other words, the new approach enables high-quality teleportation despite the presence of noise.
    “The work is based on an idea of distributing entanglement — prior to running the teleportation protocol — beyond the used qubits, i.e., exploiting the hybrid entanglement between different physical degrees of freedom,” says Professor Jyrki Piilo from the University of Turku.
    Conventionally, the polarisation of photons has been used for the entanglement of qubits in teleportation, while the current approach exploits the hybrid entanglement between the photons’ polarisation and frequency.
    “This allows for a significant change in how the noise influences the protocol, and as a matter of fact our discovery reverses the role of the noise from being harmful to being beneficial to teleportation,” Piilo describes.
    With conventional qubit entanglement in the presence of noise, the teleportation protocol does not work. In a case where there is initially hybrid entanglement and no noise, the teleportation does not work either.
    “However, when we have hybrid entanglement and add noise, the teleportation and quantum state transfer occur in an almost perfect manner,” says Dr Olli Siltanen, whose doctoral dissertation presented the theoretical part of the current research.

    In general, the discovery enables almost ideal teleportation despite the presence of a certain type of noise when using photons for teleportation.
    “While we have done numerous experiments on different facets of quantum physics with photons in our laboratory, it was very thrilling and rewarding to see this very challenging teleportation experiment successfully completed,” says Dr Zhao-Di Liu from the University of Science and Technology of China, Hefei.
    “This is a significant proof-of-principle experiment in the context of one of the most important quantum protocols,” says Professor Chuan-Feng Li from the University of Science and Technology of China, Hefei.
    Teleportation has important applications, e.g., in transmitting quantum information, and it is of utmost importance to have approaches that protect this transmission from noise and can be used for other quantum applications. The results of the current study can be considered as basic research that carries significant fundamental importance and opens intriguing pathways for future work to extend the approach to general types of noise sources and other quantum protocols.

  • Toxic chemicals can be detected with new AI method

    Swedish researchers at Chalmers University of Technology and the University of Gothenburg have developed an AI method that improves the identification of toxic chemicals — based solely on knowledge of the molecular structure. The method can contribute to better control and understanding of the ever-growing number of chemicals used in society, and can also help reduce the number of animal tests.
    The use of chemicals in society is extensive, and they occur in everything from household products to industrial processes. Many chemicals reach our waterways and ecosystems, where they may cause negative effects on humans and other organisms. One example is PFAS, a group of problematic substances that have recently been found in concerning concentrations in both groundwater and drinking water. PFAS have been used, for example, in firefighting foam and in many consumer products.
    Negative effects on humans and the environment arise despite extensive chemical regulations, which often require time-consuming animal testing to demonstrate when chemicals can be considered safe. In the EU alone, more than two million animals are used annually to comply with various regulations. At the same time, new chemicals are developed at a rapid pace, and it is a major challenge to determine which of these need to be restricted due to their toxicity to humans or the environment.
    Valuable help in the development of chemicals
    The new method developed by the Swedish researchers utilises artificial intelligence for rapid and cost-effective assessment of chemical toxicity. It can therefore be used to identify toxic substances at an early phase and help reduce the need for animal testing.
    “Our method is able to predict whether a substance is toxic or not based on its chemical structure. It has been developed and refined by analysing large datasets from laboratory tests performed in the past. The method has thereby been trained to make accurate assessments for previously untested chemicals,” says Mikael Gustavsson, researcher at the Department of Mathematical Sciences at Chalmers University of Technology, and at the Department of Biology and Environmental Sciences at the University of Gothenburg.
    “There are currently more than 100,000 chemicals on the market, but only a small part of these have a well-described toxicity towards humans or the environment. To assess the toxicity of all these chemicals using conventional methods, including animal testing, is not practically possible. Here, we see that our method can offer a new alternative,” says Erik Kristiansson, professor at the Department of Mathematical Sciences at Chalmers and at the University of Gothenburg.

    The researchers believe that the method can be very useful within environmental research, as well as for authorities and companies that use or develop new chemicals. They have therefore made it open and publicly available.
    Broader and more accurate than today’s computational tools
    Computational tools for finding toxic chemicals already exist, but so far they have had applicability domains that are too narrow, or accuracy that is too low, to replace laboratory tests to any great extent. In their study, the researchers compared their method with three other commonly used computational tools and found that the new method is both more accurate and more generally applicable.
    “The type of AI we use is based on advanced deep learning methods,” says Erik Kristiansson. “Our results show that AI-based methods are already on par with conventional computational approaches, and as the amount of available data continues to increase, we expect AI methods to improve further. Thus, we believe that AI has the potential to markedly improve computational assessment of chemical toxicity.”
    The researchers predict that AI systems will be able to replace laboratory tests to an increasingly greater extent.
    “This would mean that the number of animal experiments could be reduced, as well as the economic costs when developing new chemicals. The possibility to rapidly prescreen large and diverse bodies of data can therefore aid the development of new and safer chemicals and help find substitutes for toxic substances that are currently in use. We thus believe that AI-based methods will help reduce the negative impacts of chemical pollution on humans and on ecosystem services,” says Erik Kristiansson.

    More about: the new AI method
    The method is based on transformers, an AI model for deep learning that was originally developed for language processing. ChatGPT — whose abbreviation stands for Generative Pre-trained Transformer — is one example of its applications.
    The model has recently also proved highly efficient at capturing information from chemical structures. Transformers can identify properties in the structure of molecules that cause toxicity, in a more sophisticated way than has been previously possible.
    Using this information, the toxicity of the molecule can then be predicted by a deep neural network. Neural networks and transformers belong to the type of AI that continuously improves itself by using training data — in this case, large amounts of data from previous laboratory tests of the effects of thousands of different chemicals on various animals and plants.
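    As a rough, generic sketch of that kind of architecture (not the group’s published model; the vocabulary, layer sizes, and names here are invented), a small transformer encoder can embed a SMILES string character by character and feed a pooled representation to a classifier head:
    ```python
    import torch
    import torch.nn as nn

    class ToyToxicityModel(nn.Module):
        """Minimal transformer-plus-classifier over SMILES characters."""
        def __init__(self, vocab_size, d_model=128, nhead=4, nlayers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)
            self.head = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(),
                                      nn.Linear(64, 1))  # one toxicity logit

        def forward(self, tokens):                # tokens: (batch, length) ints
            h = self.encoder(self.embed(tokens))  # contextualized characters
            return self.head(h.mean(dim=1))       # mean-pool, then classify

    # Toy usage with a made-up character vocabulary; "CCO" is ethanol's SMILES.
    vocab = {ch: i for i, ch in enumerate("#()+-=123456789CHNOPSclno[]@")}
    tokens = torch.tensor([[vocab[c] for c in "CCO"]])
    model = ToyToxicityModel(vocab_size=len(vocab))
    print(torch.sigmoid(model(tokens)))  # untrained, so an arbitrary probability
    ```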

  • Unveiling a polarized world — in a single shot

    Think of all the information we get based on how an object interacts with wavelengths of light — a.k.a. color. Color can tell us if food is safe to eat or if a piece of metal is hot. Color is an important diagnostic tool in medicine, helping practitioners diagnose diseased tissue, inflammation, or problems in blood flow.
    Companies have invested heavily to improve color in digital imaging, but wavelength is just one property of light. Polarization — how the electric field oscillates as light propagates — is also rich with information, but polarization imaging remains mostly confined to table-top laboratory settings, relying on traditional optics such as waveplates and polarizers on bulky rotational mounts.
    Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a compact, single-shot polarization imaging system that can provide a complete picture of polarization. By using just two thin metasurfaces, the imaging system could unlock the vast potential of polarization imaging for a range of existing and new applications, including biomedical imaging, augmented and virtual reality systems, and smartphones.
    The research is published in Nature Photonics.
    “This system, which is free of any moving parts or bulk polarization optics, will empower applications in real-time medical imaging, material characterization, machine vision, target detection, and other important areas,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the paper.
    In previous research, Capasso and his team developed a first-of-its-kind compact polarization camera to capture so-called Stokes images, which record the polarization signature of light reflecting off an object, without controlling the incident illumination.
    “Just as the shade or even the color of an object can appear different depending on the color of the incident illumination, the polarization signature of an object depends on the polarization profile of the illumination,” said Aun Zaidi, a recent PhD graduate from Capasso’s group and first author of the paper. “In contrast to conventional polarization imaging, ‘active’ polarization imaging, known as Mueller matrix imaging, can capture the most complete polarization response of an object by controlling the incident polarization.”
    Currently, Mueller matrix imaging requires a complex optical setup with multiple rotating waveplates and polarizers that sequentially captures a series of images, which are then combined into a matrix representation of the image.
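    Concretely, the Mueller matrix is the 4 × 4 real matrix that maps the Stokes vector of the incident light to that of the light leaving the object, so measuring it captures the object’s complete polarization response:
    ```latex
    % Stokes vector: I (total intensity), Q and U (linear polarization), V (circular).
    \begin{pmatrix} I' \\ Q' \\ U' \\ V' \end{pmatrix}
    =
    \begin{pmatrix}
    m_{00} & m_{01} & m_{02} & m_{03} \\
    m_{10} & m_{11} & m_{12} & m_{13} \\
    m_{20} & m_{21} & m_{22} & m_{23} \\
    m_{30} & m_{31} & m_{32} & m_{33}
    \end{pmatrix}
    \begin{pmatrix} I \\ Q \\ U \\ V \end{pmatrix}
    ```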

    The simplified system developed by Capasso and his team uses two extremely thin metasurfaces — one to illuminate an object and the other to capture and analyze the light on the other side.
    The first metasurface generates what’s known as polarized structured light, in which the polarization is designed to vary spatially in a unique pattern. When this polarized light reflects off or transmits through the object being illuminated, the polarization profile of the beam changes. That change is captured and analyzed by the second metasurface to construct the final image — in a single shot.
    The technique allows for real-time advanced imaging, which is important for applications such as endoscopic surgery, facial recognition in smartphones, and eye tracking in AR/VR systems. It could also be combined with powerful machine learning algorithms for applications in medical diagnostics, material classification and pharmaceuticals.
    “We have brought together two seemingly separate fields of structured light and polarized imaging to design a single system that captures the most complete polarization information. Our use of nanoengineered metasurfaces, which replace many components that would traditionally be required in a system such as this, greatly simplifies its design,” said Zaidi.
    “Our single-shot and compact system provides a viable pathway for the widespread adoption of this type of imaging to empower applications requiring advanced imaging,” said Capasso.
    The Harvard Office of Technology Development has protected the intellectual property associated with this project out of Prof. Capasso’s lab and licensed the technology to Metalenz for further development.
    The research was co-authored by Noah Rubin, Maryna Meretska, Lisa Li, Ahmed Dorrah and Joon-Suh Park. It was supported by the Air Force Office of Scientific Research under award Number FA9550-21-1-0312, the Office of Naval Research (ONR) under award number N00014-20-1-2450, the National Aeronautics and Space Administration (NASA) under award numbers 80NSSC21K0799 and 80NSSC20K0318, and the National Science Foundation under award no. ECCS-2025158.

  • This highly reflective black paint makes objects more visible to autonomous cars

    Driving at night might be a scary challenge for a new driver, but with hours of practice it soon becomes second nature. For self-driving cars, however, practice may not be enough because the lidar sensors that often act as these vehicles’ “eyes” have difficulty detecting dark-colored objects. Research published in ACS Applied Materials & Interfaces describes a highly reflective black paint that could help these cars see dark objects and make autonomous driving safer.
    Lidar, short for light detection and ranging, is a system used in a variety of applications, including geologic mapping and self-driving vehicles. The system works like echolocation, but instead of emitting sound waves, lidar emits tiny pulses of near-infrared light. The light pulses bounce off objects and back to the sensor, allowing the system to map the 3D environment it’s in. But lidar falls short when objects absorb more of that near-infrared light than they reflect, which can occur on black-painted surfaces. Lidar can’t detect these dark objects on its own, so one common solution is to have the system rely on other sensors or software to fill in the information gaps. However, this solution could still lead to accidents in some situations. Rather than reinventing the lidar sensors, though, Chang-Min Yoon and colleagues wanted to make dark objects easier to detect with existing technology by developing a specially formulated, highly reflective black paint.
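    The ranging step described here is simple time-of-flight arithmetic: the round-trip travel time of a reflected pulse fixes the distance. A minimal sketch:
    ```python
    C = 299_792_458.0  # speed of light in m/s

    def lidar_range_m(round_trip_s):
        """Distance to a target from a pulse's round-trip (out-and-back) time."""
        return C * round_trip_s / 2.0

    # A pulse that returns after about 667 nanoseconds traveled ~200 m in total,
    # so the reflecting object is roughly 100 m away.
    print(lidar_range_m(667e-9))  # ~99.98 meters
    ```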
    To produce the new paint, the team first formed a thin layer of titanium dioxide (TiO2) on small fragments of glass. Then the glass was etched away with hydrofluoric acid, leaving behind a hollow layer of white, highly reflective TiO2. This was reduced with sodium borohydride to produce a black material that maintained its reflective qualities. Mixed with varnish, the material could be applied as a paint. The team next tested the new paint with two types of commercially available lidar sensors: a mirror-based sensor and a 360-degree rotating sensor. For comparison, a traditional carbon black-based version was also evaluated. Both sensors easily recognized the specially formulated, TiO2-based paint but did not readily detect the traditional paint. The researchers say that their highly reflective material could help improve safety on the roads by making dark objects more visible to autonomous vehicles already equipped with existing lidar technology.
    The authors acknowledge funding from the Korea Ministry of SMEs and Startups and the National Research Foundation of Korea.