More stories


    Fastest industry-standard optical fibre

    An optical fibre about the thickness of a human hair can now carry the equivalent of more than 10 million fast home internet connections running at full capacity.
    A team of Japanese, Australian, Dutch, and Italian researchers has set a new speed record for an industry-standard optical fibre, achieving 1.7 Petabits per second over a 67 km length of fibre. The fibre, which contains 19 cores that can each carry a signal, meets the global standards for fibre size, ensuring that it can be adopted without massive infrastructure change. And it uses less digital processing, greatly reducing the power required per bit transmitted.
    Macquarie University researchers supported the invention by developing a 3D laser-printed glass chip that allows low loss access to the 19 streams of light carried by the fibre and ensures compatibility with existing transmission equipment.
    The fibre was developed by the Japanese National Institute of Information and Communications Technology (NICT, Japan) and Sumitomo Electric Industries, Ltd. (SEI, Japan) and the work was performed in collaboration with the Eindhoven University of Technology, University of L’Aquila, and Macquarie University.
    All the world’s internet traffic is carried through optical fibres which are each 125 microns thick (comparable to the thickness of a human hair). These industry standard fibres link continents, data centres, mobile phone towers, satellite ground stations and our homes and businesses.
    Back in 1988, the first subsea fibre-optic cable across the Atlantic had a capacity of 20 Megabits per second, or 40,000 telephone calls, in two pairs of fibres. Known as TAT 8, it came just in time to support the development of the World Wide Web. But it was soon at capacity.

    The latest generation of subsea cables such as the Grace Hopper cable, which went into service in 2022, carries 22 Terabits per second in each of 16 fibre pairs. That’s a million times more capacity than TAT 8, but it’s still not enough to meet the demand for streaming TV, video conferencing and all our other global communication.
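    The capacity comparisons above can be sanity-checked with quick arithmetic. Note that the 100 Mb/s figure for a "fast home connection" is our assumption for illustration; it is not stated in the article.

```python
# Figures quoted in the article, expressed in bits per second.
tat8_bps = 20e6            # TAT 8 transatlantic cable (1988): 20 Megabits/s
grace_hopper_bps = 22e12   # Grace Hopper cable (2022): 22 Terabits/s per fibre pair
record_bps = 1.7e15        # new 19-core fibre record: 1.7 Petabits/s

# "a million times more capacity than TAT 8" (per fibre pair)
print(grace_hopper_bps / tat8_bps)   # 1.1 million
# "more than 10 million fast home internet connections" --
# assuming a fast home connection of ~100 Mb/s (our assumption)
print(record_bps / 100e6)            # 17 million
```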
    “Decades of optics research around the world has allowed the industry to push more and more data through single fibres,” says Dr Simon Gross from Macquarie University’s School of Engineering. “They’ve used different colours, different polarisations, light coherence and many other tricks to manipulate light.”
    Most current fibres have a single core that carries multiple light signals. But this current technology is practically limited to only a few Terabits per second due to interference between the signals.
    “We could increase capacity by using thicker fibres. But thicker fibres would be less flexible, more fragile, less suitable for long-haul cables, and would require massive reengineering of optical fibre infrastructure,” says Dr Gross.
    “We could just add more fibres. But each fibre adds equipment overhead and cost and we’d need a lot more fibres.”
    To meet the exponentially growing demand for movement of data, telecommunication companies need technologies that offer greater data flow for reduced cost.

    The new fibre contains 19 cores that can each carry a signal.
    “Here at Macquarie University, we’ve created a compact glass chip with a waveguide pattern etched into it by a 3D laser-printing technology. It allows signals to be fed into the 19 individual cores of the fibre simultaneously, with uniformly low losses. Other approaches are lossy and limited in the number of cores,” says Dr Gross.
    “It’s been exciting to work with the Japanese leaders in optical fibre technology. I hope we’ll see this technology in subsea cables within five to 10 years.”
    Another researcher involved in the experiment, Professor Michael Withford from Macquarie University’s School of Mathematical and Physical Sciences, believes this breakthrough in optical fibre technology has far-reaching implications.
    “The optical chip builds on decades of research into optics at Macquarie University,” says Professor Withford. “The underlying patented technology has many applications including finding planets orbiting distant stars, disease detection, even identifying damage in sewage pipes.”


    Symmetry breaking by ultrashort light pulses opens new quantum pathways for coherent phonons

    Atoms in a crystal form a regular lattice, in which they can move over small distances from their equilibrium positions. Such phonon excitations are represented by quantum states. A superposition of phonon states defines a so-called phonon wavepacket, which is connected with collective coherent oscillations of the atoms in the crystal. Coherent phonons can be generated by exciting the crystal with a femtosecond light pulse, and their motions in space and time can be followed by scattering an ultrashort x-ray pulse from the excited material. The pattern of scattered x-rays gives direct insight into the momentary positions of, and distances between, the atoms. A sequence of such patterns provides a ‘movie’ of the atomic motions.
    The physical properties of coherent phonons are determined by the symmetry of the crystal, which represents a periodic arrangement of identical unit cells. Weak optical excitation does not change the symmetry properties of the crystal. In this case, coherent phonons with identical atomic motions in all unit cells are excited. In contrast, strong optical excitation can break the symmetry of the crystal and make atoms in adjacent unit cells oscillate differently. While this mechanism holds potential for accessing other phonons, it has not been explored so far.
    In the journal Physical Review B, researchers from the Max Born Institute in Berlin, in collaboration with researchers from the University of Duisburg-Essen, have demonstrated a novel concept for exciting and probing coherent phonons in crystals with a transiently broken symmetry. The key to this concept lies in reducing the symmetry of a crystal by appropriate optical excitation, as has been shown with the prototypical crystalline semimetal bismuth (Bi).
    Ultrafast mid-infrared excitation of electrons in Bi modifies the spatial charge distribution and, thus, transiently reduces the crystal symmetry. In the reduced symmetry, new quantum pathways for the excitation of coherent phonons open up. The symmetry reduction causes a doubling of the unit-cell size, from a unit cell containing two Bi atoms to one containing four. In addition to unidirectional atomic motion, the four-atom unit cell allows for coherent phonon wavepackets with bidirectional atomic motions.
    Probing the transient crystal structure directly by femtosecond x-ray diffraction reveals oscillations of diffracted intensity, which persist on a picosecond time scale. The oscillations arise from coherent wavepacket motions along phonon coordinates in the crystal of reduced symmetry. Their frequency of 2.6 THz differs from that of the phonon oscillations seen at low excitation levels. Interestingly, this behavior occurs only above a threshold of the optical pump fluence and reflects the highly nonlinear, so-called non-perturbative character of the optical excitation process.
    In summary, optically induced symmetry breaking allows for modifying the excitation spectrum of a crystal on ultrashort time scales. These results may pave the way for steering material properties transiently and, thus, implementing new functions in optoacoustics and optical switching.


    Self-driving cars lack social intelligence in traffic

    Should I go or give way? It is one of the most basic questions in traffic, whether merging onto a motorway or at the door of the metro. It is a decision humans typically make quickly and intuitively, because doing so relies on social interactions trained from the time we begin to walk.
    Self-driving cars, on the other hand, which are already on the road in several parts of the world, still struggle when navigating these social interactions in traffic. This has been demonstrated in new research conducted at the University of Copenhagen’s Department of Computer Science. Researchers analyzed an array of videos uploaded by YouTube users of self-driving cars in various traffic situations. The results show that self-driving cars have a particularly tough time understanding when to ‘yield’ — when to give way and when to drive on.
    “The ability to navigate in traffic is based on much more than traffic rules. Social interactions, including body language, play a major role when we signal each other in traffic. This is where the programming of self-driving cars still falls short. That is why it is difficult for them to consistently understand when to stop and when someone is stopping for them, which can be both annoying and dangerous,” says Professor Barry Brown, who has studied the evolution of self-driving car road behavior for the past five years.
    Sorry, it’s a self-driving car!
    Companies like Waymo and Cruise have launched taxi services with self-driving cars in parts of the United States. Tesla has rolled out its FSD model (full self-driving) to about 100,000 volunteer drivers in the US and Canada. And the media is brimming with stories about how well self-driving cars perform. But according to Professor Brown and his team, their actual road performance is a well-kept trade secret that very few have insight into. Therefore, the researchers performed in-depth analyses using 18 hours of YouTube footage filmed by enthusiasts testing cars from the back seat.
    One of their video examples shows a family of four standing by the curb of a residential street in the United States. There is no pedestrian crossing, but the family would like to cross the road. As the driverless car approaches, it slows, causing the two adults in the family to wave their hands as a sign for the car to drive on. Instead, the car stops right next to them for 11 seconds. Then, as the family begins walking across the road, the car starts moving again, causing them to jump back onto the sidewalk, whereupon the person in the back seat rolls down the window and yells, “Sorry, self-driving car!”
    “The situation is similar to the main problem we found in our analysis and demonstrates the inability of self-driving cars to understand social interactions in traffic. The driverless vehicle stops so as to not hit pedestrians, but ends up driving into them anyway because it doesn’t understand the signals. Besides creating confusion and wasted time in traffic, it can also be downright dangerous,” says Professor Brown.

    A drive in foggy Frisco
    In tech-centric San Francisco, the performance of self-driving cars can be judged up close. Here, driverless cars have been unleashed in several parts of the city as buses and taxis, navigating the hilly streets among people and other natural phenomena. And according to the researcher, this has created plenty of resistance among the city’s residents:
    “Self-driving cars are causing traffic jams and problems in San Francisco because they react inappropriately to other road users. Recently, the city’s media wrote of a chaotic traffic event caused by self-driving cars due to fog. Fog caused the self-driving cars to overreact, stop and block traffic, even though fog is extremely common in the city,” says Professor Brown.
    Robotic cars have been in the works for 10 years, and the industry behind them has spent over DKK 40 billion on their development. Yet the outcome has been cars that still make many driving mistakes, blocking other drivers and disrupting the smooth flow of traffic.
    Why do you think it’s so difficult to program self-driving cars to understand social interactions in traffic?
    “I think that part of the answer is that we take the social element for granted. We don’t think about it when we get into a car and drive — we just do it automatically. But when it comes to designing systems, you need to describe everything we take for granted and incorporate it into the design. The car industry could learn from having a more sociological approach. Understanding social interactions that are part of traffic should be used to design self-driving cars’ interactions with other road users, similar to how research has helped improve the usability of mobile phones and technology more broadly.”
    About the study: The researchers analyzed 18 hours of video footage of self-driving cars from 70 different YouTube videos. Using different video-analysis techniques, the researchers studied the video sequences in depth rather than making a broader, superficial analysis. The study, titled “The Halting Problem: Video analysis of self-driving cars in traffic,” has just been presented at the 2023 CHI Conference on Human Factors in Computing Systems, where it won the conference’s best paper award. The study was conducted by Barry Brown of the University of Copenhagen and Stockholm University, Mathias Broth of Linköping University, and Erik Vinkhuyzen of King’s College London.


    New tool may help spot ‘invisible’ brain damage in college athletes

    An artificial intelligence computer program that processes magnetic resonance imaging (MRI) can accurately identify changes in brain structure that result from repeated head injury, a new study in student athletes shows. These variations have not been captured by traditional medical imaging such as computerized tomography (CT) scans. The new technology, researchers say, may help design new diagnostic tools to better understand subtle brain injuries that accumulate over time.
    Experts have long known about potential risks of concussion among young athletes, particularly for those who play high-contact sports such as football, hockey, and soccer. Evidence is now mounting that repeated head impacts, even if they at first appear mild, may add up over many years and lead to cognitive loss. While advanced MRI identifies microscopic changes in brain structure that result from head trauma, researchers say the scans produce vast amounts of data that is difficult to navigate.
    Led by researchers in the Department of Radiology at NYU Grossman School of Medicine, the new study showed for the first time that the new tool, using an AI technique called machine learning, could accurately distinguish between the brains of male athletes who played contact sports like football versus noncontact sports like track and field. The results linked repeated head impacts with tiny, structural changes in the brains of contact-sport athletes who had not been diagnosed with a concussion.
    “Our findings uncover meaningful differences between the brains of athletes who play contact sports compared to those who compete in noncontact sports,” said study senior author and neuroradiologist Yvonne Lui, MD. “Since we expect these groups to have similar brain structure, these results suggest that there may be a risk in choosing one sport over another,” adds Lui, a professor and vice chair for research in the Department of Radiology at NYU Langone Health.
    Lui adds that beyond spotting potential damage, the machine-learning technique used in their investigation may also help experts to better understand the underlying mechanisms behind brain injury.
    The new study, which was published online May 22 in The Neuroradiology Journal, involved hundreds of brain images from 36 contact-sport college athletes (mostly football players) and 45 noncontact-sport college athletes (mostly runners and baseball players). The work was meant to clearly link changes detected by the AI tool in the brain scans of football players to head impacts. It builds on a previous study that had identified brain-structure differences in football players, comparing those with and without concussions to athletes who competed in noncontact sports.

    For the investigation, the researchers analyzed MRI scans from 81 male athletes taken between 2016 and 2018, none of whom had a known diagnosis of concussion within that time period. Contact-sport athletes played football, lacrosse, and soccer, while noncontact-sport athletes participated in baseball, basketball, track and field, and cross-country.
    As part of their analysis, the research team designed statistical techniques that gave their computer program the ability to “learn” how to predict exposure to repeated head impacts using mathematical models. These were based on data examples fed into them, with the program getting “smarter” as the amount of training data grew.
    The study team trained the program to identify unusual features in brain tissue and distinguish between athletes with and without repeated exposure to head injuries based on these factors. They also ranked how useful each feature was for detecting damage to help uncover which of the many MRI metrics might contribute most to diagnoses.
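    The pipeline described above — classify athletes from MRI-derived features, then rank each feature's usefulness — can be sketched with a toy univariate scoring step. All feature names, values, and the scoring rule below are illustrative stand-ins, not the study's actual data or method.

```python
import statistics

# Toy MRI-derived feature values for two athlete groups (illustrative numbers only).
contact = {"mean_diffusivity": [0.9, 1.0, 1.1], "mean_kurtosis": [0.7, 0.8, 0.75]}
noncontact = {"mean_diffusivity": [0.7, 0.72, 0.74], "mean_kurtosis": [0.78, 0.76, 0.77]}

def separation_score(a, b):
    """Absolute difference in group means, scaled by pooled spread --
    a crude stand-in for ranking how useful a feature is for classification."""
    pooled = statistics.stdev(a + b)
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled

# Rank features from most to least discriminative between the two groups.
ranking = sorted(
    contact,
    key=lambda f: separation_score(contact[f], noncontact[f]),
    reverse=True,
)
print(ranking)
```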
    Two metrics most accurately flagged structural changes that resulted from head injury, say the authors. The first, mean diffusivity, measures how easily water can move through brain tissue and is often used to spot strokes on MRI scans. The second, mean kurtosis, examines the complexity of brain-tissue structure and can indicate changes in the parts of the brain involved in learning, memory, and emotions.
    “Our results highlight the power of artificial intelligence to help us see things that we could not see before, particularly ‘invisible injuries’ that do not show up on conventional MRI scans,” said study lead author Junbo Chen, MS, a doctoral candidate at NYU Tandon School of Engineering. “This method may provide an important diagnostic tool not only for concussion, but also for detecting the damage that stems from subtler and more frequent head impacts.”
    Chen adds that the study team next plans to explore the use of their machine-learning technique for examining head injury in female athletes.
    Funding for the study was provided by National Institutes of Health grants P41EB017183 and C63000NYUPG118117. Further funding was provided by Department of Defense grant W81XWH2010699.
    In addition to Lui and Chen, other NYU researchers involved in the study were Sohae Chung, PhD; Tianhao Li, MS; Els Fieremans, PhD; Dmitry Novikov, PhD; and Yao Wang, PhD.


    Source-shifting metastructures composed of only one resin for location camouflaging

    The field of transformation optics has flourished over the past decade, allowing scientists to design metamaterial-based structures that shape and guide the flow of light. One of the most dazzling inventions potentially unlocked by transformation optics is the invisibility cloak — a theoretical fabric that bends incoming light away from the wearer, rendering them invisible. Interestingly, such illusions are not restricted to the manipulations of light alone.
    Many of the techniques used in transformation optics have been applied to sound waves, giving rise to the parallel field of transformation acoustics. In fact, researchers have already made substantial progress by developing the “acoustic cloak,” the analog of the invisibility cloak for sound. While research on acoustic illusions has focused on masking the presence of an object, little progress has been made on the problem of location camouflaging.
    The concept of an acoustic source-shifter relies on a structure that makes the location of a sound source appear different from its actual location. Devices capable of such “acoustic location camouflaging” could find applications in advanced holography and virtual reality. Unfortunately, location camouflaging has scarcely been studied, and developing accessible materials and surfaces that deliver decent performance has proven challenging.
    Against this backdrop, Professor Garuda Fujii, affiliated with the Institute of Engineering and Energy Landscape Architectonics Brain Bank (ELab2) at Shinshu University, Japan, has now made progress in developing high-performance source-shifters. In a recent study published in the Journal of Sound and Vibration online on May 5, 2023, Prof. Fujii presented an innovative approach to designing source-shifter structures out of acrylonitrile butadiene styrene (ABS), an elastic polymer commonly used in 3D printing.
    Prof. Fujii’s approach is centered around a core concept: inverse design based on topology optimization. The numerical approach builds on the reproduction of the pressure field (sound) emitted by a virtual source, i.e., the source that nearby listeners would mistakenly perceive as real. Next, the pressure fields emitted by the actual source are manipulated to camouflage its location and make the sound appear to come from a different point in space. This can be achieved with the optimum design of a metastructure that, by virtue of its geometry and elastic properties, minimizes the difference between the pressure fields emitted from the actual and virtual sources.
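    The figure of merit being minimised — the mismatch between the pressure field of the cloaked actual source and that of a bare source at the virtual location — can be written as a normalised error. This is an illustrative formulation with made-up sample values; the paper's exact objective functional may differ.

```python
import math

def mismatch(p_actual, p_virtual):
    """Normalised L2 difference between two sampled pressure fields --
    the kind of quantity a topology optimisation drives toward zero."""
    num = math.sqrt(sum((a - v) ** 2 for a, v in zip(p_actual, p_virtual)))
    den = math.sqrt(sum(v ** 2 for v in p_virtual))
    return num / den

# Illustrative sampled fields: the cloaked actual source reproduces the
# virtual source's field to within a fraction of a percent.
p_virtual = [3.0, 4.0]    # field of a bare source at the virtual location
p_actual = [3.0, 4.03]    # field of the actual source behind the metastructure
print(f"{mismatch(p_actual, p_virtual):.1%}")  # 0.6%, the level reported in the study
```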
    Utilizing this approach, Prof. Fujii implemented an iterative algorithm to numerically determine the optimal design of ABS resin source-shifters according to various design criteria. His models and simulations had to account for the acoustic-elastic interactions between fluids (air) and solid elastic structures, as well as the actual limitations of modern manufacturing technology.
    The simulation results revealed that the optimized structures could reduce the difference between the emitted pressure fields of the masked source and those of a bare source at the virtual location to as low as 0.6%. “The optimal structure configurations obtained via topology optimization exhibited good performances at camouflaging the actual source location despite the simple composition of ABS that did not comprise complex acoustic metamaterials”, remarks Prof. Fujii.
    To shed more light on the underlying camouflaging mechanisms, Prof. Fujii analyzed the importance of the distance between the virtual and actual sources. He found that a greater distance did not necessarily degrade the source-shifter’s performance. He also investigated the effect of changing the frequency of the emitted sound on the performance as the source-shifters had been optimized for only one target frequency. Finally, he explored whether a source-shifter could be topologically optimized to operate at multiple sound frequencies.
    While his approach requires further fine-tuning, the findings of this study will surely help advance illusion acoustics. He concludes, “The proposed optimization method for designing high-performance source-shifters will help in the development of acoustic location camouflage and the advancement of holography technology.”


    Robot centipedes go for a walk

    Researchers from the Department of Mechanical Science and Bioengineering at Osaka University have invented a new kind of walking robot that takes advantage of dynamic instability to navigate. By changing the flexibility of the couplings, the robot can be made to turn without the need for complex computational control systems. This work may assist the creation of rescue robots that are able to traverse uneven terrain.
    Most animals on Earth have evolved a robust locomotion system using legs that provides them with a high degree of mobility over a wide range of environments. Somewhat disappointingly, engineers who have attempted to replicate this approach have often found that legged robots are surprisingly fragile. The breakdown of even one leg due to repeated stress can severely limit the ability of these robots to function. In addition, controlling a large number of joints so the robot can traverse complex environments requires a lot of computing power. Improvements in this design would be extremely useful for building autonomous or semi-autonomous robots that could act as exploration or rescue vehicles and enter dangerous areas.
    Now, investigators from Osaka University have developed a biomimetic “myriapod” robot that takes advantage of a natural instability that can convert straight walking into curved motion. In a study published recently in Soft Robotics, the researchers describe their robot, which consists of six segments (with two legs connected to each segment) and flexible joints. Using an adjustable screw, the flexibility of the couplings can be modified with motors during the walking motion.
    The researchers showed that increasing the flexibility of the joints led to a situation called a “pitchfork bifurcation,” in which straight walking becomes unstable. Instead, the robot transitions to walking in a curved pattern, either to the right or to the left. Normally, engineers would try to avoid creating instabilities. However, making controlled use of them can enable efficient maneuverability.
    “We were inspired by the ability of certain extremely agile insects to control the dynamic instability in their own motion to induce quick movement changes,” says Shinya Aoi, an author of the study. Because this approach does not directly steer the movement of the body axis, but rather controls the flexibility of the couplings, it can greatly reduce both the computational complexity and the energy requirements.
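    The pitchfork bifurcation at the heart of this steering mechanism can be illustrated with its textbook normal form. This is the generic mathematics of such a bifurcation, not the robot's actual equations of motion; the mapping of the state to path curvature is our illustrative reading.

```python
def steer(mu, x0=1e-3, dt=0.01, steps=20000):
    """Integrate the pitchfork normal form dx/dt = mu*x - x**3 (Euler method).
    Read x as the curvature of the walking path: 0 means straight,
    nonzero means a sustained left or right turn."""
    x = x0
    for _ in range(steps):
        x += dt * (mu * x - x**3)
    return x

# Stiff couplings (mu < 0): straight walking is stable, x decays to 0.
print(round(steer(-1.0), 4))
# Flexible couplings (mu > 0): straight walking destabilises and x settles
# on a turning branch near +sqrt(mu) (or -sqrt(mu) if x0 < 0).
print(round(steer(1.0), 4))
```

Tuning a single parameter (here `mu`, playing the role of joint flexibility) between these two regimes switches the gait between straight and curved walking without any direct steering command, which is the computational saving the researchers describe.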
    The team tested the robot’s ability to reach specific locations and found that it could navigate by taking curved paths toward targets. “We can foresee applications in a wide variety of scenarios, such as search and rescue, working in hazardous environments or exploration on other planets,” says Mau Adachi, another study author. Future versions may include additional segments and control mechanisms.


    Super low-cost smartphone attachment brings blood pressure monitoring to your fingertips

    Engineers at the University of California San Diego have developed a simple, low-cost clip that uses a smartphone’s camera and flash to monitor blood pressure at the user’s fingertip. The clip works with a custom smartphone app and currently costs about 80 cents to make. The researchers estimate that the cost could be as low as 10 cents apiece when manufactured at scale.
    The technology was published May 29 in Scientific Reports.
    Researchers say it could help make regular blood pressure monitoring easy, affordable and accessible to people in resource-poor communities. It could benefit older adults and pregnant women, for example, in managing conditions such as hypertension.
    “We’ve created an inexpensive solution to lower the barrier to blood pressure monitoring,” said study first author Yinan (Tom) Xuan, an electrical and computer engineering Ph.D. student at UC San Diego.
    “Because of their low cost, these clips could be handed out to anyone who needs them but cannot go to a clinic regularly,” said study senior author Edward Wang, a professor of electrical and computer engineering at UC San Diego and director of the Digital Health Lab. “A blood pressure monitoring clip could be given to you at your checkup, much like how you get a pack of floss and toothbrush at your dental visit.”
    Another key advantage of the clip is that it does not need to be calibrated to a cuff.

    “This is what distinguishes our device from other blood pressure monitors,” said Wang. Other cuffless systems being developed for smartwatches and smartphones, he explained, require obtaining a separate set of measurements with a cuff so that their models can be tuned to fit these measurements.
    “Ours is a calibration-free system, meaning you can just use our device without touching another blood pressure monitor to get a trustworthy blood pressure reading.”
    To measure blood pressure, the user simply presses on the clip with a fingertip. A custom smartphone app guides the user on how hard and long to press during the measurement.
    The clip is a 3D-printed plastic attachment that fits over a smartphone’s camera and flash. It features an optical design similar to that of a pinhole camera. When the user presses on the clip, the smartphone’s flash lights up the fingertip. That light is then projected through a pinhole-sized channel to the camera as an image of a red circle. A spring inside the clip allows the user to press with different levels of force. The harder the user presses, the bigger the red circle appears on the camera.
    The smartphone app extracts two main pieces of information from the red circle. By looking at the size of the circle, the app can measure the amount of pressure that the user’s fingertip applies. And by looking at the brightness of the circle, the app can measure the volume of blood going in and out of the fingertip. An algorithm converts this information into systolic and diastolic blood pressure readings.
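    The two measurements the app takes from each camera frame can be sketched as simple image-processing steps. This is a toy illustration only: the frame values, threshold, and proxies below are invented, and the actual app's processing and its conversion to blood pressure readings are not described in detail in the article.

```python
# Toy 'camera frame' of brightness values (0-255); the bright region is
# the red circle projected through the pinhole channel (illustrative only).
frame = [
    [0,   0,   0,   0,   0],
    [0, 120, 200, 130,   0],
    [0, 180, 255, 190,   0],
    [0, 110, 170, 125,   0],
    [0,   0,   0,   0,   0],
]

THRESHOLD = 100  # pixels brighter than this count as 'inside the circle'

circle = [px for row in frame for px in row if px > THRESHOLD]

# Circle size tracks fingertip pressure (harder press -> bigger circle);
# mean brightness tracks blood volume moving in and out of the fingertip.
pressure_proxy = len(circle)
blood_volume_proxy = sum(circle) / len(circle)

print(pressure_proxy, round(blood_volume_proxy, 1))
```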

    The researchers tested the clip on 24 volunteers from the UC San Diego Medical Center. Results were comparable to those taken by a blood pressure cuff.
    “Using a standard blood pressure cuff can be awkward to put on correctly, and this solution has the potential to make it easier for older adults to self-monitor blood pressure,” said study co-author and medical collaborator Alison Moore, chief of the Division of Geriatrics in the Department of Medicine at UC San Diego School of Medicine.
    While the team has only proven the solution on a single smartphone model, the clip’s current design theoretically should work on other phone models, said Xuan.
    Wang and one of his lab members, Colin Barry, a co-author on the paper who is an electrical and computer engineering student at UC San Diego, co-founded a company, Billion Labs Inc., to refine and commercialize the technology.
    Next steps include making the technology more user friendly, especially for older adults; testing its accuracy across different skin tones; and creating a more universal design.
    Paper: “Ultra-low-cost Mechanical Smartphone Attachment for No-Calibration Blood Pressure Measurement.” Co-authors include Jessica De Souza, Jessica Wen and Nick Antipa, all at UC San Diego.
    This work is supported by the National Institute on Aging Massachusetts AI and Technology Center for Connected Care in Aging and Alzheimer’s Disease (MassAITC P30AG073107 Subaward 23-016677 N 00), the Altman Clinical and Translational Research Institute Galvanizing Engineering in Medicine (GEM) Awards, and a Google Research Scholar Award.
    Disclosures: Edward Wang and Colin Barry are co-founders of and have a financial interest in Billion Labs Inc. Wang is also the CEO of Billion Labs Inc. The other authors declare that they have no competing interests. The terms of this arrangement have been reviewed and approved by the University of California San Diego in accordance with its conflict-of-interest policies.


    Emergence of solvated dielectrons observed for the first time

    Solvated dielectrons are the subject of many hypotheses among scientists but have never been directly observed. They are described as a pair of electrons dissolved in liquids such as water or liquid ammonia. To make space for the electrons, a cavity forms in the liquid, which the two electrons occupy. An international research team led by Dr. Sebastian Hartweg, initially at Synchrotron SOLEIL (France) and now at the Institute of Physics at the University of Freiburg, and Prof. Dr. Ruth Signorell of ETH Zurich, together with scientists from Synchrotron SOLEIL and Auburn University (US), has now succeeded in observing a formation and decay process of the solvated dielectron. In experiments at Synchrotron SOLEIL (DESIRS beamline), the consortium found direct evidence, supported by quantum chemical calculations, for the formation of these electron pairs by excitation with ultraviolet light in tiny ammonia droplets containing a single sodium atom. The results were recently published in the scientific journal Science.
    Traces of an unusual process
    When dielectrons are formed by excitation with ultraviolet light in tiny ammonia droplets containing a sodium atom, they leave traces in an unusual process that scientists have now been able to observe for the first time. In this process, one of the two electrons migrates to the neighbouring solvent molecules, while at the same time the other electron is ejected. “The surprising thing about this is that similar processes have previously been observed mainly at much higher excitation energies,” says Hartweg. The team focused on this second electron because there could be interesting applications for it. On the one hand, the ejected electron is produced with very low kinetic energy, so it moves very slowly. On the other hand, this energy can be controlled by the irradiated UV light, which starts the whole process. Solvated dielectrons could thus serve as a good source of low-energy electrons.
    Generated specifically with variable energy
    Such slow electrons can set a wide variety of chemical processes in motion. For example, they play a role in the cascade of processes that lead to radiation damage in biological tissue. They are also important in synthetic chemistry, where they serve as effective reducing agents. By being able to selectively generate slow electrons with variable energy, the mechanisms of such chemical processes can be studied in more detail in the future. In addition, the energy made available to the electrons in a controlled manner might also be used to increase the effectiveness of reduction reactions. “These are interesting prospects for possible applications in the future,” says Hartweg. “Our work provides the basis for this and helps to understand these exotic and still enigmatic solvated dielectrons a little better.”