More stories

    Computer game in school made students better at detecting fake news

    A computer game helped upper secondary school students become better at distinguishing between reliable and misleading news, according to a study conducted by researchers at Uppsala University and elsewhere.
    “This is an important step towards equipping young people with the tools they need to navigate in a world full of disinformation. We all need to become better at identifying manipulative strategies — prebunking, as it is known — since it is virtually impossible to discern deep fakes, for example, and other AI-generated disinformation with the naked eye,” says Thomas Nygren, Professor of Education at Uppsala University.
    Along with three other researchers, he conducted a study involving 516 Swedish upper secondary school students in different programmes at four schools. The study, published in the Journal of Research on Technology in Education, investigated the effect of the game Bad News in a classroom setting; this is the first time the game has been scientifically tested in a normal classroom. The game was created for research and teaching, and participants assume the role of a spreader of misleading news. The students in the study played the game individually, in pairs, or in whole-class groups with a shared scorecard; all three methods had positive effects. This surprised the researchers, who had expected students to learn more when working together at the computer.
    “The students improved their ability to identify manipulative techniques in social media posts and to distinguish between reliable and misleading news,” Nygren comments.
    The study also showed that students who already had a positive attitude towards trustworthy news sources were better at distinguishing disinformation, and this attitude became significantly more positive after playing the game. Moreover, many students improved their assessments of credibility and were able to explain how they could identify manipulative techniques in a more sophisticated way.
    The researchers noted that competitive elements in the game made for greater interest and enhanced its benefit. They therefore conclude that the study contributes insights for teachers into how serious games can be used in formal instruction to promote media and information literacy.
    “Some people believe that gamification can enhance learning in school. However, our results show that more gamification in the form of competitive elements does not necessarily mean that students learn more — though it can be perceived as more fun and interesting,” Nygren says.
    Participating researchers: Carl-Anton Werner Axelsson (Mälardalen and Uppsala), Thomas Nygren (Uppsala), Jon Roozenbeek (Cambridge) and Sander van der Linden (Cambridge).

    Holographic displays offer a glimpse into an immersive future

    Setting the stage for a new era of immersive displays, researchers are one step closer to mixing the real and virtual worlds in an ordinary pair of eyeglasses using high-definition 3D holographic images, according to a study led by Princeton University researchers.
    Holographic images have real depth because they are three dimensional, whereas monitors merely simulate depth on a 2D screen. Because we see in three dimensions, holographic images could be integrated seamlessly into our normal view of the everyday world.
    The result is a virtual and augmented reality display that has the potential to be truly immersive, the kind where you can move your head normally and never lose the holographic images from view. “To get a similar experience using a monitor, you would need to sit right in front of a cinema screen,” said Felix Heide, assistant professor of computer science and senior author on a paper published April 22 in Nature Communications.
    And you wouldn’t need to wear a screen in front of your eyes to get this immersive experience. Optical elements required to create these images are tiny and could potentially fit on a regular pair of glasses. Virtual reality displays that use a monitor, as current displays do, require a full headset. And they tend to be bulky because they need to accommodate a screen and the hardware necessary to operate it.
    “Holography could make virtual and augmented reality displays easily usable, wearable and ultrathin,” said Heide. They could transform how we interact with our environments, everything from getting directions while driving, to monitoring a patient during surgery, to accessing plumbing instructions while doing a home repair.
    One of the most important challenges is quality. Holographic images are created by a small chip-like device called a spatial light modulator. Until now, these modulators could only create images that are either small and clear or large and fuzzy. This tradeoff between image size and clarity results in a narrow field of view, too narrow to give the user an immersive experience. “If you look towards the corners of the display, the whole image may disappear,” said Nathan Matsuda, research scientist at Meta and co-author on the paper.
    Heide, Matsuda and Ethan Tseng, doctoral student in computer science, have created a device to improve image quality and potentially solve this problem. Along with their collaborators, they built a second optical element to work in tandem with the spatial light modulator. Their device filters the light from the spatial light modulator to expand the field of view while preserving the stability and fidelity of the image. It creates a larger image with only a minimal drop in quality.
    Image quality has been a core challenge preventing the practical applications of holographic displays, said Matsuda. “The research brings us one step closer to resolving this challenge,” he said.
    The new optical element is like a very small custom-built piece of frosted glass, said Heide. The pattern etched into the frosted glass is the key. Designed using AI and optical techniques, the etched surface scatters light created by the spatial light modulator in a very precise way, pushing some elements of an image into frequency bands that are not easily perceived by the human eye. This improves the quality of the holographic image and expands the field of view.
    Still, hurdles to making a working holographic display remain. The image quality isn’t yet perfect, said Heide, and the fabrication process for the optical elements needs to be improved. “A lot of technology has to come together to make this feasible,” said Heide. “But this research shows a path forward.”

    AI tool recognizes serious ocular disease in horses

    Researchers at the LMU Equine Clinic have developed a deep learning tool that is capable of reliably diagnosing moon blindness in horses based on photos.
    Colloquially known as moon blindness, equine recurrent uveitis (ERU) is an inflammatory ocular disease in horses that can lead to blindness or loss of the affected eye. It is one of the most common eye diseases in horses and has a major economic impact. Correct and swift diagnosis is very important to minimize lasting damage. A team led by Professor Anna May from the LMU Equine Clinic has developed and trained a deep learning tool that reliably recognizes the disease and can support veterinary doctors in making diagnoses, as the researchers report in a recent study.
    In an online survey, the researchers asked some 150 veterinarians to evaluate 40 photos. The pictures showed a mixture of healthy eyes, eyes with ERU, and eyes with other diseases. The deep learning tool was then given the task of evaluating the same photos on the basis of image analysis. Subsequently, May compared the results of the veterinarians against those of the AI. She discovered that veterinary doctors specialized in horses interpreted the pictures correctly 76 percent of the time, while the remaining vets from small animal or mixed practices were right 67 percent of the time. “With the deep learning tool, the probability of getting a correct answer was 93 percent,” says May. “Although the differences were not statistically significant, they nonetheless show that the AI reliably recognizes ERU and has great potential as a tool for supporting veterinary doctors.”
    The tool is web-app-based and simple to use. All you need is a smartphone. “It’s not meant to replace veterinarians, but can help them reach the correct diagnosis. It is particularly valuable for less experienced professionals or for horse owners in regions where vets are few and far between,” emphasizes May. Through the early detection of ERU, affected horses can receive appropriate treatment more quickly, which can be decisive in slowing down the progress of the disease and saving the afflicted eyes.

    Researchers show it’s possible to teach old magnetic cilia new tricks

    Magnetic cilia — artificial hairs whose movement is powered by embedded magnetic particles — have been around for a while, and are of interest for applications in soft robotics, transporting objects and mixing liquids. However, existing magnetic cilia move in a fixed way. Researchers have now demonstrated a technique for creating magnetic cilia that can be “reprogrammed,” changing their magnetic properties at room temperature to change the motion of the cilia as needed.
    Most magnetic cilia make use of ‘soft’ magnets, which do not generate a magnetic field but become magnetic in the presence of a magnetic field. Only a few previous magnetic cilia have made use of ‘hard’ magnets, which are capable of producing their own magnetic field. One of the advantages of using hard magnets is that they can be programmed, meaning that you can give the magnetic field generated by the material a specific polarization. Controlling the magnetic polarization — or magnetization — allows you to essentially dictate precisely how the cilia will flex when an external magnetic field is applied.
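    The link between a programmed magnetization and the resulting motion comes down to basic magnetostatics: a magnetic moment m in an external field B experiences a torque tau = m x B. A minimal sketch (all numeric values here are hypothetical, not from the study):

    ```python
    import numpy as np

    # Illustrative sketch (not from the paper): the torque on a magnetic
    # moment m in an external field B is tau = m x B, which is what lets a
    # programmed magnetization direction dictate how a cilium flexes.
    def magnetic_torque(m, B):
        """Torque (N*m) on a moment m (A*m^2) in a field B (T)."""
        return np.cross(m, B)

    # A moment magnetized along the cilium's long axis (z) in a field
    # applied along x experiences a torque about y, bending the cilium.
    m = np.array([0.0, 0.0, 1e-6])   # hypothetical moment, A*m^2
    B = np.array([0.05, 0.0, 0.0])   # hypothetical 50 mT field along x
    print(magnetic_torque(m, B))     # torque about the y-axis only
    ```

    Flipping the sign of m reverses the torque, which is why reprogramming the magnetization direction changes how the cilia flex.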
    “What’s novel about this work is that we have demonstrated a technique that allows us to not only program magnetic cilia, but also controllably reprogram them,” says Joe Tracy, corresponding author of a paper on the work and professor of materials science and engineering at North Carolina State University. “We can change the direction of the material’s magnetization at room temperature, which in turn allows us to completely change how the cilia flex. It’s like getting a swimmer to change their stroke.”
    For this work, the researchers created magnetic cilia consisting of a polymer embedded with magnetic microparticles. Specifically, the microparticles are neodymium magnets — powerful magnets made of neodymium, iron and boron.
    To make the cilia, the researchers introduce the magnetic microparticles into a polymer dissolved in a liquid. This slurry is then exposed to an electromagnetic field that is sufficiently powerful to give all of the microparticles the same magnetization. By then applying a less powerful magnetic field as the liquid polymer dries, the researchers are able to control the behavior of the microparticles, resulting in the formation of cilia that are regularly spaced across the substrate.
    “This regularly ordered cilia carpet is initially programmed to behave in a uniform way when exposed to an external magnetic field,” Tracy says. “But what’s really interesting here is that we can reprogram that behavior, so that the cilia can be repurposed to have a completely different actuation.”
    To do that, the researchers first embed the cilia in ice, which fixes all of the cilia in the desired direction. The researchers then expose the cilia to a damped, alternating magnetic field which has the effect of disordering the magnetization of the microparticles. In other words, they substantially erase the preprogrammed magnetization that was shared by all of the microparticles when the cilia were fabricated.
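    The damped, alternating field used for this kind of erasure can be pictured as an exponentially decaying sinusoid, so each field reversal is weaker than the last. A rough sketch (amplitude, frequency and decay rate are illustrative assumptions, not values from the paper):

    ```python
    import math

    # Illustrative sketch (assumed parameters, not from the paper): a damped,
    # alternating field of the kind used to erase magnetization. Each reversal
    # is weaker than the last, so the microparticle magnetizations end up
    # disordered rather than aligned.
    def damped_alternating_field(t, B0=0.1, freq=50.0, decay=10.0):
        """Field (T) at time t (s): an exponentially decaying sinusoid."""
        return B0 * math.exp(-decay * t) * math.sin(2 * math.pi * freq * t)

    # The envelope shrinks toward zero over time, erasing the net magnetization.
    early = max(abs(damped_alternating_field(t / 1000.0)) for t in range(0, 100, 5))
    late = max(abs(damped_alternating_field(t / 1000.0)) for t in range(400, 500, 5))
    print(f"peak field early: {early:.4f} T, peak field late: {late:.4f} T")
    ```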

    “The reprogramming step is fairly straightforward,” Tracy says. “We apply an oscillating field to reset the magnetization, then apply a strong magnetic field to the cilia which allows us to magnetize the microparticles in a new direction.”
    “By mostly erasing the initial magnetization, we’re better able to reprogram the magnetization of the microparticles,” says Matt Clary, first author of the paper and a Ph.D. student at NC State. “We show in this work that if you leave out that erasing step, you have less control over the orientation of the microparticles’ magnetization when reprogramming.”
    “We also found that when the magnetization of the microparticles is perpendicular to the long axis of the cilia, we can cause the cilia to ‘snap’ in a rotating field, meaning they abruptly change their orientation,” says Tracy.
    In addition, the research team developed a computational model that allows users to predict the bending behavior of magnetic cilia based on hard magnets, depending on the orientation of the cilia’s polarization.
    “This model could be used in the future to guide the design of hard-magnetic cilia and related soft actuators,” says Ben Evans, coauthor of the paper and professor of physics at Elon University.
    “Ultimately, we think this work is valuable to the field because it allows repurposing of magnetic cilia for new functions or applications, especially in remote environments,” Tracy says. “Methods developed in this work may also be applied to the broader field of magnetic soft actuators.”
    The paper, “Magnetic Reprogramming of Self-Assembled Hard-Magnetic Cilia,” is published open access in the journal Advanced Materials Technologies. The paper was co-authored by Saarah Cantu, a former graduate student at NC State; and Jessica Liu, a former Ph.D. student at NC State.
    This work was done with support from the National Science Foundation, under grants 1663416 and 1662641; and from the Higher Education Emergency Relief Fund.

    Opening up the potential of thin-film electronics for flexible chip design

    The mass production of conventional silicon chips relies on a successful business model with large ‘semiconductor fabrication plants’ or ‘foundries’. New research by KU Leuven and imec shows that this ‘foundry’ model can also be applied to the field of flexible, thin-film electronics. Adopting this approach would give innovation in the field a huge boost.
    Silicon semiconductors have become the ‘oil’ of the computer age, which was also demonstrated recently by the chip shortage crisis. However, one of the disadvantages of conventional silicon chips is that they’re not mechanically flexible. On the other hand you have the field of flexible electronics, which is driven by an alternative semiconductor technology: the thin-film transistor, or TFT. The applications in which TFTs can be used are legion: from wearable healthcare patches and neuroprobes over digital microfluidics and robotic interfaces to bendable displays and Internet of Things (IoT) electronics.
    TFT technology has evolved considerably, but unlike conventional semiconductor technology, its potential for use in various applications has barely been exploited. In fact, TFTs are currently mass-produced mainly for integration in the displays of smartphones, laptops and smart TVs — where they are used to control pixels individually. This limits the freedom of chip designers who dream of using TFTs in flexible microchips and coming up with innovative, TFT-based applications. “This field can benefit hugely from a foundry business model similar to that of the conventional chip industry,” says Kris Myny, professor at KU Leuven’s Emerging Technologies, Systems and Security unit in Diepenbeek and a guest professor at imec.
    Foundry business model
    At the heart of the worldwide microchip market is the so-called foundry model. In this business model, large ‘semiconductor fabrication plants’ or ‘foundries’ (like TSMC from Taiwan) focus on the mass production of chips on silicon wafers. These are then used by the foundries’ clients — the companies that design and order the chips — to integrate them in specific applications. Thanks to this business model, the latter companies have access to complex semiconductor manufacturing to design the chips they need.
    Myny’s group has now shown that such a business model is also viable in the field of thin-film electronics. They designed a specific TFT-based microprocessor and had it produced in two foundries, after which they tested it successfully in their lab. The same chip was produced in two versions, based on two separate, mainstream TFT technologies using different substrates. Their research paper is published in Nature.
    Multi-project approach
    The microprocessor Myny and his colleagues built is the iconic MOS 6502. Today this chip is a ‘museum piece’, but in the 70s it was the driver of the first Apple, Commodore and Nintendo computers. The group developed the 6502 chip on a wafer (using amorphous indium-gallium-zinc-oxide) and on a plate (using low-temperature polycrystalline silicon). In both cases the chips were manufactured on the substrate together with other chips, or ‘projects’. This ‘multi-project’ approach enables foundries to produce different chips on-demand from designers on single substrates.
    The chip Myny’s group made is less than 30 micrometers thick, thinner than a human hair. That makes it ideal for medical applications such as wearable patches. Such ultra-thin wearables can be used to make electrocardiograms or electromyograms, to study the condition of the heart and muscles respectively. They would feel just like a sticker, whereas patches with a silicon-based chip always feel knobbly.
    Although the performance of the 6502 microprocessor is not comparable with that of modern ones, this research demonstrates that flexible chips can also be designed and produced in a multi-project approach, analogous to the way this happens in the conventional chip industry. Myny concludes: “We will not compete with silicon-based chips; we want to stimulate and accelerate innovation based on flexible, thin-film electronics.”

    A simple ‘twist’ improves the engine of clean fuel generation

    Researchers have found a way to super-charge the ‘engine’ of sustainable fuel generation — by giving the materials a little twist.
    The researchers, led by the University of Cambridge, are developing low-cost light-harvesting semiconductors that power devices for converting water into clean hydrogen fuel, using just the power of the sun. These semiconducting materials, known as copper oxides, are cheap, abundant and non-toxic, but their performance does not come close to silicon, which dominates the semiconductor market.
    However, the researchers found that by growing the copper oxide crystals in a specific orientation so that electric charges move through the crystals at a diagonal, the charges move much faster and further, greatly improving performance. Tests of a copper oxide light harvester, or photocathode, based on this fabrication technique showed a 70% improvement over existing state-of-the-art oxide photocathodes, while also showing greatly improved stability.
    The researchers say their results, reported in the journal Nature, show how low-cost materials could be fine-tuned to power the transition away from fossil fuels and toward clean, sustainable fuels that can be stored and used with existing energy infrastructure.
    Copper (I) oxide, or cuprous oxide, has been touted as a cheap potential replacement for silicon for years, since it is reasonably effective at capturing sunlight and converting it into electric charge. However, much of that charge tends to get lost, limiting the material’s performance.
    “Like other oxide semiconductors, cuprous oxide has its intrinsic challenges,” said co-first author Dr Linfeng Pan from Cambridge’s Department of Chemical Engineering and Biotechnology. “One of those challenges is the mismatch between how deep light is absorbed and how far the charges travel within the material, so most of the oxide below the top layer of material is essentially dead space.”
    “For most solar cell materials, it’s defects on the surface of the material that cause a reduction in performance, but with these oxide materials, it’s the other way round: the surface is largely fine, but something about the bulk leads to losses,” said Professor Sam Stranks, who led the research. “This means the way the crystals are grown is vital to their performance.”
    To develop cuprous oxides to the point where they can be a credible contender to established photovoltaic materials, they need to be optimised so they can efficiently generate and move electric charges — made of an electron and a positively-charged electron ‘hole’ — when sunlight hits them.

    One potential optimisation approach is single-crystal thin films — very thin slices of material with a highly-ordered crystal structure, which are often used in electronics. However, making these films is normally a complex and time-consuming process.
    Using thin film deposition techniques, the researchers were able to grow high-quality cuprous oxide films at ambient pressure and room temperature. By precisely controlling growth and flow rates in the chamber, they were able to ‘shift’ the crystals into a particular orientation. Then, using high temporal resolution spectroscopic techniques, they were able to observe how the orientation of the crystals affected how efficiently electric charges moved through the material.
    “These crystals are basically cubes, and we found that when the electrons move through the cube at a body diagonal, rather than along the face or edge of the cube, they move an order of magnitude further,” said Pan. “The further the electrons move, the better the performance.”
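    For a cubic crystal, the three directions Pan mentions correspond to the [100], [110] and [111] lattice directions. A quick geometric sketch of what those directions look like (the transport result itself is, of course, the paper’s measurement):

    ```python
    import math

    # Geometry sketch of the three directions named in the text for a cubic
    # crystal: along an edge, across a face, and along the body diagonal.
    directions = {
        "edge":          (1, 0, 0),   # [100]
        "face diagonal": (1, 1, 0),   # [110]
        "body diagonal": (1, 1, 1),   # [111]
    }

    for name, (h, k, l) in directions.items():
        length = math.sqrt(h * h + k * k + l * l)
        unit = tuple(round(c / length, 3) for c in (h, k, l))
        print(f"{name}: [{h}{k}{l}], unit vector {unit}")
    ```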
    “Something about that diagonal direction in these materials is magic,” said Stranks. “We need to carry out further work to fully understand why and optimise it further, but it has so far resulted in a huge jump in performance.” Tests of a cuprous oxide photocathode made using this technique showed an increase in performance of more than 70% over existing state-of-the-art electrodeposited oxide photocathodes.
    “In addition to the improved performance, we found that the orientation makes the films much more stable, but factors beyond the bulk properties may be at play,” said Pan.
    The researchers say that much more research and development is still needed, but this and related families of materials could have a vital role in the energy transition.
    “There’s still a long way to go, but we’re on an exciting trajectory,” said Stranks. “There’s a lot of interesting science to come from these materials, and it’s interesting for me to connect the physics of these materials with their growth, how they form, and ultimately how they perform.”
    The research was a collaboration with École Polytechnique Fédérale de Lausanne, Nankai University and Uppsala University. The research was supported in part by the European Research Council, the Swiss National Science Foundation, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Sam Stranks is Professor of Optoelectronics in the Department of Chemical Engineering and Biotechnology, and a Fellow of Clare College, Cambridge.

    Child pedestrians, self-driving vehicles: What’s the safest scenario for crossing the road?

    Crossing a busy street safely typically is a result of a social exchange. Pedestrians look for cues — a wave, a head nod, a winking flash of the headlights, and, of course, a full vehicle stop — to know it’s safe to cross.
    But those clues could be absent or different with self-driving vehicles. How will children and adults know when it’s safe to cross the road?
    In a new study, University of Iowa researchers investigated how pre-teenage children determined when it was safe to cross a residential street with oncoming self-driving cars. The researchers found children made the safest choices when the self-driving car arrived at the intersection, stopped, and only then indicated via a green light on top of the vehicle that it was safe to cross. When self-driving cars turned on the green light farther away from the crossing point — even when they slowed down — children engaged in riskier crossings, the researchers learned.
    “Children exhibited much safer behavior when the light turned green later,” says Jodie Plumert, professor in the Department of Psychological and Brain Sciences and the study’s senior author. “They seemed to treat it like a walk light and waited for that light to come on before starting to cross. Our recommendation, then, for autonomous vehicle design is that their signals should turn on when the car comes to a stop, but not before.”
    The difference in the timing of the green light signal from the self-driving car is important: Children are inclined to use the light as the vehicle’s clearance to go ahead and cross, trusting that it will stop as it gets closer to the intersection. But as Plumert and co-author Elizabeth O’Neal point out, that could invite peril.
    “This could be dangerous if the car for some reason does not stop, though pedestrians will have the benefit of getting across the road sooner,” says Plumert, who is the Russell B. and Florence D. Day Chair in Liberal Arts and Sciences.
    “So, even though it may be tempting to make the traffic flow more efficient by having these signals come on early, it’s probably pretty dangerous for kids in particular,” adds O’Neal, assistant professor in the Department of Community and Behavioral Health and the study’s corresponding author.

    Some may see self-driving vehicles as a futuristic technology, but they are operating right now in American cities. The Insurance Institute for Highway Safety projects there will be 3.5 million vehicles with self-driving functionality on U.S. roads by next year, and 4.5 million by 2030. This year, an autonomous-vehicle taxi service, called Waymo One, will operate in four cities, including new routes in Los Angeles and Austin, Texas.
    This comes as pedestrian deaths from motor vehicles remain a serious concern. According to the Governors Highway Safety Association, more than 7,500 pedestrians were killed by drivers in 2022, a 40-year high.
    “The fact is drivers don’t always come to a complete stop, even with stop signs,” notes Plumert, who has studied vehicle-pedestrian interactions since 2012. “People are running stop signs all the time. Sometimes drivers don’t see people. Sometimes they’re just spacing out.”
    The researchers aimed to understand how children respond to two different cues from self-driving cars when deciding when to cross a road: gradual versus a sudden (later) slowing; and the distance from the crossing point when a green light signal atop the vehicle was activated. The researchers placed nearly 100 children ages 8 to 12 in a realistic simulated environment and asked them to cross one lane of a road with oncoming driverless vehicles. The crossings took place in an immersive, 3D interactive space at the Hank Virtual Environments Lab on the UI campus.
    Researchers observed and recorded the children’s crossing actions and spoke with them after the sessions to learn more about how they responded to the green light signaling and the timing of the vehicle slowing.
    One major difference in crossing behavior: when the car’s green light turned on farther away from the crossing point, child participants entered the intersection on average 1.5 seconds sooner than kids in the scenario where the light came on only after the vehicle had stopped at the crossing point.

    “That time difference is actually quite significant,” Plumert notes. “A green light signal that flashes early is potentially dangerous because kids and even adults will use it as a cue to begin crossing, trusting that the car is going to come to a stop.”
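    To put that 1.5-second head start in perspective, a quick back-of-the-envelope calculation (speeds are assumed residential limits, not figures from the study) shows how far an approaching car travels in that time:

    ```python
    # Back-of-the-envelope sketch (assumed speeds, not from the study): how far
    # a car travels during the 1.5 s head start that the early green-light
    # signal gave child pedestrians.
    def distance_covered(speed_kmh, seconds=1.5):
        """Metres travelled in `seconds` at a constant speed in km/h."""
        return speed_kmh / 3.6 * seconds

    for speed in (30, 40, 50):  # typical residential speed limits, km/h
        print(f"{speed} km/h -> {distance_covered(speed):.1f} m in 1.5 s")
    ```

    Even at low residential speeds, a car covers well over ten metres in 1.5 seconds, which is why crossing early in front of a vehicle that has not yet stopped is risky.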
    The results build on findings published in 2017 by Plumert and O’Neal that children up to their early teenage years had difficulty consistently crossing a street safely in a virtual environment, with accident rates as high as 8% with 6-year-olds.
    That danger underscores the need for clear, easy-to-understand signaling to children from self-driving vehicles, the researchers say. Researchers are testing various communicative signals, including flashing lights, projecting eyes on the windshield, splashing racing stripes on the edge of the windshield, and written words (like walk/don’t walk).
    “All have some utility, but children are a special case,” says O’Neal, who earned a doctorate in psychology at Iowa in 2018 and had been working as a postdoctoral researcher in Plumert’s lab before joining the faculty in the College of Public Health. “They may not always be able to incorporate a flashing light or a racing light to indicate that it’s slowing or that it’s going to yield to you.”
    Children naturally understood signaling using a green light and a red light, the researchers found. But timing is critical, they learned.
    “We think vehicle manufacturers should not consider the idea of turning the light on early or having the signal present early,” Plumert says, “because people will definitely use that, and they’ll get out there in front of the approaching vehicle. People hate to wait.”
    The study is titled “Deciding when to cross in front of an autonomous vehicle: How child and adult pedestrians respond to eHMI timing and vehicle kinematics.” It was published online on April 24 in the journal Accident Analysis and Prevention.
    Lakshmi Subramanian, who earned a doctorate from Iowa and now is at Kean University in New Jersey, shares first authorship on the study. Joseph Kearney, professor emeritus in the Department of Computer Science, is a senior author. Contributing authors include Nam-Yoon Kim and Megan Noonan in the Department of Psychological and Brain Sciences.
    The U.S. National Science Foundation and the U.S. Department of Transportation funded the research.

    Condensed matter physics: Novel one-dimensional superconductor

    In a significant development in the field of superconductivity, researchers at The University of Manchester have successfully achieved robust superconductivity in high magnetic fields using a newly created one-dimensional (1D) system. This breakthrough offers a promising pathway to achieving superconductivity in the quantum Hall regime, a longstanding challenge in condensed matter physics.
    Superconductivity, the ability of certain materials to conduct electricity with zero resistance, holds profound potential for the advancement of quantum technologies. However, achieving superconductivity in the quantum Hall regime, characterised by quantised electrical conductance, has proven to be a mighty challenge.
    The research, published this week (25 April 2024) in Nature, details the extensive work of the Manchester team led by Professor Andre Geim, Dr Julien Barrier and Dr Na Xin to achieve superconductivity in the quantum Hall regime. Their initial efforts followed the conventional route, in which counterpropagating edge states are brought into close proximity to each other. However, this approach proved to be limited.
    “Our initial experiments were primarily motivated by the strong persistent interest in proximity superconductivity induced along quantum Hall edge states,” explains Dr Barrier, the paper’s lead author. “This possibility has led to numerous theoretical predictions regarding the emergence of new particles known as non-abelian anyons.”
    The team then explored a new strategy inspired by their earlier work demonstrating that boundaries between domains in graphene could be highly conductive. By placing such domain walls between two superconductors, they achieved the desired ultimate proximity between counterpropagating edge states while minimising effects of disorder.
    “We were encouraged to observe large supercurrents at relatively ‘balmy’ temperatures up to one Kelvin in every device we fabricated,” Dr Barrier recalls.
    Further investigation revealed that the proximity superconductivity originated not from the quantum Hall edge states propagating along domain walls, but rather from strictly 1D electronic states existing within the domain walls themselves. These 1D states, proven to exist by Professor Vladimir Fal’ko’s theory group at the National Graphene Institute, exhibited a greater ability to hybridise with superconductivity than quantum Hall edge states. The inherent one-dimensional nature of the interior states is believed to be responsible for the observed robust supercurrents at high magnetic fields.

    This discovery of single-mode 1D superconductivity shows exciting avenues for further research. “In our devices, electrons propagate in two opposite directions within the same nanoscale space and without scattering,” Dr Barrier elaborates. “Such 1D systems are exceptionally rare and hold promise for addressing a wide range of problems in fundamental physics.”
    The team has already demonstrated the ability to manipulate these electronic states using gate voltage and observe standing electron waves that modulated the superconducting properties.
    “It is fascinating to think what this novel system can bring us in the future. The 1D superconductivity presents an alternative path towards realising topological quasiparticles combining the quantum Hall effect and superconductivity,” concludes Dr Xin. “This is just one example of the vast potential our findings hold.”
    Twenty years after the advent of graphene, the first 2D material, this research by The University of Manchester represents another step forward in the field of superconductivity. The development of this novel 1D superconductor is expected to open doors for advancements in quantum technologies and pave the way for further exploration of new physics, attracting interest from various scientific communities.