More stories


    Researchers create highly conductive metallic gel for 3D printing

    Researchers have developed a metallic gel that is highly electrically conductive and can be used to print three-dimensional (3D) solid objects at room temperature.
    “3D printing has revolutionized manufacturing, but we’re not aware of previous technologies that allowed you to print 3D metal objects at room temperature in a single step,” says Michael Dickey, co-corresponding author of a paper on the work and the Camille & Henry Dreyfus Professor of Chemical and Biomolecular Engineering at North Carolina State University. “This opens the door to manufacturing a wide range of electronic components and devices.”
    To create the metallic gel, the researchers start with a solution of micron-scale copper particles suspended in water. They then add a small amount of an indium-gallium alloy, which is a liquid metal at room temperature, and stir the mixture.
    As the mixture is stirred, the liquid metal and copper particles essentially stick to each other, forming a metallic gel “network” within the aqueous solution.
    “This gel-like consistency is important, because it means you have a fairly uniform distribution of copper particles throughout the material,” Dickey says. “This does two things. First, it means the network of particles connect to form electrical pathways. And second, it means that the copper particles aren’t settling out of solution and clogging the printer.”
    The resulting gel can be printed using a conventional 3D printing nozzle and retains its shape when printed. And, when allowed to dry at room temperature, the resulting 3D object becomes even more solid while retaining its shape.
    However, if users decide to apply heat to the printed object while it is drying, some interesting things can happen.
    The researchers found that the alignment of the particles influences how the material dries. For example, if you print a cylindrical object, the sides contract more than the top and bottom as the object dries. Drying at room temperature is slow enough that it causes no structural change. However, applying heat (for example, placing the object under a heat lamp at 80 degrees Celsius) speeds the drying enough to cause structural deformation. Because this deformation is predictable, you can make a printed object change shape after it is printed by controlling the pattern of the printed object and the amount of heat it is exposed to while drying.
    “Ultimately, this sort of four-dimensional printing — the traditional three dimensions, plus time — is one more tool that can be used to create structures with the desired dimensions,” Dickey says. “But what we find most exciting about this material is its conductivity.
    “Because the printed objects end up being as much as 97.5% metal, they are highly conductive. It’s obviously not as conductive as conventional copper wire, but it’s impossible to 3D print copper wire at room temperature. And what we’ve developed is far more conductive than anything else that can be printed. We’re pretty excited about the applications here.
    “We’re open to working with industry partners to explore potential applications, and are always happy to talk with potential collaborators about future directions for research,” Dickey says.
    The work was done with support from the National Natural Science Foundation of China, under grant number 52203101; and from the China Scholarship Council, under grant number 201906250075.


    Artificial cells demonstrate that ‘life finds a way’

    “Listen, if there’s one thing the history of evolution has taught us, it’s that life will not be contained. Life breaks free. It expands to new territories, and it crashes through barriers painfully, maybe even dangerously, but . . . life finds a way,” said Ian Malcolm, Jeff Goldblum’s character in Jurassic Park, the 1993 science fiction film about a park with living dinosaurs.
    You won’t find any Velociraptors lurking around evolutionary biologist Jay T. Lennon’s lab; however, Lennon, a professor in the College of Arts and Sciences Department of Biology at Indiana University Bloomington, and his colleagues have found that life does indeed find a way. Lennon’s research team has been studying a synthetically constructed minimal cell that has been stripped of all but its essential genes. The team found that the streamlined cell can evolve just as fast as a normal cell — demonstrating the capacity for organisms to adapt, even with an unnatural genome that would seemingly provide little flexibility.
    “It appears there’s something about life that’s really robust,” says Lennon. “We can simplify it down to just the bare essentials, but that doesn’t stop evolution from going to work.”
    For their study, Lennon’s team used the synthetic organism Mycoplasma mycoides JCVI-syn3B, a minimized version of the bacterium M. mycoides commonly found in the guts of goats and similar animals. Over millennia, the parasitic bacterium has naturally lost many of its genes as it evolved to depend on its host for nutrition. Researchers at the J. Craig Venter Institute in California took this one step further. In 2016, they eliminated 45 percent of the 901 genes in the natural M. mycoides genome, reducing it to the smallest set of genes required for autonomous cellular life. At 493 genes, the minimal genome of M. mycoides JCVI-syn3B is the smallest of any known free-living organism. In comparison, many animal and plant genomes contain more than 20,000 genes.
    In principle, the simplest organism would have no functional redundancies and possess only the minimum number of genes essential for life. Any mutation in such an organism could lethally disrupt one or more cellular functions, placing constraints on evolution. Organisms with streamlined genomes have fewer targets upon which positive selection can act, thus limiting opportunities for adaptation.
    Although M. mycoides JCVI-syn3B could grow and divide in laboratory conditions, Lennon and colleagues wanted to know how a minimal cell would respond to the forces of evolution over time, particularly given the limited raw materials upon which natural selection could operate as well as the uncharacterized input of new mutations.
    “Every single gene in its genome is essential,” says Lennon in reference to M. mycoides JCVI-syn3B. “One could hypothesize that there is no wiggle room for mutations, which could constrain its potential to evolve.”
    The researchers first established that M. mycoides JCVI-syn3B has, in fact, an exceptionally high mutation rate. They then grew it in the lab, where it was allowed to evolve freely for 300 days, the equivalent of 2,000 bacterial generations or roughly 40,000 years of human evolution.
    The next step was to set up experiments to determine how the minimal cells that had evolved for 300 days performed in comparison to the original, non-minimal M. mycoides as well as to a strain of minimal cells that hadn’t evolved for 300 days. In the comparison tests, the researchers put equal amounts of the strains being assessed together in a test tube. The strain better suited to its environment became the more common strain.
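    One standard way to quantify the outcome of such head-to-head competitions (not necessarily the exact analysis used in this study) is to compute relative fitness from the initial and final abundance of each strain in the mixed culture. A minimal sketch, using purely hypothetical cell counts:

```python
import math

def relative_fitness(a_init, a_final, b_init, b_final):
    """Relative (Malthusian) fitness of strain A versus strain B,
    estimated from initial and final cell counts in a mixed
    competition culture. Values above 1 mean A outcompetes B."""
    return math.log(a_final / a_init) / math.log(b_final / b_init)

# Hypothetical counts: both strains start at 1e5 cells; A grows more.
w = relative_fitness(1e5, 8e6, 1e5, 2e6)
print(round(w, 2))  # > 1: strain A is better suited to the environment
```

    In this toy example the evolved strain's advantage shows up as a fitness ratio greater than one, mirroring how "the strain better suited to its environment became the more common strain."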
    They found that the non-minimal version of the bacterium easily outcompeted the unevolved minimal version. The minimal bacterium that had evolved for 300 days, however, did much better, effectively recovering all of the fitness that it had lost due to genome streamlining. The researchers identified the genes that changed the most during evolution. Some of these genes were involved in constructing the surface of the cell, while the functions of several others remain unknown.
    Understanding how organisms with simplified genomes overcome evolutionary challenges has important implications for long-standing problems in biology — including the treatment of clinical pathogens, the persistence of host-associated endosymbionts, the refinement of engineered microorganisms, and the origin of life itself. The research done by Lennon and his team demonstrates the power of natural selection to rapidly optimize fitness in the simplest autonomous organism, with implications for the evolution of cellular complexity. In other words, it shows that life finds a way.


    From atoms to materials: Algorithmic breakthrough unlocks path to sustainable technologies

    New research by the University of Liverpool could signal a step change in the quest to design the new materials that are needed to meet the challenge of net zero and a sustainable future.
    Publishing in the journal Nature, the Liverpool researchers have shown that a mathematical algorithm can predict, with guaranteed certainty, the structure of any material based solely on knowledge of the atoms that make it up.
    Developed by an interdisciplinary team of researchers from the University of Liverpool’s Departments of Chemistry and Computer Science, the algorithm systematically evaluates entire sets of possible structures at once, rather than considering them one at a time, to accelerate identification of the correct solution.
    This breakthrough makes it possible to identify those materials that can be made and, in many cases, to predict their properties. The new method was demonstrated on quantum computers that have the potential to solve many problems faster than classical computers and can therefore speed up the calculations even further.
    Our way of life depends on materials: “everything is made of something.” New materials are needed to meet the challenge of net zero, from batteries and solar absorbers for clean power to low-energy computing and the catalysts that will make the clean polymers and chemicals for our sustainable future.
    This search is slow and difficult because there are so many ways that atoms can be combined to make materials, and in particular so many structures that could form. In addition, materials with transformative properties are likely to have structures different from those known today, and predicting a structure about which nothing is known is a tremendous scientific challenge.
    Professor Matt Rosseinsky, from the University’s Department of Chemistry and Materials Innovation Factory, said: “Having certainty in the prediction of crystal structures now offers the opportunity to identify from the whole of the space of chemistry exactly which materials can be synthesised and the structures that they will adopt, giving us for the first time the ability to define the platform for future technologies.
    “With this new tool, we will be able to define how to use those chemical elements that are widely available and begin to create materials to replace those based on scarce or toxic elements, as well as to find materials that outperform those we rely on today, meeting the future challenges of a sustainable society.”
    Professor Paul Spirakis, from the University’s Department of Computer Science, said: “We managed to provide a general algorithm for crystal structure prediction that can be applied to a diversity of structures. Coupling local minimization to integer programming allowed us to explore the unknown atomic positions in the continuous space using strong optimization methods in a discrete space.
    “Our aim is to explore and use more algorithmic ideas in the nice adventure of discovering new and useful materials. Joining efforts of chemists and computer scientists was the key to this success.”
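    The Liverpool algorithm itself is not described in enough detail here to reproduce, but the general idea in the quote, coupling a discrete search over candidate atomic arrangements with continuous local minimization, can be illustrated with a toy one-dimensional system. Everything below (the grid of candidate sites, the Lennard-Jones pair potential, the crude gradient descent) is an illustrative assumption, not the published method:

```python
import itertools

def lj(r):
    # Lennard-Jones pair energy in reduced units (sigma = epsilon = 1)
    return 4 * (r**-12 - r**-6)

def energy(xs):
    # Total energy of a set of atom positions on a line
    return sum(lj(abs(a - b)) for a, b in itertools.combinations(xs, 2))

def local_minimize(xs, step=1e-3, iters=2000):
    """Continuous stage: crude finite-difference gradient descent
    refining atom positions away from the discrete grid."""
    xs, h = list(xs), 1e-6
    for _ in range(iters):
        grad = []
        for i in range(len(xs)):
            up, dn = xs[:], xs[:]
            up[i] += h
            dn[i] -= h
            grad.append((energy(up) - energy(dn)) / (2 * h))
        xs = [x - step * g for x, g in zip(xs, grad)]
    return xs

# Discrete stage: brute-force search over integer site assignments
# (a stand-in for the integer-programming step in the quote).
sites = [0.4 * i for i in range(10)]            # candidate lattice sites
best = min(itertools.combinations(sites, 3), key=energy)
relaxed = local_minimize(best)                  # refine the best discrete guess
print(energy(best), energy(relaxed))
```

    The discrete stage narrows the search to the best arrangement on a coarse grid; the continuous stage then relaxes that guess to a nearby lower-energy structure, which is the division of labour the quote describes.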
    The research team includes researchers from the University of Liverpool’s Departments of Computer Science and Chemistry, the Materials Innovation Factory and the Leverhulme Research Centre for Functional Materials Design, which was established to develop new approaches to the design of functional materials at the atomic scale through interdisciplinary research.
    This project has received funding from the Leverhulme Trust and the Royal Society.


    Deciphering the thermodynamic arrow of time in large-scale complex networks

    Life, from the perspective of thermodynamics, is a system out of equilibrium that resists the tendency toward increasing disorder. In such a state, the dynamics are irreversible over time. This link between the tendency toward disorder and irreversibility was expressed as the arrow of time by the English physicist Arthur Eddington in 1927.
    Now, an international team including researchers from Kyoto University, Hokkaido University, and the Basque Center for Applied Mathematics has developed an exact solution for this temporal asymmetry, furthering our understanding of the behavior of biological systems, machine learning, and AI tools.
    “The study offers, for the first time, an exact mathematical solution of the temporal asymmetry — also known as entropy production — of nonequilibrium disordered Ising networks,” says co-author Miguel Aguilera of the Basque Center for Applied Mathematics.
    The researchers focused on a prototype of large-scale complex networks called the Ising model, a tool used to study recurrently connected neurons. When connections between neurons are symmetric, the Ising model is in a state of equilibrium and presents complex disordered states called spin glasses. The mathematical solution of this state led to the award of the 2021 Nobel Prize in physics to Giorgio Parisi.
    Unlike living systems, however, spin glasses are in equilibrium, and their dynamics are time-reversible. The researchers instead worked on the time-irreversible Ising dynamics that arise from asymmetric connections between neurons.
    The exact solutions obtained serve as benchmarks for developing and validating approximate methods for training artificial neural networks, which may in turn advance machine-learning studies.
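    As an illustrative toy (not the paper's exact solution for disordered Ising networks), the entropy production that quantifies temporal asymmetry can be computed exactly for any small Markov chain: it is zero precisely when detailed balance holds (time-reversible dynamics) and positive for driven dynamics such as a biased three-state cycle. All numbers below are made up for illustration:

```python
import math

# Transition matrix of a 3-state chain with a clockwise bias
# (rows = current state, columns = next state). The bias breaks
# detailed balance, making the dynamics time-irreversible.
P = [[0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8],
     [0.8, 0.1, 0.1]]

def stationary(P, iters=10_000):
    """Power-iterate a row vector to the stationary distribution."""
    pi = [1 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

def entropy_production(P):
    """Steady-state entropy production rate:
    sigma = sum_ij pi_i P_ij ln( pi_i P_ij / (pi_j P_ji) ).
    Assumes every transition has a nonzero reverse probability.
    Zero exactly when detailed balance (reversibility) holds."""
    pi, n = stationary(P), len(P)
    return sum(pi[i] * P[i][j] * math.log((pi[i] * P[i][j]) / (pi[j] * P[j][i]))
               for i in range(n) for j in range(n))

print(entropy_production(P))    # positive: irreversible dynamics
sym = [[0.2, 0.4, 0.4], [0.4, 0.2, 0.4], [0.4, 0.4, 0.2]]
print(entropy_production(sym))  # zero: symmetric, reversible dynamics
```

    The symmetric chain plays the role of the equilibrium spin glass (reversible, zero entropy production), while the biased chain mimics the asymmetric connections the researchers studied.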
    “The Ising model underpins recent advances in deep learning and generative artificial neural networks. So, understanding its behavior offers critical insights into both biological and artificial intelligence in general,” added Hideaki Shimazaki at KyotoU’s Graduate School of Informatics.
    “Our findings are the result of an exciting collaboration involving insights from physics, neuroscience and mathematical modeling,” remarked Aguilera. “The multidisciplinary approach has opened the door to novel ways to understand the organization of large-scale complex networks and perhaps decipher the thermodynamic arrow of time.”


    Growing bio-inspired polymer brains for artificial neural networks

    A new method for connecting neurons in neuromorphic wetware has been developed by researchers from Osaka University and Hokkaido University. The wetware comprises conductive polymer wires grown in a three-dimensional configuration by applying a square-wave voltage to electrodes submerged in a precursor solution. The voltage can also modify a wire’s conductance, allowing the network to be trained. The fabricated network is able to perform unsupervised Hebbian learning and spike-based learning.
    The development of neural networks to create artificial intelligence in computers was originally inspired by how biological systems work. These ‘neuromorphic’ networks, however, run on hardware that looks nothing like a biological brain, which limits performance. Now, researchers from Osaka University and Hokkaido University plan to change this by creating neuromorphic ‘wetware’.
    While neural-network models have achieved remarkable success in applications such as image generation and cancer diagnosis, they still lag far behind the general processing abilities of the human brain. In part, this is because they are implemented in software using traditional computer hardware that is not optimized for the millions of parameters and connections that these models typically require.
    Neuromorphic wetware, based on memristive devices, could address this problem. A memristive device is one whose resistance depends on its history of applied voltage and current. In this approach, electropolymerization is used to link electrodes immersed in a precursor solution using wires made of conductive polymer. The resistance of each wire is then tuned using small voltage pulses, resulting in a memristive device.
    “The potential to create fast and energy-efficient networks has been shown using 1D or 2D structures,” says senior author Megumi Akai-Kasaya. “Our aim was to extend this approach to the construction of a 3D network.”
    The researchers were able to grow polymer wires from a common polymer mixture called ‘PEDOT:PSS’, which is highly conductive, transparent, flexible, and stable. A 3D structure of top and bottom electrodes was first immersed in a precursor solution. The PEDOT:PSS wires were then grown between selected electrodes by applying a square-wave voltage on these electrodes, mimicking the formation of synaptic connections through axon guidance in an immature brain.
    Once the wire was formed, the characteristics of the wire, especially the conductance, were controlled using small voltage pulses applied to one electrode, which changes the electrical properties of the film surrounding the wires.
    “The process is continuous and reversible,” explains lead author Naruki Hagiwara, “and this characteristic is what enables the network to be trained, just like software-based neural networks.”
    The fabricated network was used to demonstrate unsupervised Hebbian learning (i.e., when synapses that often fire together strengthen their shared connection over time). What’s more, the researchers were able to precisely control the conductance values of the wires so that the network could complete its tasks. Spike-based learning, another approach to neural networks that more closely mimics the processes of biological neural networks, was also demonstrated by controlling the diameter and conductivity of the wires.
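    A software analogue of the Hebbian rule demonstrated in the wetware, in which a connection strengthens whenever its two endpoints are active together, can be sketched in a few lines. The units, activity patterns, and learning rate below are all hypothetical:

```python
def hebbian_train(patterns, n, eta=0.1, epochs=20):
    """Minimal Hebbian rule: a weight (standing in for a polymer
    wire's conductance) grows whenever its two endpoint units are
    active together; connections that never co-fire stay at zero."""
    w = [[0.0] * n for _ in range(n)]
    for _ in range(epochs):
        for x in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += eta * x[i] * x[j]  # fire together, wire together
    return w

# Two hypothetical binary activity patterns over 4 units:
# units 0 and 1 always co-fire, as do units 2 and 3.
patterns = [[1, 1, 0, 0], [0, 0, 1, 1]]
w = hebbian_train(patterns, 4)
print(w[0][1], w[0][2])  # co-firing pair strengthens; the other stays at zero
```

    In the wetware the same effect is achieved physically: repeated voltage pulses grow and tune the conductance of wires between co-active electrodes.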
    Next, by fabricating a chip with a larger number of electrodes and using microfluidic channels to supply the precursor solution to each electrode, the researchers hope to build a larger and more powerful network. Overall, the approach demonstrated in this study is a big step toward realizing neuromorphic wetware and closing the gap between the cognitive abilities of humans and computers.


    ‘Workplace AI revolution isn’t happening yet,’ survey shows

    The UK risks a growing divide between organisations who have invested in new, artificial intelligence-enabled digital technologies and those who haven’t, new research suggests.
    Only 36% of UK employers have invested in AI-enabled technologies such as industrial robots, chatbots, smart assistants and cloud computing over the past five years, according to a nationally representative survey from the Digital Futures at Work Research Centre (Digit). The survey was carried out between November 2021 and June 2022, with a second wave now underway.
    Academics at the University of Leeds, with colleagues at the Universities of Sussex and Cambridge, led the research, finding that just 10% of employers who hadn’t already invested in AI-enabled technologies were planning to invest in the next two years.
    The new data also points to a growing skills problem. Less than 10% of employers anticipated a need to make an investment in digital skills training in the coming years, despite 75% finding it difficult to recruit people with the right skills. Almost 60% of employers reported that none of their employees had received formal digital skills training in the past year.
    Lead researcher Professor Mark Stuart, Pro Dean for Research and Innovation at Leeds University Business School, said: “A mix of hope, speculation, and hype is fuelling a runaway narrative that the adoption of new AI-enabled digital technologies will rapidly transform the UK’s labour market, boosting productivity and growth. These hopes are often accompanied by fears about the consequences for jobs and even of existential risk.
    “However, our findings suggest there is a need to focus on a different policy challenge. The workplace AI revolution is not happening quite yet. Policymakers will need to address both low employer investment in digital technologies and low investment in digital skills, if the UK economy is to realise the potential benefits of digital transformation.”
    Stijn Broecke, Senior Economist at the Organisation for Economic Co-operation and Development (OECD), said: “At a time when AI is shifting digitalisation into a higher gear, it is important to move beyond the hype and have a debate that is driven by evidence rather than fear and anecdote. This new report by the Digital Futures at Work Research Centre (Digit) does exactly this and provides a nuanced picture of the impact of digital technologies on the workplace, highlighting both the risks and the opportunities.”
    The main reasons for investing were improving efficiency, productivity and product and service quality, according to the survey. On the other hand, the key reasons for non-investment were AI being irrelevant to the business activity, wider business risks and the nature of skills demanded.
    There was little evidence in this survey to suggest that investing in AI-enabled technology leads to job losses. In fact, digital adopters were more likely to have increased their employment in the five-year period before the survey.
    As policymakers race to keep up with new developments in technology, the researchers are now urging politicians to focus on the facts of AI in the workplace.
    The Employers’ Digital Practices at Work Survey is a key output of the Digital Futures at Work Research Centre, which is funded by the Economic and Social Research Council (ESRC) and co-led by the business schools of the Universities of Sussex and Leeds. The First Findings report will be available on the Digit website on Tuesday 4 July.


    Counting Africa’s largest bat colony

    Once a year, a small forest in Zambia becomes the site of one of the world’s greatest natural spectacles. In November, straw-colored fruit bats migrate from across the African continent to a patch of trees in Kasanka National Park. For reasons not yet known, the bats converge for three months in a small area of the park, forming the largest colony of bats anywhere in Africa. The exact number of bats in this colony, however, has never been known; estimates range anywhere from 1 to 10 million.
    A new method developed by the Max Planck Institute of Animal Behavior (MPI-AB) has now counted the colony with the greatest accuracy yet. The method uses GoPro cameras to record bats and then applies artificial intelligence (AI) to detect animals without the need for human observers. The method, published in the journal Ecosphere, produced an overall estimate of between 750,000 and 1,000,000 bats in Kasanka, making the colony the largest for bats by biomass anywhere in the world.
    “We’ve shown that cheap cameras, combined with AI, can be used to monitor large animal populations in ways that would otherwise be impossible,” says Ben Koger who is first author on the paper. “This approach will change what we know about the natural world and how we work to maintain it in the face of rapid human development and climate change.”
    Africa’s secret gardeners
    Even amongst the charismatic fauna of the African continent, the straw-colored fruit bat shines bright. By some estimates, it’s the most abundant mammal anywhere on the continent. And, by traveling up to two thousand kilometers every year, it’s also the most extreme long-distance migrant of any flying fox. From an environmental perspective, these merits matter a lot. By dispersing seeds as they fly over vast distances, the fruit bats are cardinal reforesters of degraded land — making them a “keystone” species on the African continent.
    Scientists have long sought to estimate colony sizes of this important species, but the challenges of manually counting very large populations have led to widely fluctuating numbers. That’s always frustrated Dina Dechmann, a biologist from the MPI-AB, who has studied straw-colored fruit bats for over 10 years. Concerned that she has witnessed a decline in numbers of these fruit bats over her career, Dechmann wanted a tool that could accurately reveal if populations were changing. That is, she needed a way of counting bats that was reproducible and comparable across time.
    “Straw-colored fruit bats are the secret gardeners of Africa,” says Dechmann. “They connect the continent in ways that no other seed disperser does. A loss of the species would be devastating for the ecosystem. So, if the population is decreasing at all, we urgently need to know.”
    Dechmann began talking to longtime collaborators Roland Kays from NC State University and Teague O’Mara from Southeastern Louisiana University, as well as Kasanka Trust, the Zambian conservation organization responsible for managing Kasanka National Park and protecting its colony of bats. Together, they wondered if advances in computer vision and artificial intelligence could improve the accuracy and efficiency of counting large and complex bat populations. To find out, they approached Ben Koger, then a doctoral student at the MPI-AB, who was an expert in using automated approaches to create ecological datasets.

    Accurate and comparable bat counts
    Koger worked to devise a method that could be used by scientists and conservation managers to efficiently quantify the complex system. His method comprised two main steps. First, nine GoPro cameras were set up evenly around the colony to record the bats as they left the roost at dusk. Second, Koger trained deep learning models to automatically detect and count bats in the videos. To test the method’s accuracy, the team manually counted bats in a sample of clips and found the AI was 95% accurate — it even worked well in dark conditions.
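    The trained deep-learning detectors themselves are not described here in reproducible detail, but the count-by-detection idea can be illustrated with a much cruder stand-in: threshold the difference between consecutive frames and count connected blobs of changed pixels. The synthetic frames below are purely illustrative:

```python
def count_blobs(prev, curr, thresh=50):
    """Toy detector: threshold the frame difference, then count
    connected components of changed pixels (4-connectivity flood
    fill). Each component is treated as one moving animal."""
    h, w = len(curr), len(curr[0])
    changed = [[abs(curr[y][x] - prev[y][x]) > thresh for x in range(w)]
               for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if changed[y][x] and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]  # flood-fill this component
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and changed[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return blobs

# Synthetic 8x8 grayscale frames: two bright "bats" appear in frame 2.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
curr[1][1] = curr[1][2] = 255  # bat 1 (two adjacent pixels)
curr[5][6] = 255               # bat 2
print(count_blobs(prev, curr))
```

    The real pipeline replaces this frame-differencing heuristic with trained detectors that remain accurate in clutter and darkness, but the counting logic, detect each animal once per frame and aggregate across cameras, is the same in spirit.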
    “Using more sophisticated technology to monitor a colony as giant as Kasanka’s could be prohibitively expensive because you’d need so much equipment,” says Koger. “But we could show that cheap cameras paired with our custom software algorithms did very well at detecting and counting bats at our study site. This is hugely important for monitoring the site in the future.”
    Recording bats over five nights, the new method counted between roughly 750,000 and 1,000,000 animals per night on average. This falls below previous counts at Kasanka; the authors note that the study might not have captured the peak of the bat migration, and some animals might have arrived after the count period. Even so, the study’s estimate makes Kasanka’s colony the heaviest congregation of bats anywhere in the world.
    Says Dechmann: “This is a game-changer for counting and conserving large populations of animals. Now, we have an efficient and reproducible way of monitoring animals over time. If we use this same method to census animals every year, we can actually say if the population is going up or down.”
    For the Kasanka colony, which is facing threats from agriculture and construction, Dechmann says that the need for accurate monitoring has never been more urgent.
    “It’s easy to assume that losing a few animals here and there from large populations won’t make a dent. But if we are to maintain the ecosystem services provided by these animals, we need to maintain their populations at meaningful levels. The Kasanka colony isn’t just one of many; it’s a sink colony of bats from across the subcontinent. Losing this colony would be devastating for Africa as a whole.”


    Limiting loss in leaky fibres

    A theoretical understanding of the relationship between the geometrical structure of hollow-core optical fibres and their leakage loss will inspire the design of novel low-loss fibres.
    Immense progress has been made in recent years to increase the efficiency of optical fibres through the design of cables that allow data to be transmitted both faster and at broader bandwidths. The greatest improvements have been made in the area of hollow-core fibres — a type of fibre that is notoriously ‘leaky’ yet also essential for many applications.
    Now, for the first time, scientists have figured out why some air-filled fibre designs work so much more efficiently than others.
    The puzzle has been solved by recent PhD graduate Dr Leah Murphy and Emeritus Professor David Bird from the Centre for Photonics and Photonic Materials at the University of Bath.
    The researchers’ theoretical and computational analysis gives a clear explanation for a phenomenon that other physicists have observed in practice: a hollow-core optical fibre that incorporates glass filaments into its design achieves ultra-low loss of light as it travels from source to destination.
    Dr Murphy said: “The work is exciting because it adds a new perspective to a 20-year-long conversation about how antiresonant, hollow-core fibres guide light. I’m really optimistic that this will encourage researchers to try out interesting new hollow-core fibre designs where light loss is kept ultra-low.”
    The communication revolution

    Optical fibres have transformed communications in recent years, playing a vital role in enabling the enormous growth of fast data transmission. Specially designed fibres have also become key in the fields of imaging, lasers and sensing (as seen, for instance, in pressure and temperature sensors used in harsh environments).
    The best fibres have some astounding properties: for example, a pulse of light can travel over 50 km along a standard silica glass fibre and still retain more than 10% of its original intensity (an equivalent would be the ability to see through 50 km of water).
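    That 50 km figure can be sanity-checked with the standard decibel measure of fibre loss: retaining 10% of the input power corresponds to a total loss of 10 dB, or 0.2 dB/km over 50 km, which is indeed typical of standard silica fibre:

```python
import math

def attenuation_db_per_km(power_fraction, length_km):
    """Attenuation implied by retaining `power_fraction` of the
    input power after `length_km` of fibre (loss in dB is
    -10 * log10 of the surviving power fraction)."""
    total_db = -10 * math.log10(power_fraction)
    return total_db / length_km

# Retaining 10% of the light over 50 km of silica fibre:
print(attenuation_db_per_km(0.10, 50))  # 0.2 dB/km
```

    The same function gives the much higher implied loss of leakier media; seawater, for instance, loses most visible light within tens of metres.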
    But the fact that light is guided through a solid material means current fibres have some drawbacks. Silica glass becomes opaque when the light it is attempting to transmit falls within the mid-infrared and ultraviolet ends of the electromagnetic spectrum. This means applications that need light at these wavelengths (such as spectrometry and instruments used by astrophysicists) cannot use standard fibres.
    In addition, high-intensity light pulses are distorted in standard fibres and they can even destroy the fibre itself.
    Researchers have been working hard to find solutions to these drawbacks, putting their efforts into developing optical fibres that guide light through air rather than glass.

    This, however, brings its own set of problems: a fundamental property of light is that it doesn’t like to be confined in a low-density region like air. Optical fibres that use air rather than glass are intrinsically leaky (the way a hosepipe would be if water could seep through the sides).
    The confinement loss (or leakage loss) is a measure of how much light intensity is lost as it moves through the fibres, and a key research goal is to improve the design of the fibre’s structure to minimise this loss.
    Hollow cores
    The most promising designs involve a central hollow core surrounded and confined by a specially designed cladding. Slotted within the cladding are hollow, ultra-thin-walled glass capillaries attached to an outer glass jacket.
    With this set-up, the loss performance of a hollow-core fibre is close to that of a conventional fibre.
    An intriguing feature of these hollow-core fibres is that a theoretical understanding of how and why they guide light so well has not kept up with experimental progress.
    For around two decades, scientists have had a good physical understanding of how the thin glass capillary walls that face the hollow core (green in the diagram) act to reflect light back into the core and thus prevent leakage. But a theoretical model that includes only this mechanism greatly overestimates the confinement loss, and the question of why real fibres guide light far more effectively than the simple theoretical model would predict has, until now, remained unanswered.
    Dr Murphy and Professor Bird describe their model in a paper published this week in the leading journal Optica.
    The theoretical and computational analysis focuses on the role played by sections of the glass capillary walls (red in the diagram) that face neither the inner core nor the outer wall of the fibre structure.
    The Bath researchers show that, as well as supporting the core-facing elements of the cladding, these sections play a crucial role in guiding light by imposing a structure on the wave fields of the propagating light (grey curved lines in the diagram). The authors have named the effect of these structures ‘azimuthal confinement’.
    Although the basic idea of how azimuthal confinement works is simple, the concept is shown to be remarkably powerful in explaining the relationship between the geometry of the cladding structure and the confinement loss of the fibre.
    Dr Murphy, first author of the paper, said: “We expect the concept of azimuthal confinement to be important to other researchers who are studying the effect of light leakage from hollow-core fibres, as well as those who are involved in developing and fabricating new designs.”
    Professor Bird, who led the project, added: “This was a really rewarding project that needed the time and space to think about things in a different way and then work through all the details.
    “We started working on the problem in the first lockdown and it has now been keeping me busy through the first year of my retirement. The paper provides a new way for researchers to think about leakage of light in hollow-core fibres, and I’m confident it will lead to new designs being tried out.”
    Dr Murphy was funded by the UK Engineering and Physical Sciences Research Council.