More stories

  • Novel design helps develop powerful microbatteries

    Translating the electrochemical performance of large-format batteries to microscale power sources has been a long-standing technological challenge, limiting the ability of batteries to power microdevices, microrobots and implantable medical devices. University of Illinois Urbana-Champaign researchers have created a high-voltage microbattery (>9 V) with high energy and power density unparalleled by any existing battery design.
    Materials Science and Engineering Professor Paul Braun (Grainger Distinguished Chair in Engineering and Materials Research Laboratory Director), Dr. Sungbong Kim (postdoctoral researcher in MatSE, now an assistant professor at the Korea Military Academy, co-first author), and Arghya Patra (graduate student, MatSE, MRL, co-first author) recently published their paper “Serially integrated high-voltage and high-power miniature batteries” in Cell Reports Physical Science.
    The team demonstrated hermetically sealed (tightly closed to prevent exposure to ambient air), durable, compact, lithium batteries with exceptionally low package mass fraction in single-, double-, and triple-stacked configurations with unprecedented operating voltages, high power densities, and energy densities.
    Braun explains, “We need powerful tiny batteries to unlock the full potential of microscale devices, by improving the electrode architectures and coming up with innovative battery designs.” The problem is that as batteries become smaller, the packaging dominates the battery volume and mass while the electrode area becomes smaller. This results in drastic reductions in energy and power of the battery.
    In their unique design, the team developed a novel packaging technology that uses the positive and negative terminal current collectors as part of the packaging itself (rather than as a separate entity). This allowed for the compact volume (approximately 0.165 cm³) and low package mass fraction (10.2%) of the batteries. In addition, they vertically stacked the electrode cells in series (so the voltage of each cell adds), which enabled the high operating voltage of the battery.
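    As a rough illustration of the series-stacking arithmetic (a minimal sketch; the per-cell voltage and capacity below are assumed, typical lithium-cell values, not figures taken from the paper):

```python
# Toy illustration of series stacking: cell voltages add, capacity does not.
CELL_VOLTAGE_V = 3.7      # assumed nominal voltage of a single lithium cell
CELL_CAPACITY_MAH = 2.0   # assumed capacity of a single microcell, in mAh

for n_cells in (1, 2, 3):
    stack_voltage = n_cells * CELL_VOLTAGE_V   # voltages add in series
    stack_capacity = CELL_CAPACITY_MAH         # capacity stays that of one cell
    print(f"{n_cells}-cell stack: ~{stack_voltage:.1f} V, {stack_capacity} mAh")
```

    Under these assumed numbers, a triple stack lands above 9 V, consistent with the >9 V operating voltage the team reports.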
    Another way these microbatteries are improved is through very dense electrodes, which raise energy density. Conventional electrodes are almost 40% by volume polymers and carbon additives (not active materials). Braun’s group instead grows fully dense electrodes, free of polymer and carbon additives, by an intermediate-temperature direct electrodeposition technique. These fully dense electrodes offer more volumetric energy density than their commercial counterparts. The microbatteries in this research were fabricated using the dense electroplated DirectPlate™ LiCoO2 electrodes manufactured by Xerion Advanced Battery Corporation (XABC, Dayton, Ohio), a company that spun out of Braun’s research.
    Patra says, “To date, electrode architectures and cell designs at the micro-nano scale have been limited to power-dense designs that came at the cost of porosity and volumetric energy density. Our work has succeeded in creating a microscale energy source that exhibits both high power density and high volumetric energy density.”
    An important application space of these microbatteries includes powering insect-size microrobots to obtain valuable information during natural disasters, search and rescue missions, and in hazardous environments where direct human access is impossible. Co-author James Pikul (Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania) points out that “the high voltage is important for reducing the electronic payload that a microrobot needs to carry. 9 V can directly power motors and reduce the energy loss associated with boosting the voltage to the hundreds or thousands of volts needed by some actuators. This means that these batteries enable system-level improvements beyond their energy density enhancement so that the small robots can travel farther or send more critical information to human operators.”
    Kim adds, “Our work bridges the knowledge gap at the intersection of materials chemistry, unique materials manufacturing requirements for energy dense planar microbattery configurations, and applied nano-microelectronics that require a high-voltage, on-board type power source to drive microactuators and micromotors.”
    Braun, a pioneer in the field of battery miniaturization, concludes, “Our current microbattery design is well-suited for high-energy, high-power, high-voltage, single-discharge applications. The next step is to translate the design to all-solid-state microbattery platforms, batteries which would inherently be safer and more energy dense than their liquid-cell counterparts.”
    Other contributors to this work include Dr. James H. Pikul (Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania), Dr. John B. Cook (XABC), Dr. Ryan Kohlmeyer (XABC), Dr. Beniamin Zahiri (Research Assistant Professor, MRL, UIUC) and Dr. Pengcheng Sun (Research Scientist, MRL, UIUC).

  • New studies suggest social isolation is a risk factor for dementia in older adults, point to ways to reduce risk

    In two studies using nationally representative data from the National Health and Aging Trends Study gathered on thousands of Americans, researchers from the Johns Hopkins University School of Medicine and Bloomberg School of Public Health have significantly added to evidence that social isolation is a substantial risk factor for dementia in community-dwelling (noninstitutionalized) older adults, and identified technology as an effective way to intervene.
    Collectively, the studies do not establish a direct cause and effect between dementia and social isolation, defined as lack of social contact and interactions with people on a regular basis. But, the researchers say, the studies strengthen observations that such isolation increases the risk of dementia, and suggest that relatively simple efforts to increase social support of older adults — such as texting and use of email — may reduce that risk. In the United States, an estimated 1 in 4 people over age 65 experience social isolation, according to the National Institute on Aging.
    “Social connections matter for our cognitive health, and it is potentially easily modifiable for older adults without the use of medication,” says Thomas Cudjoe, M.D., M.P.H., assistant professor of medicine at the Johns Hopkins University School of Medicine and senior author of both of the new studies.
    The first study, described Jan. 11 in the Journal of the American Geriatrics Society, used data collected on a group of 5,022 Medicare beneficiaries for a long-term study known as the National Health and Aging Trends Study, which began in 2011. All participants were 65 or older, and were asked to complete an annual two-hour, in-person interview to assess cognitive function, health status and overall well-being.
    At the initial interview, 23% of the 5,022 participants were socially isolated and showed no signs of dementia. However, by the end of this nine-year study, 21% of the total sample of participants had developed dementia. The researchers concluded that risk of developing dementia over nine years was 27% higher among socially isolated older adults compared with older adults who were not socially isolated.
    “Socially isolated older adults have smaller social networks, live alone and have limited participation in social activities,” says Alison Huang, Ph.D., M.P.H., senior research associate at the Johns Hopkins Bloomberg School of Public Health. “One possible explanation is that having fewer opportunities to socialize with others decreases cognitive engagement as well, potentially contributing to increased risk of dementia.”
    Interventions to reduce that risk are possible, according to results of the second study, published Dec. 15 in the Journal of the American Geriatrics Society. Specifically, researchers found the use of communications technology such as telephone and email lowered the risk for social isolation.
    Researchers for the second study used data from participants in the same National Health and Aging Trends study, and found that more than 70% of people age 65 and up who were not socially isolated at their initial appointment had a working cellphone and/or computer, and regularly used email or texting to initiate and respond to others. Over the four-year research period for this second study, older adults who had access to such technology consistently showed a 31% lower risk for social isolation than the rest of the cohort.
    “Basic communications technology is a great tool to combat social isolation,” says Mfon Umoh, M.D., Ph.D., postdoctoral fellow in geriatric medicine at the Johns Hopkins University School of Medicine. “This study shows that access and use of simple technologies are important factors that protect older adults against social isolation, which is associated with significant health risks. This is encouraging because it means simple interventions may be meaningful.”
    Social isolation has gained significant attention in the past decade, especially due to restrictions implemented for the COVID-19 pandemic, but more work needs to be done to identify at-risk populations and create tools for providers and caregivers to minimize risk, the researchers say. Future research in this area should focus on increased risks based on biological sex, physical limitations, race and income level.
    Other scientists who contributed to this research are Laura Prichett, Cynthia Boyd, David Roth, Tom Cidav, Shang-En Chung, Halima Amjad, and Roland Thorpe of the Johns Hopkins University School of Medicine and Bloomberg School of Public Health.
    This research was funded by the Caryl & George Bernstein Human Aging Project, the Johns Hopkins University Center for Innovative Medicine, the National Center for Advancing Translational Sciences, the National Institute on Aging, the Secunda Family Foundation, the Patient-Centered Care for Older Adults with Multiple Chronic Conditions, and the National Institute on Minority Health and Health Disparities.

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.
    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.
    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.
    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.
    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.
    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears in the January-February issue of IEEE Micro.

    Modeling emissions
    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.
    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
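    A minimal sketch of that equation with illustrative inputs follows. The fleet size, compute power, and driving hours are the figures quoted in this article; the grid carbon intensity is an assumed round number, not a value from the MIT study.

```python
# Back-of-envelope version of the emissions model described above:
# emissions = fleet size x compute power x hours driven x carbon intensity.
N_VEHICLES = 1_000_000_000           # 1 billion autonomous vehicles (from the article)
COMPUTER_POWER_KW = 0.840            # 840 W per onboard computer (from the article)
HOURS_PER_DAY = 1.0                  # one hour of driving per day (from the article)
CARBON_INTENSITY_KG_PER_KWH = 0.5    # assumed grid average, ~500 g CO2 per kWh

energy_kwh_per_year = N_VEHICLES * COMPUTER_POWER_KW * HOURS_PER_DAY * 365
emissions_gt_per_year = energy_kwh_per_year * CARBON_INTENSITY_KG_PER_KWH / 1e12

print(f"Compute energy: {energy_kwh_per_year / 1e9:.0f} TWh per year")
print(f"Emissions: ~{emissions_gt_per_year:.2f} Gt CO2 per year")
```

    With these inputs the fleet’s computers draw roughly 300 TWh a year and emit on the order of 0.15 Gt of CO2, which is in the same ballpark as the roughly 0.3 percent of global greenhouse gas emissions attributed to data centers above.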
    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.
    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.
    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.
    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
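    The 21.6 million figure follows from a straightforward count. The frame rate below is an assumption chosen because it reproduces the article’s numbers; it is not a parameter stated here.

```python
# Inference count for one vehicle, then scaled to a 1-billion-vehicle fleet.
N_NETWORKS = 10          # deep neural networks per vehicle (from the article)
N_CAMERAS = 10           # cameras per vehicle (from the article)
FRAME_RATE_HZ = 60       # assumed frames per second per camera
SECONDS_DRIVEN = 3600    # one hour of driving per day (from the article)

per_vehicle_per_day = N_NETWORKS * N_CAMERAS * FRAME_RATE_HZ * SECONDS_DRIVEN
fleet_per_day = per_vehicle_per_day * 1_000_000_000

print(f"Per vehicle: {per_vehicle_per_day:,} inferences per day")     # 21,600,000
print(f"Fleet of 1 billion: {fleet_per_day:.2e} inferences per day")  # ~2.16e+16
```

    That fleet-wide total, about 2.16 × 10^16 inferences a day, is the 21.6 quadrillion quoted above.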
    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.
    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.
    Keeping emissions in check
    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to use less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.
    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.
    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.
    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.
    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.
    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Screen-printing method can make wearable electronics less expensive

    The glittering, serpentine structures that power wearable electronics can be created with the same technology used to print rock concert t-shirts, new research shows.
    The study, led by Washington State University researchers, demonstrates that electrodes can be made using just screen printing, creating a stretchable, durable circuit pattern that can be transferred to fabric and worn directly on human skin. Such wearable electronics can be used for health monitoring in hospitals or at home.
    “We wanted to make flexible, wearable electronics in a way that is much easier, more convenient and lower cost,” said corresponding author Jong-Hoon Kim, associate professor in WSU Vancouver’s School of Engineering and Computer Science. “That’s why we focused on screen printing: it’s easy to use. It has a simple setup, and it is suitable for mass production.”
    Current commercial manufacturing of wearable electronics requires expensive processes involving clean rooms. While some use screen printing for parts of the process, this new method relies wholly on screen printing, which has advantages for manufacturers and ultimately, consumers.
    In the study, published in the journal ACS Applied Materials & Interfaces, Kim and his colleagues detail the electrode screen-printing process and demonstrate how the resulting electrodes can be used for electrocardiogram (ECG) monitoring.
    They used a multi-step process to layer polymer and metal inks, creating the electrode’s snake-like structures. While the resulting thin pattern appears delicate, the electrodes are not fragile: the study showed they could be stretched by 30% and bent to 180 degrees.
    Multiple electrodes are printed onto a pre-treated glass slide, which allows them to be easily peeled off and transferred onto fabric or other material. After printing the electrodes, the researchers transferred them onto an adhesive fabric that was then worn directly on the skin by volunteers. The wireless electrodes accurately recorded heart and respiratory rates, sending the data to a mobile phone.
    While this study focused on ECG monitoring, the screen-printing process can be used to create electrodes for a range of uses, including those that serve similar functions to smart watches or fitness trackers, Kim said.
    Kim’s lab is currently working on expanding this technology to print different electrodes as well as entire electronic chips and even potentially whole circuit boards.
    In addition to Kim, co-authors on the study include researchers from the Georgia Institute of Technology and Pukyong National University in South Korea, as well as others from WSU Vancouver. This research received support from the National Science Foundation.

  • Computer models determine drug candidate's ability to bind to proteins

    Combining computational physics with experimental data, University of Arkansas researchers have developed computer models for determining a drug candidate’s ability to target and bind to proteins within cells.
    If accurate, such an estimator could demonstrate binding affinity computationally and thus spare experimental researchers from having to investigate millions of chemical compounds. The work could substantially reduce the cost and time associated with developing new drugs.
    “We developed a theoretical framework for estimating ligand-protein binding,” said Mahmoud Moradi, associate professor of chemistry and biochemistry in the Fulbright College of Arts and Sciences. “The proposed method assigns an effective energy to the ligand at every grid point in a coordinate system, which has its origin at the most likely location of the ligand when it is in its bound state.”
    A ligand is a substance — an ion or molecule — such as a drug that binds to another molecule, such as a protein, to form a complex system that may cause or prevent a biological function.
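    As a rough illustration of how a grid of effective ligand energies can be turned into a binding estimate, here is a minimal sketch of the general Boltzmann-weighting idea; it is not the authors’ actual estimator, and the grid values, spacing, and temperature are invented for the example.

```python
import numpy as np

# Toy 3D grid of effective ligand energies (kcal/mol), with its origin at the
# most likely bound position; in practice these values come from simulation.
rng = np.random.default_rng(0)
grid = rng.normal(loc=2.0, scale=3.0, size=(20, 20, 20))
grid[8:12, 8:12, 8:12] -= 10.0        # a favorable pocket near the origin

KT = 0.593            # kcal/mol at roughly 298 K
VOXEL_VOLUME = 0.125  # cubic angstroms per grid point (0.5 A spacing), assumed
V_STANDARD = 1661.0   # cubic angstroms per molecule at 1 M standard state

# Boltzmann-weighted configurational integral over the binding-site grid,
# referenced to the standard-state volume.
z_bound = np.sum(np.exp(-grid / KT)) * VOXEL_VOLUME
delta_g_bind = -KT * np.log(z_bound / V_STANDARD)

print(f"Estimated binding free energy: {delta_g_bind:.1f} kcal/mol")
```

    The point of the sketch is only that once every grid point carries an effective energy, a single weighted sum over the grid yields a binding free energy that can be compared against experiment.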
    Moradi’s research focuses on computational simulations of diseases, including coronavirus. For this project, he collaborated with Suresh Thallapuranam, professor of biochemistry and the Cooper Chair of Bioinformatics Research.
    Moradi and Thallapuranam used biased simulations — as well as non-parametric re-weighting techniques to account for the bias — to create a binding estimator that was computationally efficient and accurate. They then used a mathematically robust technique called orientation quaternion formalism to further describe the ligand’s conformational changes as it bound to targeted proteins.
    The researchers tested this approach by estimating the binding affinity between human fibroblast growth factor 1 — a specific signaling protein — and heparin hexasaccharide 5, a popular medication.
    The project was conceived because Moradi and Thallapuranam were studying human fibroblast growth factor 1 protein and its mutants in the absence and presence of heparin. They found strong qualitative agreement between simulations and experimental results.
    “When it came to binding affinity, we knew that the typical methods we had at our disposal would not work for such a difficult problem,” Moradi said. “This is why we decided to develop a new method. We had a joyous moment when the experimental and computational data were compared with each other, and the two numbers matched almost perfectly.”
    The researchers’ work was published in Nature Computational Science.
    Moradi previously received attention for developing computational simulations of the behavior of SARS-CoV-2 spike proteins prior to fusion with human cell receptors. SARS-CoV-2 is the virus that causes COVID-19.

  • Now on the molecular scale: Electric motors

    Electric vehicles, powered by macroscopic electric motors, are increasingly prevalent on our streets and highways. These quiet and eco-friendly machines got their start nearly 200 years ago when physicists took the first tiny steps to bring electric motors into the world.
    Now a multidisciplinary team led by Northwestern University has made an electric motor you can’t see with the naked eye: an electric motor on the molecular scale.
    This early work — a motor that can convert electrical energy into unidirectional motion at the molecular level — has implications for materials science and particularly medicine, where the electric molecular motor could team up with biomolecular motors in the human body.
    “We have taken molecular nanotechnology to another level,” said Northwestern’s Sir Fraser Stoddart, who received the 2016 Nobel Prize in Chemistry for his work in the design and synthesis of molecular machines. “This elegant chemistry uses electrons to effectively drive a molecular motor, much like a macroscopic motor. While this area of chemistry is in its infancy, I predict one day these tiny motors will make a huge difference in medicine.”
    Stoddart, Board of Trustees Professor of Chemistry at the Weinberg College of Arts and Sciences, is a co-corresponding author of the study. The research was done in close collaboration with Dean Astumian, a molecular machine theorist and professor at the University of Maine, and William Goddard, a computational chemist and professor at the California Institute of Technology. Long Zhang, a postdoctoral fellow in Stoddart’s lab, is the paper’s first author and a co-corresponding author.
    Only 2 nanometers wide, the molecular motor is the first of its kind that can be produced in abundance. The motor is easy to make, operates quickly and does not produce any waste products.

    The study and a corresponding news brief were published today (Jan. 11) by the journal Nature.
    The research team focused on a certain type of molecule with interlocking rings known as catenanes held together by powerful mechanical bonds, so the components could move freely relative to each other without falling apart. (Stoddart decades ago played a key role in the creation of the mechanical bond, a new type of chemical bond that has led to the development of molecular machines.)
    The electric molecular motor is based specifically on a [3]catenane whose components ― a loop interlocked with two identical rings ― are redox active, i.e., they undergo unidirectional motion in response to changes in applied voltage. The researchers discovered that two rings are needed to achieve this unidirectional motion. Experiments showed that a [2]catenane, which has one loop interlocked with one ring, does not run as a motor.
    The synthesis and operation of molecules that perform the function of a motor ― converting external energy into directional motion ― has challenged scientists in the fields of chemistry, physics and molecular nanotechnology for some time.
    To achieve their breakthrough, Stoddart, Zhang and their Northwestern team spent more than four years on the design and synthesis of their electric molecular motor. This included a year working with UMaine’s Astumian and Caltech’s Goddard to complete the quantum mechanical calculations to explain the working mechanism behind the motor.

    “Controlling the relative movement of components on a molecular scale is a formidable challenge, so collaboration was crucial,” Zhang said. “Working with experts in synthesis, measurements, computational chemistry and theory enabled us to develop an electric molecular motor that works in solution.”
    A few examples of single-molecule electric motors have been reported, but they require harsh operating conditions, such as the use of an ultrahigh vacuum, and also produce waste.
    The next steps for their electric molecular motor, the researchers said, is to attach many of the motors to an electrode surface to influence the surface and ultimately do some useful work.
    “The achievement we report today is a testament to the creativity and productivity of our young scientists as well as their willingness to take risks,” Stoddart said. “This work gives me and the team enormous satisfaction.”
    Stoddart is a member of the International Institute for Nanotechnology and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

  • Project aims to expand language technologies

    Only a fraction of the 7,000 to 8,000 languages spoken around the world benefit from modern language technologies like voice-to-text transcription, automatic captioning, instantaneous translation and voice recognition. Carnegie Mellon University researchers want to expand the number of languages with automatic speech recognition tools available to them from around 200 to potentially 2,000.
    “A lot of people in this world speak diverse languages, but language technology tools aren’t being developed for all of them,” said Xinjian Li, a Ph.D. student in the School of Computer Science’s Language Technologies Institute (LTI). “Developing technology and a good language model for all people is one of the goals of this research.”
    Li is part of a research team aiming to simplify the data requirements languages need to create a speech recognition model. The team — which also includes LTI faculty members Shinji Watanabe, Florian Metze, David Mortensen and Alan Black — presented their most recent work, “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio,” at Interspeech 2022 in South Korea.
    Most speech recognition models require two data sets: text and audio. Text data exists for thousands of languages. Audio data does not. The team hopes to eliminate the need for audio data by focusing on linguistic elements common across many languages.
    Historically, speech recognition technologies have focused on a language’s phonemes. These distinct sounds that distinguish one word from another — like the “d” that differentiates “dog” from “log” and “cog” — are unique to each language. But languages also have phones, which describe how a word sounds physically. Multiple phones might correspond to a single phoneme. So even though separate languages may have different phonemes, their underlying phones could be the same.
    The LTI team is developing a speech recognition model that moves away from phonemes and instead relies on information about how phones are shared between languages, thereby reducing the effort to build separate models for each language. Specifically, it pairs the model with a phylogenetic tree — a diagram that maps the relationships between languages — to help with pronunciation rules. Through their model and the tree structure, the team can approximate the speech model for thousands of languages without audio data.
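    A toy sketch of that idea follows; the tree, languages, and phone inventories below are invented for illustration and are not taken from the ASR2K paper. When a language has no audio data, a plausible phone inventory can be borrowed from its nearest relatives in the phylogenetic tree.

```python
# Hypothetical mini phylogenetic tree: child -> parent.
TREE = {
    "Spanish": "Romance", "Portuguese": "Romance", "Italian": "Romance",
    "Romance": "Indo-European", "Hindi": "Indo-European",
}

# Known phone inventories (invented and heavily simplified).
PHONES = {
    "Spanish":    {"a", "e", "i", "o", "u", "p", "t", "k", "r"},
    "Portuguese": {"a", "e", "i", "o", "u", "p", "t", "k", "s"},
    "Hindi":      {"a", "i", "u", "p", "t", "k", "d", "b"},
}

def ancestors(lang):
    """Walk up the tree from a language toward the root, nearest first."""
    chain = []
    while lang in TREE:
        lang = TREE[lang]
        chain.append(lang)
    return chain

def approximate_inventory(target):
    """Approximate a zero-resource language's phones from its closest relatives."""
    for ancestor in ancestors(target):
        related = [lang for lang in PHONES if ancestor in ancestors(lang)]
        if related:
            # Pool (union) the phones of every relative under this ancestor.
            return set().union(*(PHONES[lang] for lang in related))
    return set()

# "Italian" has no inventory listed here, so it borrows from its Romance siblings.
print(sorted(approximate_inventory("Italian")))
```

    The real model works with pronunciation rules and acoustic representations rather than bare inventories, but the same intuition applies: information shared across related languages substitutes for the missing audio.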
    “We are trying to remove this audio data requirement, which helps us move from 100 or 200 languages to 2,000,” Li said. “This is the first research to target such a large number of languages, and we’re the first team aiming to expand language tools to this scope.”
    Still in an early stage, the research has improved existing language approximation tools by a modest 5%, but the team hopes it will serve as inspiration not only for their future work but also for that of other researchers.
    For Li, the work means more than making language technologies available to all. It’s about cultural preservation.
    “Each language is a very important factor in its culture. Each language has its own story, and if you don’t try to preserve languages, those stories might be lost,” Li said. “Developing this kind of speech recognition system and this tool is a step to try to preserve those languages.”
    Story Source: Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • The optical fiber that keeps data safe even after being twisted or bent

    Optical fibres are the backbone of our modern information networks. From long-range communication over the internet to high-speed information transfer within data centres and stock exchanges, optical fibre remains critical in our globalised world.
    Fibre networks are not, however, structurally perfect, and information transfer can be compromised when things go wrong. To address this problem, physicists at the University of Bath in the UK have developed a new kind of fibre designed to enhance the robustness of networks. This robustness could prove especially important in the coming age of quantum networks.
    The team has fabricated optical fibres (the flexible glass channels through which information is sent) that can protect light (the medium through which data is transmitted) using the mathematics of topology. Best of all, these modified fibres are easily scalable, meaning the structure of each fibre can be preserved over thousands of kilometres.
    The Bath study is published in the latest issue of Science Advances.
    Protecting light against disorder
    At its simplest, optical fibre, which typically has a diameter of 125 µm (similar to a thick strand of hair), comprises a core of solid glass surrounded by cladding. Light travels through the core, where it bounces along as though reflecting off a mirror.

    However, the pathway taken by an optical fibre as it crisscrosses the landscape is rarely straight and undisturbed: turns, loops, and bends are the norm. Distortions in the fibre can cause information to degrade as it moves between sender and receiver. “The challenge was to build a network that takes robustness into account,” said Physics PhD student Nathan Roberts, who led the research.
    “Whenever you fabricate a fibre-optic cable, small variations in the physical structure of the fibre are inevitably present. When deployed in a network, the fibre can also get twisted and bent. One way to counter these variations and defects is to ensure the fibre design process includes a real focus on robustness. This is where we found the ideas of topology useful.”
    To design this new fibre, the Bath team used topology, which is the mathematical study of quantities that remain unchanged despite continuous distortions to the geometry. Its principles are already applied to many areas of physics research. By connecting physical phenomena to unchanging numbers, the destructive effects of a disordered environment can be avoided.
    The fibre designed by the Bath team deploys topological ideas by including several light-guiding cores in a fibre, linked together in a spiral. Light can hop between these cores but becomes trapped at the edge thanks to the topological design. These edge states are protected against disorder in the structure.
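    A minimal coupled-mode sketch of how light can become pinned to the edge of an array of coupled cores is shown below. It uses a generic SSH-style chain with alternating coupling strengths, chosen purely for illustration; it is not the spiral geometry of the Bath fibre, and the coupling values are assumed.

```python
import numpy as np

# Coupled-mode (tight-binding) model: N cores in a chain, with the coupling
# strength alternating between weak and strong along the chain.
N = 20
T_WEAK, T_STRONG = 0.4, 1.0   # assumed coupling constants
couplings = [T_WEAK if i % 2 == 0 else T_STRONG for i in range(N - 1)]

H = np.zeros((N, N))
for i, t in enumerate(couplings):
    H[i, i + 1] = H[i + 1, i] = t

energies, modes = np.linalg.eigh(H)

# Topological signature: a mode pinned near zero energy whose intensity piles
# up on the outermost cores (the edges) instead of spreading through the bulk.
idx = np.argmin(np.abs(energies))
intensity = np.abs(modes[:, idx]) ** 2
print(f"Mode energy: {energies[idx]:+.4f}")
print("Intensity, 4 cores at each end:", np.round(intensity[:4], 3), np.round(intensity[-4:], 3))
print("Intensity, 4 central cores:    ", np.round(intensity[8:12], 3))
```

    Running the sketch shows a near-zero-energy mode whose intensity is concentrated on the end cores and is orders of magnitude smaller in the middle of the array, which is the sense in which such edge-guided light is protected against moderate disorder in the couplings.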
    Bath physicist Dr Anton Souslov, who co-authored the study as theory lead, said: “Using our fibre, light is less influenced by environmental disorder than it would be in an equivalent system lacking topological design.

    “By adopting optical fibres with topological design, researchers will have the tools to pre-empt and forestall signal-degrading effects by building inherently robust photonic systems.”
    Theory meets practical expertise
    Bath physicist Dr Peter Mosley, who co-authored the study as experimental lead, said: “Previously, scientists have applied the complex mathematics of topology to light, but here at the University of Bath we have lots of experience physically making optical fibres, so we put the mathematics together with our expertise to create topological fibre.”
    The team, which also includes PhD student Guido Baardink and Dr Josh Nunn from the Department of Physics, are now looking for industry partners to develop their concept further.
    “We are really keen to help people build robust communication networks and we are ready for the next phase of this work,” said Dr Souslov.
    Mr Roberts added: “We have shown that you can make kilometres of topological fibre wound around a spool. We envision a quantum internet where information will be transmitted robustly across continents using topological principles.”
    He also pointed out that this research has implications that go beyond communications networks. He said: “Fibre development is not only a technological challenge, but also an exciting scientific field in its own right.
    “Understanding how to engineer optical fibre has led to light sources from bright ‘supercontinuum’ that spans the entire visible spectrum right down to quantum light sources that produce individual photons — single particles of light.”
    The future is quantum
    Quantum networks are widely expected to play an important technological role in years to come. Quantum technologies have the capacity to store and process information in more powerful ways than ‘classical’ computers can today, as well as to send messages securely across global networks without any chance of eavesdropping.
    But the quantum states of light that transmit information are easily impacted by their environment, and finding a way to protect them is a major challenge. This work may be a step towards maintaining quantum information in fibre optics using topological design.