More stories

  • Now on the molecular scale: Electric motors

    Electric vehicles, powered by macroscopic electric motors, are increasingly prevalent on our streets and highways. These quiet and eco-friendly machines got their start nearly 200 years ago when physicists took the first tiny steps to bring electric motors into the world.
    Now a multidisciplinary team led by Northwestern University has made an electric motor you can’t see with the naked eye: an electric motor on the molecular scale.
    This early work — a motor that can convert electrical energy into unidirectional motion at the molecular level — has implications for materials science and particularly medicine, where the electric molecular motor could team up with biomolecular motors in the human body.
    “We have taken molecular nanotechnology to another level,” said Northwestern’s Sir Fraser Stoddart, who received the 2016 Nobel Prize in Chemistry for his work in the design and synthesis of molecular machines. “This elegant chemistry uses electrons to effectively drive a molecular motor, much like a macroscopic motor. While this area of chemistry is in its infancy, I predict one day these tiny motors will make a huge difference in medicine.”
    Stoddart, Board of Trustees Professor of Chemistry at the Weinberg College of Arts and Sciences, is a co-corresponding author of the study. The research was done in close collaboration with Dean Astumian, a molecular machine theorist and professor at the University of Maine, and William Goddard, a computational chemist and professor at the California Institute of Technology. Long Zhang, a postdoctoral fellow in Stoddart’s lab, is the paper’s first author and a co-corresponding author.
    Only 2 nanometers wide, the molecular motor is the first of its kind that can be produced in abundance. The motor is easy to make, operates quickly and does not produce any waste products.

    The study and a corresponding news brief were published today (Jan. 11) by the journal Nature.
    The research team focused on a type of molecule with interlocking rings known as catenanes, whose components are held together by powerful mechanical bonds so they can move freely relative to each other without falling apart. (Stoddart decades ago played a key role in the creation of the mechanical bond, a new type of chemical bond that has led to the development of molecular machines.)
    Specifically, the electric molecular motor is based on a [3]catenane whose components ― a loop interlocked with two identical rings ― are redox active, that is, they undergo unidirectional motion in response to changes in voltage. The researchers discovered that two rings are needed to achieve this unidirectional motion. Experiments showed that a [2]catenane, which has one loop interlocked with one ring, does not run as a motor.
    The synthesis and operation of molecules that perform the function of a motor ― converting external energy into directional motion ― has challenged scientists in the fields of chemistry, physics and molecular nanotechnology for some time.
    To achieve their breakthrough, Stoddart, Zhang and their Northwestern team spent more than four years on the design and synthesis of their electric molecular motor. This included a year working with UMaine’s Astumian and Caltech’s Goddard to complete the quantum mechanical calculations to explain the working mechanism behind the motor.

    “Controlling the relative movement of components on a molecular scale is a formidable challenge, so collaboration was crucial,” Zhang said. “Working with experts in synthesis, measurements, computational chemistry and theory enabled us to develop an electric molecular motor that works in solution.”
    A few examples of single-molecule electric motors have been reported, but they require harsh operating conditions, such as the use of an ultrahigh vacuum, and also produce waste.
    The next step for their electric molecular motor, the researchers said, is to attach many of the motors to an electrode surface to influence the surface and ultimately do some useful work.
    “The achievement we report today is a testament to the creativity and productivity of our young scientists as well as their willingness to take risks,” Stoddart said. “This work gives me and the team enormous satisfaction.”
    Stoddart is a member of the International Institute for Nanotechnology and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

  • Rare earth mining may be key to our renewable energy future. But at what cost?

    In spring 1949, three prospectors armed with Geiger counters set out to hunt for treasure in the arid mountains of southern Nevada and southeastern California.

    In the previous century, those mountains yielded gold, silver, copper and cobalt. But the men were looking for a different kind of treasure: uranium. The world was emerging from World War II and careening into the Cold War. The United States needed uranium to build its nuclear weapons arsenal. Mining homegrown sources became a matter of national security.

    After weeks of searching, the trio hit what they thought was pay dirt. Their instruments detected intense radioactivity in brownish-red veins of ore exposed in a rocky outcrop within California’s Clark Mountain Range. But instead of uranium, the brownish-red stuff turned out to be bastnaesite, a mineral bearing fluorine, carbon and 17 curious elements known collectively as rare earths. Traces of radioactive thorium, also in the ore, had set the Geiger counters pinging.

    As disappointing as that must have been, the bastnaesite still held value, and the prospectors sold their claim to the Molybdenum Corporation of America, later called Molycorp. The company was interested in mining the rare earths. During the mid-20th century, rare earth elements were becoming useful in a variety of ways: Cerium, for example, was the basis for a glass-polishing powder and europium lent luminescence to recently invented color television screens and fluorescent lamps.

    For the next few decades, the site, later dubbed Mountain Pass mine, was the world’s top source for rare earth elements, until two pressures became too much. By the late 1980s, China was intensively mining its own rare earths — and selling them at lower prices. And a series of toxic waste spills at Mountain Pass brought production at the struggling mine to a halt in 2002.

    But that wasn’t the end of the story. The green-tech revolution of the 21st century brought new attention to Mountain Pass, which later reopened and remains the only U.S. mine for rare earths.

    Rare earths are now integral to the manufacture of many carbon-neutral technologies — plus a whole host of tools that move the modern world. These elements are the building blocks of small, superefficient permanent magnets that keep smartphones buzzing, wind turbines spinning, electric vehicles zooming and more.

    Mining U.S. sources of rare earth elements, President Joe Biden’s administration stated in February 2021, is a matter of national security.

    Rare earths are not actually rare on Earth, but they tend to be scattered throughout the crust at low concentrations. And the ore alone is worth relatively little without the complex, often environmentally hazardous processing involved in converting the ore into a usable form, says Julie Klinger, a geographer at the University of Delaware in Newark. As a result, the rare earth mining industry is wrestling with a legacy of environmental problems.

    Rare earths are mined by digging vast open pits in the ground, which can contaminate the environment and disrupt ecosystems. When poorly regulated, mining can produce wastewater ponds filled with acids, heavy metals and radioactive material that might leak into groundwater. Processing the raw ore into a form useful to make magnets and other tech is a lengthy effort that takes large amounts of water and potentially toxic chemicals, and produces voluminous waste.

    “We need rare earth elements … to help us with the transition to a climate-safe future,” says Michele Bustamante, a sustainability researcher at the Natural Resources Defense Council in Washington, D.C. Yet “everything that we do when we’re mining is impactful environmentally,” Bustamante says.

    But there are ways to reduce mining’s footprint, says Thomas Lograsso, a metallurgist at the Ames National Laboratory in Iowa and the director of the Critical Materials Institute, a Department of Energy research center. Researchers are investigating everything from reducing the amount of waste produced during the ore processing to improving the efficiency of rare earth element separation, which can also cut down on the amount of toxic waste. Scientists are also testing alternatives to mining, such as recycling rare earths from old electronics or recovering them from coal waste.

    Much of this research is in partnership with the mining industry, whose buy-in is key, Lograsso says. Mining companies have to be willing to invest in making changes. “We want to make sure that the science and innovations that we do are driven by industry needs, so that we’re not here developing solutions that nobody really wants,” he says.

    Klinger says she’s cautiously optimistic that the rare earth mining industry can become less polluting and more sustainable, if such solutions are widely adopted. “A lot of gains come from the low-hanging fruit,” she says. Even basic hardware upgrades to improve insulation can reduce the fuel required to reach the high temperatures needed for some processing. “You do what you [can].”

    The environmental impact of rare earth mining

    Between the jagged peaks of California’s Clark range and the Nevada border sits a broad, flat, shimmering valley known as the Ivanpah Dry Lake. Some 8,000 years ago, the valley held water year-round. Today, like many such playas in the Mojave Desert, the lake is ephemeral, winking into appearance only after an intense rain and flash flooding. It’s a beautiful, stark place, home to endangered desert tortoises and rare desert plants like Mojave milkweed.

    From about 1984 to 1998, the Ivanpah Dry Lake was also a holding pen for wastewater piped in from Mountain Pass. The wastewater was a by-product of chemical processing to concentrate the rare earth elements in the mined rock, making it more marketable to companies that could then extract those elements to make specific products. Via a buried pipeline, the mine sent wastewater to evaporation ponds about 23 kilometers away, in and around the dry lake bed.

    The pipeline repeatedly ruptured over the years. At least 60 separate spills dumped an estimated 2,000 metric tons of wastewater containing radioactive thorium into the valley. Federal officials feared that local residents and visitors to the nearby Mojave National Preserve might be at risk of exposure to that thorium, which could lead to increased risk of lung, pancreatic and other cancers.

    Unocal Corporation, which had acquired Molycorp in 1977, was ordered to clean up the spill in 1997, and the company paid over $1.4 million in fines and settlements. Chemical processing of the raw ore ground to a halt. Mining operations stopped shortly afterward.

    Half a world away, another environmental disaster was unfolding. The vast majority — between 80 and 90 percent — of rare earth elements on the market since the 1990s have come from China. One site alone, the massive Bayan Obo mine in Inner Mongolia, accounted for 45 percent of rare earth production in 2019.

    Bayan Obo spans some 4,800 hectares, about half the size of Florida’s Walt Disney World resort. It is also one of the most heavily polluted places on Earth. Clearing the land to dig for ore meant removing vegetation in an area already prone to desertification, allowing the Gobi Desert to creep southward.

    In 2010, officials in the nearby city of Baotou noted that radioactive, arsenic- and fluorine-containing mine waste, or tailings, was being dumped on farmland and into local water supplies, as well as into the nearby Yellow River. The air was polluted by fumes and toxic dust that reduced visibility. Residents complained of nausea, dizziness, migraines and arthritis. Some had skin lesions and discolored teeth, signs of prolonged exposure to arsenic; others exhibited signs of brittle bones, indications of skeletal fluorosis, Klinger says.

    The country’s rare earth industry was causing “severe damage to the ecological environment,” China’s State Council wrote in 2010. The release of heavy metals and other pollutants during mining led to “the destruction of vegetation and pollution of surface water, groundwater and farmland.” The “excessive rare earth mining,” the council wrote, led to landslides and clogged rivers.

    Faced with these mounting environmental disasters, as well as fears that it was depleting its rare earth resources too rapidly, China slashed its export of the elements in 2010 by 40 percent. The new limits sent prices soaring and kicked off concern around the globe that China had too tight a stranglehold on these must-have elements. That, in turn, sparked investment in rare earth mining elsewhere.

    In 2010, there were few other places mining rare earths, with only minimal production from India, Brazil and Malaysia. A new mine in remote Western Australia came online in 2011, owned by mining company Lynas. The company dug into fossilized lava preserved within an ancient volcano called Mount Weld.

    Mount Weld didn’t have anywhere near the same sort of environmental impact seen in China: Its location was too remote and the mine was just a fraction of the size of Bayan Obo, according to Saleem Ali, an environmental planner at the University of Delaware. The United States, meanwhile, was eager to once again have its own source of rare earths — and Mountain Pass was still the best prospect.

    The Bayan Obo mine (shown) in China’s Inner Mongolia region was responsible for nearly half of the world’s rare earth production in 2019. Mining there has taken a heavy toll on the local residents and the environment. WU CHANGQING/VCG VIA GETTY IMAGES

    Mountain Pass mine gets revived

    After the Ivanpah Dry Lake mess, the Mountain Pass mine changed hands again. Chevron purchased it in 2005, but did not resume operations. Then, in 2008, a newly formed company called Molycorp Minerals purchased the mine with ambitious plans to create a complete rare earth supply chain in the United States.

    The goal was not just mining and processing ore, but also separating out the desirable elements and even manufacturing them into magnets. Currently, the separations and magnet manufacturing are done overseas, mostly in China. The company also proposed a plan to avoid spilling wastewater into nearby fragile habitats. Molycorp resumed mining, and introduced a “dry tailings” process — a method to squeeze 85 percent of the water out of its mine waste, forming a thick paste. The company would then store the immobilized, pasty residue in lined pits on its own land and recycle the water back into the facility.

    Unfortunately, Molycorp “was an epic debacle” from a business perspective, says Matt Sloustcher, senior vice president of communications and policy at MP Materials, current owner of Mountain Pass mine. Mismanagement ultimately led Molycorp to file for Chapter 11 bankruptcy in 2015. MP Materials bought the mine in 2017 and resumed mining later that year. By 2022, Mountain Pass mine was producing 15 percent of the world’s rare earths.

    MP Materials, too, has an ambitious agenda with plans to create a complete supply chain. And the company is determined not to repeat the mistakes of its predecessors. “We have a world-class … unbelievable deposit, an untapped potential,” says Michael Rosenthal, MP Materials’ chief operating officer. “We want to support a robust and diverse U.S. supply chain, be the magnetics champion in the U.S.”

    The challenges of separating rare earths

    On a hot morning in August, Sloustcher stands at the edge of the Mountain Pass mine, a giant hole in the ground, 800 meters across and up to 183 meters deep, big enough to be visible from space. It’s an impressive sight, and a good vantage point from which to describe a vision for the future. He points out the various buildings: where the ore is crushed and ground, where the ground rocks are chemically treated to slough off as much non–rare earth material as possible, and where the water is squeezed from that waste and the waste is placed into lined ponds.

    The end result is a highly concentrated rare earth oxide ore — still nowhere near the magnet-making stage. But the company has a three-stage plan “to restore the full rare earth supply to the United States,” from “mine to magnet,” Rosenthal says. Stage 1, begun in 2017, was to restart mining, crushing and concentrating the ore. Stage 2 will culminate in the chemical separation of the rare earth elements. And stage 3 will be magnet production, he says.

    Since coming online in 2017, MP Materials has shipped its concentrated ore to China for the next steps, including the arduous, hazardous process of separating the elements from one another. But in November, the company announced to investors that it had begun the preliminary steps for stage 2, a “major milestone” on the way to realizing its mine-to-magnet ambitions.

    With investments from the U.S. Department of Defense, the company is building two separations facilities. One plant will pull out lighter rare earth elements — those with smaller atomic numbers, including neodymium and praseodymium, both of which are key ingredients in the permanent magnets that power electric vehicles and many consumer electronics. MP Materials has additional grant money from the DOD to design and build a second processing plant to split apart the heavier rare earth elements such as dysprosium, also an ingredient in magnets, and yttrium, used to make superconductors and lasers.

    Like stage 2, stage 3 is already under way. In 2022, the company broke ground in Fort Worth, Texas, for a facility to produce neodymium magnets. And it inked a deal with General Motors to supply those magnets for electric vehicle motors.

    But separating the elements comes with its own set of environmental concerns.

    The process is difficult and leads to lots of waste. Rare earth elements are extremely similar chemically, which means they tend to stick together. Forcing them apart requires multiple sequential steps and a variety of powerful solvents to separate them one by one. Caustic sodium hydroxide causes cerium to drop out of the mix, for example. Other steps involve solutions containing organic molecules called ligands, which have a powerful thirst for metal atoms. The ligands can selectively bind to particular rare earth elements and pull them out of the mix.

    But one of the biggest issues plaguing this extraction process is its inefficiency, says Santa Jansone-Popova, an organic chemist at Oak Ridge National Laboratory in Tennessee. The scavenging of these metals is slow and imperfect, and companies have to go through a lot of extraction steps to get a sufficiently marketable amount of the elements. With the current chemical methods, “you need many, many, many stages in order to achieve the desired separation,” Jansone-Popova says. That makes the whole process “more complex, more expensive, and [it] produces more waste.”

    Under the aegis of the DOE’s Critical Materials Institute, Jansone-Popova and her colleagues have been hunting for a way to make the process more efficient, eliminating many of those steps. In 2022, the researchers identified a ligand that they say is much more efficient at snagging certain rare earths than the ligands now used in the industry. Industry partners are on board to try out the new process this year, she says.

    In addition to concerns about heavy metals and other toxic materials in the waste, there are lingering worries about the potential impacts of radioactivity on human health. The trouble is that there is still only limited epidemiological evidence of the impact of rare earth mining on human and environmental health, according to Ali, and much of that evidence is related to the toxicity of heavy metals such as arsenic. It’s also not clear, he says, how much of the concerns over radioactive waste are scientifically supported, due to the low concentration of radioactive elements in mined rare earths.

    Such concerns get international attention, however. In 2019, protests erupted in Malaysia over what activists called “a mountain of toxic waste,” about 1.5 million metric tons, produced by a rare earth separation facility near the Malaysian city of Kuantan. The facility is owned by Lynas, which ships its rare earth ore from Australia’s Mount Weld to the site. To dissolve the rare earths, the ore is cooked with sulfuric acid and then diluted with water. The residue that’s left behind can contain traces of radioactive thorium.

    Australian company Lynas built a plant near Kuantan, Malaysia, (shown in 2012) to separate and process the rare earth oxide ore mined at Mount Weld in Western Australia. Local protests erupted in 2019 over how the company disposes of its thorium-laced waste. GOH SENG CHONG/BLOOMBERG VIA GETTY IMAGES

    Lynas had no permanent storage for the waste, piling it up in hills near Kuantan instead. But the alarm over the potential radioactivity in those hills may be exaggerated, experts say. Lynas reports that workers at the site are exposed to less than 1.05 millisieverts per year, far below the radiation exposure threshold for workers of 20 millisieverts set by the International Atomic Energy Agency.

    “There’s a lot of misinformation about byproducts such as thorium.… The thorium from rare earth processing is actually very low-level radiation,” Ali says. “As someone who has been a committed environmentalist, I feel right now that there’s not much science-based decision making on these things.”

    Given the concerns over new mining, environmental think tanks like the World Resources Institute have been calling for more recycling of existing rare earth materials to reduce the need for new mining and processing.

    “The path to the future has to do with getting the most out of what we take out of the ground,” says Bustamante, of the NRDC. “Ultimately the biggest lever for change is not in the mining itself, but in the manufacturing, and what we do with those materials at the end of life.”

    That means using mined resources as efficiently as possible, but also recycling rare earths out of already existing materials. Getting more out of these materials can reduce the overall environmental impacts of the mining itself, she adds.

    That is a worthwhile goal, but recycling isn’t a silver bullet, Ali says. For one thing, there aren’t enough spent rare earth–laden batteries and other materials available at the moment for recycling. “Some mining will be necessary, [because] right now we don’t have the stock.” And that supply problem, he adds, will only grow as demand increases.

  • Project aims to expand language technologies

    Only a fraction of the 7,000 to 8,000 languages spoken around the world benefit from modern language technologies like voice-to-text transcription, automatic captioning, instantaneous translation and voice recognition. Carnegie Mellon University researchers want to expand the number of languages with automatic speech recognition tools available to them from around 200 to potentially 2,000.
    “A lot of people in this world speak diverse languages, but language technology tools aren’t being developed for all of them,” said Xinjian Li, a Ph.D. student in the School of Computer Science’s Language Technologies Institute (LTI). “Developing technology and a good language model for all people is one of the goals of this research.”
    Li is part of a research team aiming to simplify the data requirements languages need to create a speech recognition model. The team — which also includes LTI faculty members Shinji Watanabe, Florian Metze, David Mortensen and Alan Black — presented their most recent work, “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio,” at Interspeech 2022 in South Korea.
    Most speech recognition models require two data sets: text and audio. Text data exists for thousands of languages. Audio data does not. The team hopes to eliminate the need for audio data by focusing on linguistic elements common across many languages.
    Historically, speech recognition technologies have focused on a language’s phonemes. These distinct sounds that distinguish one word from another — like the “d” that differentiates “dog” from “log” and “cog” — are unique to each language. But languages also have phones, which describe how a word sounds physically. Multiple phones might correspond to a single phoneme. So even though separate languages may have different phonemes, their underlying phones could be the same.
    The LTI team is developing a speech recognition model that moves away from phonemes and instead relies on information about how phones are shared between languages, thereby reducing the effort to build separate models for each language. Specifically, it pairs the model with a phylogenetic tree — a diagram that maps the relationships between languages — to help with pronunciation rules. Through their model and the tree structure, the team can approximate the speech model for thousands of languages without audio data.
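    The paper itself is the authoritative description of the method, but the core idea can be sketched in a few lines. In the toy example below, a zero-resource language borrows the phone-to-phoneme mapping of its nearest documented relative in a made-up phylogenetic tree; all names and mappings are hypothetical placeholders, not data from the study.
```python
# Illustrative sketch only: a toy version of the idea described above, not the
# ASR2K code. All data structures and names here are hypothetical.

# A universal phone recognizer (not shown) outputs language-independent phones.
recognized_phones = ["d", "o", "g"]

# Toy phylogenetic tree: each language lists its closest relatives, nearest first.
phylo_neighbors = {
    "lang_no_audio": ["lang_relative_1", "lang_relative_2"],
}

# Phone-to-phoneme mappings are only known for languages with existing resources.
phone_to_phoneme = {
    "lang_relative_1": {"d": "d", "o": "oʊ", "g": "g"},
    "lang_relative_2": {"d": "d̪", "o": "o", "g": "ɡ"},
}

def approximate_phonemes(phones, target_lang):
    """Approximate a zero-resource language's phonemes by borrowing the
    phone-to-phoneme mapping of its nearest relative in the tree."""
    for relative in phylo_neighbors[target_lang]:
        mapping = phone_to_phoneme.get(relative)
        if mapping is not None:
            return [mapping.get(p, p) for p in phones]
    return phones  # fall back to raw phones if no relative has a mapping

print(approximate_phonemes(recognized_phones, "lang_no_audio"))
# ['d', 'oʊ', 'g'] -- phonemes borrowed from the closest documented relative
```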
    “We are trying to remove this audio data requirement, which helps us move from 100 or 200 languages to 2,000,” Li said. “This is the first research to target such a large number of languages, and we’re the first team aiming to expand language tools to this scope.”
    Still in an early stage, the research has improved existing language approximation tools by a modest 5%, but the team hopes it will serve as inspiration not only for their future work but also for that of other researchers.
    For Li, the work means more than making language technologies available to all. It’s about cultural preservation.
    “Each language is a very important factor in its culture. Each language has its own story, and if you don’t try to preserve languages, those stories might be lost,” Li said. “Developing this kind of speech recognition system and this tool is a step to try to preserve those languages.”
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  • The optical fiber that keeps data safe even after being twisted or bent

    Optical fibres are the backbone of our modern information networks. From long-range communication over the internet to high-speed information transfer within data centres and stock exchanges, optical fibre remains critical in our globalised world.
    Fibre networks are not, however, structurally perfect, and information transfer can be compromised when things go wrong. To address this problem, physicists at the University of Bath in the UK have developed a new kind of fibre designed to enhance the robustness of networks. This robustness could prove to be especially important in the coming age of quantum networks.
    The team has fabricated optical fibres (the flexible glass channels through which information is sent) that can protect light (the medium through which data is transmitted) using the mathematics of topology. Best of all, these modified fibres are easily scalable, meaning the structure of each fibre can be preserved over thousands of kilometres.
    The Bath study is published in the latest issue of Science Advances.
    Protecting light against disorder
    At its simplest, optical fibre, which typically has a diameter of 125 µm (similar to a thick strand of hair), comprises a core of solid glass surrounded by cladding. Light travels through the core, where it bounces along as though reflecting off a mirror.
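    As a rough illustration of that "mirror" picture, the snippet below computes the critical angle for total internal reflection using typical textbook values for the silica core and cladding indices; the numbers are generic assumptions, not parameters from the Bath fibre.
```python
# A back-of-the-envelope illustration of why light stays in the core: total
# internal reflection. The refractive indices below are typical textbook values
# for silica fibre, not figures from the Bath study.
import math

n_core = 1.450   # assumed core index
n_clad = 1.444   # assumed cladding index

# Light hitting the core/cladding boundary at an angle steeper than the
# critical angle is completely reflected back into the core.
critical_angle = math.degrees(math.asin(n_clad / n_core))
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle ≈ {critical_angle:.1f}°")        # ~85° from the normal
print(f"numerical aperture ≈ {numerical_aperture:.3f}") # acceptance of the fibre
```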

    However, the pathway taken by an optical fibre as it crisscrosses the landscape is rarely straight and undisturbed: turns, loops, and bends are the norm. Distortions in the fibre can cause information to degrade as it moves between sender and receiver. “The challenge was to build a network that takes robustness into account,” said Physics PhD student Nathan Roberts, who led the research.
    “Whenever you fabricate a fibre-optic cable, small variations in the physical structure of the fibre are inevitably present. When deployed in a network, the fibre can also get twisted and bent. One way to counter these variations and defects is to ensure the fibre design process includes a real focus on robustness. This is where we found the ideas of topology useful.”
    To design this new fibre, the Bath team used topology, which is the mathematical study of quantities that remain unchanged despite continuous distortions to the geometry. Its principles are already applied to many areas of physics research. By connecting physical phenomena to unchanging numbers, the destructive effects of a disordered environment can be avoided.
    The fibre designed by the Bath team deploys topological ideas by including several light-guiding cores in a fibre, linked together in a spiral. Light can hop between these cores but becomes trapped at the edge thanks to the topological design. These edge states are protected against disorder in the structure.
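    The study describes the actual multi-core spiral design; as a loose analogy, the toy model below uses the textbook Su-Schrieffer-Heeger (SSH) chain to show the generic phenomenon at work: edge states that stay pinned near zero energy even when the couplings are disordered. It illustrates topological protection in general, not the Bath fibre itself.
```python
# A toy illustration (not the Bath fibre model): a Su-Schrieffer-Heeger (SSH)
# chain, the textbook example of topological edge states that survive disorder
# in the couplings. The spiral of cores in the fibre plays an analogous role.
import numpy as np

rng = np.random.default_rng(0)
n_cells, t_intra, t_inter, disorder = 20, 0.5, 1.0, 0.2

def ssh_hamiltonian():
    """Tight-binding SSH chain with random (but chirality-preserving) hopping disorder."""
    n_sites = 2 * n_cells
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        t = t_intra if i % 2 == 0 else t_inter
        t += disorder * (rng.random() - 0.5)     # disorder in the couplings
        H[i, i + 1] = H[i + 1, i] = t
    return H

energies, states = np.linalg.eigh(ssh_hamiltonian())
edge_idx = np.argsort(np.abs(energies))[:2]       # the two states closest to zero energy

print("edge-state energies:", np.round(energies[edge_idx], 3))
# Their weight sits on the ends of the chain even with disorder present:
for k in edge_idx:
    prob = np.abs(states[:, k]) ** 2
    print("weight on outer four sites:", round(prob[:2].sum() + prob[-2:].sum(), 2))
```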
    Bath physicist Dr Anton Souslov, who co-authored the study as theory lead, said: “Using our fibre, light is less influenced by environmental disorder than it would be in an equivalent system lacking topological design.

    “By adopting optical fibres with topological design, researchers will have the tools to pre-empt and forestall signal-degrading effects by building inherently robust photonic systems.”
    Theory meets practical expertise
    Bath physicist Dr Peter Mosley, who co-authored the study as experimental lead, said: “Previously, scientists have applied the complex mathematics of topology to light, but here at the University of Bath we have lots of experience physically making optical fibres, so we put the mathematics together with our expertise to create topological fibre.”
    The team, which also includes PhD student Guido Baardink and Dr Josh Nunn from the Department of Physics, are now looking for industry partners to develop their concept further.
    “We are really keen to help people build robust communication networks and we are ready for the next phase of this work,” said Dr Souslov.
    Mr Roberts added: “We have shown that you can make kilometres of topological fibre wound around a spool. We envision a quantum internet where information will be transmitted robustly across continents using topological principles.”
    He also pointed out that this research has implications that go beyond communications networks. He said: “Fibre development is not only a technological challenge, but also an exciting scientific field in its own right.
    “Understanding how to engineer optical fibre has led to light sources from bright ‘supercontinuum’ that spans the entire visible spectrum right down to quantum light sources that produce individual photons — single particles of light.”
    The future is quantum
    Quantum networks are widely expected to play an important technological role in years to come. Quantum technologies have the capacity to store and process information in more powerful ways than ‘classical’ computers can today, as well as sending messages securely across global networks without any chance of eavesdropping.
    But the quantum states of light that transmit information are easily impacted by their environment and finding a way to protect them is a major challenge. This work may be a step towards maintaining quantum information in fibre optics using topological design.

  • Scientists use machine learning to fast-track drug formulation development

    Scientists at the University of Toronto have successfully tested the use of machine learning models to guide the design of long-acting injectable drug formulations. The potential for machine learning algorithms to accelerate drug formulation could reduce the time and cost associated with drug development, making promising new medicines available faster.
    The study was published today in Nature Communications and is one of the first to apply machine learning techniques to the design of polymeric long-acting injectable drug formulations.
    The multidisciplinary research is led by Christine Allen from the University of Toronto’s department of pharmaceutical sciences and Alán Aspuru-Guzik from the departments of chemistry and computer science. Both researchers are also members of the Acceleration Consortium, a global initiative that uses artificial intelligence and automation to accelerate the discovery of materials and molecules needed for a sustainable future.
    “This study takes a critical step towards data-driven drug formulation development with an emphasis on long-acting injectables,” said Christine Allen, professor in pharmaceutical sciences at the Leslie Dan Faculty of Pharmacy, University of Toronto. “We’ve seen how machine learning has enabled incredible leap-step advances in the discovery of new molecules that have the potential to become medicines. We are now working to apply the same techniques to help us design better drug formulations and, ultimately, better medicines.”
    Considered one of the most promising therapeutic strategies for the treatment of chronic diseases, long-acting injectables (LAI) are a class of advanced drug delivery systems that are designed to release their cargo over extended periods of time to achieve a prolonged therapeutic effect. This approach can help patients better adhere to their medication regimen, reduce side effects, and increase efficacy when injected close to the site of action in the body. However, achieving the optimal amount of drug release over the desired period of time requires the development and characterization of a wide array of formulation candidates through extensive and time-consuming experiments. This trial-and-error approach has created a significant bottleneck in LAI development compared to more conventional types of drug formulation.
    “AI is transforming the way we do science. It helps accelerate discovery and optimization. This is a perfect example of a ‘Before AI’ and an ‘After AI’ moment and shows how drug delivery can be impacted by this multidisciplinary research,” said Alán Aspuru-Guzik, professor in chemistry and computer science, University of Toronto who also holds the CIFAR Artificial Intelligence Research Chair at the Vector Institute in Toronto.
    To investigate whether machine learning tools could accurately predict the rate of drug release, the research team trained and evaluated a series of eleven different models, including multiple linear regression (MLR), random forest (RF), light gradient boosting machine (lightGBM), and neural networks (NN). The data set used to train the selected panel of machine learning models was constructed from previously published studies by the authors and other research groups.
    “Once we had the data set, we split it into two subsets: one used for training the models and one for testing. We then asked the models to predict the results of the test set and directly compared with previous experimental data. We found that the tree-based models, and specifically lightGBM, delivered the most accurate predictions,” said Pauric Bannigan, research associate with the Allen research group at the Leslie Dan Faculty of Pharmacy, University of Toronto.
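    A minimal sketch of that workflow is shown below, using synthetic data in place of the authors’ data set (which they have released on Zenodo); the feature columns and target are placeholders, and the point is only the train/test/compare pattern, not the published results.
```python
# A minimal sketch of the workflow described above, using synthetic data rather
# than the authors' data set. Column meanings and the target are placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from lightgbm import LGBMRegressor   # assumes the lightgbm package is installed

# Stand-in for formulation descriptors (polymer properties, drug loading, etc.)
# and the fraction of drug released at a given time point.
X, y = make_regression(n_samples=500, n_features=12, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLR": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "lightGBM": LGBMRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: test MAE = {mae:.1f}")
```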
    As a next step, to illustrate how machine learning models might be used to inform the design of new LAIs, the team used advanced analytical techniques to extract design criteria from the lightGBM model. This allowed the design of a new LAI formulation for a drug currently used to treat ovarian cancer. “Once you have a trained model, you can then work to interpret what the machine has learned and use that to develop design criteria for new systems,” said Bannigan. Once the formulation was prepared, its drug release rate was tested, further validating the predictions made by the lightGBM model. “Sure enough, the formulation had the slow-release rate that we were looking for. This was significant because in the past it might have taken us several iterations to get to a release profile that looked like this; with machine learning we got there in one,” he said.
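    The “advanced analytical techniques” are detailed in the paper; a very reduced stand-in for that interpretation step is simply to rank the trained model’s feature importances, as in the hypothetical sketch below.
```python
# A self-contained sketch of the "interpret what the machine has learned" step:
# ranking which formulation descriptors the boosted-tree model leans on most.
# Feature names are hypothetical; the authors' actual analysis is more involved.
from sklearn.datasets import make_regression
from lightgbm import LGBMRegressor   # assumes the lightgbm package is installed

feature_names = [f"descriptor_{i}" for i in range(8)]   # placeholder descriptors
X, y = make_regression(n_samples=400, n_features=8, n_informative=3, random_state=1)

model = LGBMRegressor(n_estimators=200, random_state=1).fit(X, y)

# Higher importance -> the descriptor influences the predicted release rate more,
# which is the starting point for deriving design criteria.
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)
for name, score in ranking[:3]:
    print(name, int(score))
```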
    The results of the current study are encouraging and signal the potential for machine learning to reduce the reliance on trial-and-error testing that slows the pace of development for long-acting injectables. However, the study’s authors note that the lack of available open-source data sets in pharmaceutical sciences represents a significant challenge to future progress. “When we began this project, we were surprised by the lack of data reported across numerous studies using polymeric microparticles,” said Allen. “This meant the studies and the work that went into them couldn’t be leveraged to develop the machine learning models we need to propel advances in this space. There is a real need to create robust databases in pharmaceutical sciences that are open access and available for all so that we can work together to advance the field.”
    To promote the move toward the accessible databases needed to support the integration of machine learning into pharmaceutical sciences more broadly, Allen and the research team have made their data sets and code available on the open-source platform Zenodo.
    “For this study our goal was to lower the barrier of entry to applying machine learning in pharmaceutical sciences,” said Bannigan. “We’ve made our data sets fully available so others can hopefully build on this work. We want this to be the start of something and not the end of the story for machine learning in drug formulation.”

  • The thermodynamics of quantum computing

    Heat and computers do not mix well. If computers overheat, they do not work well or may even crash. But what about the quantum computers of the future? These high-performance devices are even more sensitive to heat. This is because their basic computational units — quantum bits or “qubits” — are based on highly sensitive components, some of them individual atoms, and heat can be a crucial source of interference.
    The basic dilemma: In order to retrieve the information of a qubit, its quantum state must be destroyed. The heat released in the process can interfere with the sensitive quantum system. The quantum computer’s own heat generation could consequently become a problem, suspect physicists Wolfgang Belzig (University of Konstanz), Clemens Winkelmann (Néel Institute, Grenoble) and Jukka Pekola (Aalto University, Helsinki). In experiments, the researchers have now documented the heat generated by superconducting quantum systems. To do so, they developed a method that can measure and display the temperature curve with a time resolution of one millionth of a second throughout the process of reading out one qubit. “This means we are monitoring the process as it takes place,” says Wolfgang Belzig. The method was recently published in the journal Nature Physics.
    Superconducting quantum systems produce heat
    Until now, research on quantum computing has focused on the basics of getting these high-performance computers to work: Much research mainly involves the coupling of quantum bits and identifying which material systems are optimal for qubits. Little consideration has been given to heat generation: Especially in the case of superconducting qubits constructed using a supposedly ideal conducting material, researchers have often assumed that no heat is generated or that the amount is negligible. “That is simply not true,” Wolfgang Belzig says and adds: “People often think of quantum computers as idealized systems. However, even the circuitry of a superconducting quantum system produces heat.” How much heat, is what the researchers can now measure precisely.
    A thermometer for the quantum bit
    The measurement method was developed for superconducting quantum systems. These systems are based on superconducting circuits that use “Josephson junctions” as a central electronic element. “We measure the electron temperature based on the conductivity of such contacts. This is nothing special in and of itself: Many electronic thermometers are based in some way on measuring conductivity using a resistor. The only problem is: How quickly can you take the measurements?” Clemens Winkelmann explains. Changes to a quantum state take only a millionth of a second.
    “Our trick is to have the resistor measuring the temperature inside a resonator — an oscillating circuit — that produces a strong response at a certain frequency. This resonator oscillates at 600 megahertz and can be read out very quickly,” Winkelmann explains.
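    The published device is more subtle, but the readout idea can be caricatured with a simple circuit model: a temperature-dependent resistor damping an LC tank near 600 MHz, so the shape of the resonance tracks the electron temperature. The component values and the resistance-temperature law below are illustrative assumptions only, not the parameters of the experiment.
```python
# A deliberately simplified toy model of the readout idea: a temperature-
# dependent resistor damping an LC resonator near 600 MHz, so the height and
# width of the resonance track the electron temperature.
import numpy as np

L = 10e-9                                  # assumed inductance, 10 nH
C = 1.0 / ((2 * np.pi * 600e6) ** 2 * L)   # chosen so the resonance sits at 600 MHz
f = np.linspace(560e6, 640e6, 2001)        # probe frequencies
df = f[1] - f[0]

def resistance(temperature):
    """Toy monotonic mapping from electron temperature (K) to damping resistance (ohm)."""
    return 4000.0 / (1.0 + 10.0 * temperature)

def impedance(temperature):
    """Magnitude of the impedance of the parallel RLC tank vs. probe frequency."""
    R = resistance(temperature)
    w = 2 * np.pi * f
    Y = 1.0 / R + 1.0 / (1j * w * L) + 1j * w * C   # admittance of R, L, C in parallel
    return np.abs(1.0 / Y)

for T in (0.1, 0.3):                        # two electron temperatures in kelvin
    Z = impedance(T)
    width_mhz = (Z > Z.max() / np.sqrt(2)).sum() * df / 1e6
    print(f"T = {T} K: peak |Z| ≈ {Z.max():.0f} ohm, half-power width ≈ {width_mhz:.0f} MHz")
# A fast sweep (or a fixed-frequency probe) of the resonator therefore reads out
# the temperature on microsecond timescales.
```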
    Heat is always generated
    With their experimental evidence, the researchers want to draw attention to the thermodynamic processes of a quantum system. “Our message to the quantum computing world is: Be careful, and watch out for heat generation. We can even measure the exact amount,” Winkelmann adds.
    This heat generation could become particularly relevant for scaling up quantum systems. Wolfgang Belzig explains: “One of the greatest advantages of superconducting qubits is that they are so large, because this size makes them easy to build and control. On the other hand, this can be a disadvantage if you want to put many qubits on a chip. Developers need to take into account that more heat will be produced as a result and that the system needs to be cooled adequately.”
    This research was conducted in the context of the Collaborative Research Centre SFB 1432 “Fluctuations and Nonlinearities in Classical and Quantum Matter beyond Equilibrium” at the University of Konstanz.

  • AI developed to monitor changes to the globally important Thwaites Glacier

    Scientists have developed artificial intelligence techniques to track the development of crevasses — or fractures — on the Thwaites Glacier Ice Tongue in west Antarctica.
    A team of scientists from the University of Leeds and the University of Bristol have adapted an AI algorithm originally developed to identify cells in microscope images to spot crevasses forming in the ice from satellite images. Crevasses are indicators of stresses building up in the glacier.
    Thwaites is a particularly important part of the Antarctic Ice Sheet because it holds enough ice to raise global sea levels by around 60 centimetres and is considered by many to be at risk of rapid retreat, threatening coastal communities around the world.
    Use of AI will allow scientists to more accurately monitor and model changes to this important glacier.
    Published today (Monday, Jan 9) in the journal Nature Geoscience, the research focussed on a part of the glacier system where the ice flows into the sea and begins to float. Where this happens is known as the grounding line and it forms the start of the Thwaites Eastern Ice Shelf and the Thwaites Glacier Ice Tongue, which is also an ice shelf.
    Despite being small in comparison to the size of the entire glacier, changes to these ice shelves could have wide-ranging implications for the whole glacier system and future sea-level rise.

    The scientists wanted to know if crevassing or fracture formation in the glacier was more likely to occur with changes to the speed of the ice flow.
    Development of the algorithm
    Using machine learning, the researchers taught a computer to look at radar satellite images and identify changes over the last decade. The images were taken by the European Space Agency’s Sentinel-1 satellites, which can “see” through the top layer of snow and onto the glacier, revealing the fractured surface of the ice normally hidden from sight.
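    The paper describes the adapted network in detail; as a generic illustration of the kind of supervised image-segmentation setup involved, the sketch below trains a tiny encoder-decoder on random tensors standing in for radar tiles and crevasse masks. It is not the authors’ architecture or data.
```python
# A minimal sketch of an image-segmentation setup of the kind described above,
# written in PyTorch. The tiny encoder-decoder and the random tensors standing
# in for Sentinel-1 tiles and crevasse masks are illustrative only.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy encoder-decoder that maps a 1-channel radar tile to a crevasse mask."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # per-pixel crevasse logit
        )

    def forward(self, x):
        return self.decode(self.encode(x))

model = TinySegmenter()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-ins: a batch of 64x64 radar tiles and binary crevasse masks.
tiles = torch.randn(8, 1, 64, 64)
masks = (torch.rand(8, 1, 64, 64) > 0.9).float()

for step in range(5):                       # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(tiles), masks)
    loss.backward()
    optimizer.step()
print("final loss:", float(loss))
```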
    The analysis revealed that over the last six years, the Thwaites Glacier Ice Tongue has sped up and slowed down twice, by around 40% each time — from four km/year to six km/year before slowing. This is a substantial increase in the magnitude and frequency of speed change compared with past records.
    The study found a complex interplay between crevasse formation and speed of the ice flow. When the ice flow quickens or slows, more crevasses are likely to form. In turn, the increase in crevasses causes the ice to change speed as the level of friction between the ice and underlying rock alters.

    Dr Anna Hogg, a glaciologist in the Satellite Ice Dynamics group at Leeds and an author on the study, said: “Dynamic changes on ice shelves are traditionally thought to occur on timescales of decades to centuries, so it was surprising to see this huge glacier speed up and slow down so quickly.”
    “The study also demonstrates the key role that fractures play in un-corking the flow of ice — a process known as ‘unbuttressing’.
    “Ice sheet models must be evolved to account for the fact that ice can fracture, which will allow us to measure future sea level contributions more accurately.”
    Trystan Surawy-Stepney, lead author of the paper and a doctoral researcher at Leeds, added: “The nice thing about this study is the precision with which the crevasses were mapped.
    “It has been known for a while that crevassing is an important component of ice shelf dynamics and this study demonstrates that this link can be studied on a large scale with beautiful resolution, using computer vision techniques applied to the deluge of satellite images acquired each week.”
    Satellites orbiting the Earth provide scientists with new data over the most remote and inaccessible regions of Antarctica. The radar on board Sentinel-1 allows places like Thwaites Glacier to be imaged day or night, every week, all year round.
    Dr Mark Drinkwater of the European Space Agency commented: “Studies like this would not be possible without the large volume of high-resolution data provided by Sentinel-1. By continuing to plan future missions, we can carry on supporting work like this and broaden the scope of scientific research on vital areas of the Earth’s climate system.”
    As for Thwaites Glacier Ice Tongue, it remains to be seen whether such short-term changes have any impact on the long-term dynamics of the glacier, or whether they are simply isolated symptoms of an ice shelf close to its end.
    The paper — “Episodic dynamic change linked to damage on the Thwaites Glacier Ice Tongue” — was authored by Trystan Surawy-Stepney, Anna E. Hogg and Benjamin J. Davison, from the University of Leeds; and Stephen L. Cornford, from the University of Bristol.

  • New quantum computing architecture could be used to connect large-scale devices

    Quantum computers hold the promise of performing certain tasks that are intractable even on the world’s most powerful supercomputers. In the future, scientists anticipate using quantum computing to emulate materials systems, simulate quantum chemistry, and optimize hard tasks, with impacts potentially spanning finance to pharmaceuticals.
    However, realizing this promise requires resilient and extensible hardware. One challenge in building a large-scale quantum computer is that researchers must find an effective way to interconnect quantum information nodes — smaller-scale processing nodes separated across a computer chip. Because quantum computers are fundamentally different from classical computers, conventional techniques used to communicate electronic information do not directly translate to quantum devices. However, one requirement is certain: Whether via a classical or a quantum interconnect, the carried information must be transmitted and received.
    To this end, MIT researchers have developed a quantum computing architecture that will enable extensible, high-fidelity communication between superconducting quantum processors. In work published in Nature Physics, MIT researchers demonstrate step one, the deterministic emission of single photons — information carriers — in a user-specified direction. Their method ensures quantum information flows in the correct direction more than 96 percent of the time.
    Linking several of these modules enables a larger network of quantum processors that are interconnected with one another, no matter their physical separation on a computer chip.
    “Quantum interconnects are a crucial step toward modular implementations of larger-scale machines built from smaller individual components,” says Bharath Kannan PhD ’22, co-lead author of a research paper describing this technique.
    “The ability to communicate between smaller subsystems will enable a modular architecture for quantum processors, and this may be a simpler way of scaling to larger system sizes compared to the brute-force approach of using a single large and complicated chip,” Kannan adds.

    Kannan wrote the paper with co-lead author Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) at MIT. The senior author is William D. Oliver, a professor of electrical engineering and computer science and of physics, an MIT Lincoln Laboratory Fellow, director of the Center for Quantum Engineering, and associate director of RLE.
    Moving quantum information
    In a conventional classical computer, various components perform different functions, such as memory, computation, etc. Electronic information, encoded and stored as bits (which take the value of 1s or 0s), is shuttled between these components using interconnects, which are wires that move electrons around on a computer processor.
    But quantum information is more complex. Instead of only holding a value of 0 or 1, quantum information can also be both 0 and 1 simultaneously (a phenomenon known as superposition). Also, quantum information can be carried by particles of light, called photons. These added complexities make quantum information fragile, and it can’t be transported simply using conventional protocols.
    A quantum network links processing nodes using photons that travel through special interconnects known as waveguides. A waveguide can be either unidirectional, moving a photon only to the left or only to the right, or bidirectional.

    Most existing architectures use unidirectional waveguides, which are easier to implement since the direction in which photons travel is easily established. But since each waveguide only moves photons in one direction, more waveguides become necessary as the quantum network expands, which makes this approach difficult to scale. In addition, unidirectional waveguides usually incorporate additional components to enforce the directionality, which introduces communication errors.
    “We can get rid of these lossy components if we have a waveguide that can support propagation in both the left and right directions, and a means to choose the direction at will. This ‘directional transmission’ is what we demonstrated, and it is the first step toward bidirectional communication with much higher fidelities,” says Kannan.
    Using their architecture, multiple processing modules can be strung along one waveguide. A remarkable feature of the architecture’s design is that the same module can be used as both a transmitter and a receiver, he says. And photons can be sent and captured by any two modules along a common waveguide.
    “We have just one physical connection that can have any number of modules along the way. This is what makes it scalable. Having demonstrated directional photon emission from one module, we are now working on capturing that photon downstream at a second module,” Almanakly adds.
    Leveraging quantum properties
    To accomplish this, the researchers built a module comprising four qubits.
    Qubits are the building blocks of quantum computers, and are used to store and process quantum information. But qubits can also be used as photon emitters. Adding energy to a qubit causes the qubit to become excited, and then when it de-excites, the qubit will emit the energy in the form of a photon.
    However, simply connecting one qubit to a waveguide does not ensure directionality. A single qubit emits a photon, but whether it travels to the left or to the right is completely random. To circumvent this problem, the researchers utilize two qubits and a property known as quantum interference to ensure the emitted photon travels in the correct direction.
    The technique involves preparing the two qubits in an entangled state of single excitation called a Bell state. This quantum-mechanical state comprises two aspects: the left qubit being excited and the right qubit being excited. Both aspects exist simultaneously, but which qubit is excited at a given time is unknown.
    When the qubits are in this entangled Bell state, the photon is effectively emitted to the waveguide at the two qubit locations simultaneously, and these two “emission paths” interfere with each other. Depending on the relative phase within the Bell state, the resulting photon emission must travel to the left or to the right. By preparing the Bell state with the correct phase, the researchers choose the direction in which the photon travels through the waveguide.
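    That interference argument can be checked numerically with a toy two-path model: the photon amplitude in each direction is the sum of the two emission paths, with a relative phase set by the Bell state plus the propagation phase between the qubit positions (taken here as a quarter wavelength). This is a textbook-style illustration, not a model of the MIT device.
```python
# Toy numerical check of the interference picture: two emitters a quarter-
# wavelength apart, prepared in a single-excitation Bell state with a tunable
# relative phase phi. The two emission paths add up in one direction and
# cancel in the other.
import numpy as np

kd = np.pi / 2          # propagation phase between the two qubit positions (quarter wavelength)

def emission_probabilities(phi):
    """Relative probability of the photon leaving to the right vs. the left."""
    right = 1 + np.exp(1j * (phi - kd))   # the two paths for a right-moving photon
    left = 1 + np.exp(1j * (phi + kd))    # the two paths for a left-moving photon
    p_r, p_l = abs(right) ** 2, abs(left) ** 2
    total = p_r + p_l
    return p_r / total, p_l / total

for phi in (np.pi / 2, -np.pi / 2):
    p_r, p_l = emission_probabilities(phi)
    print(f"phase {phi:+.2f} rad -> right: {p_r:.2f}, left: {p_l:.2f}")
# phase +1.57 rad -> right: 1.00, left: 0.00
# phase -1.57 rad -> right: 0.00, left: 1.00
```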
    They can use this same technique, but in reverse, to receive the photon at another module.
    “The photon has a certain frequency, a certain energy, and you can prepare a module to receive it by tuning it to the same frequency. If they are not at the same frequency, then the photon will just pass by. It’s analogous to tuning a radio to a particular station. If we choose the right radio frequency, we’ll pick up the music transmitted at that frequency,” Almanakly says.
    The researchers found that their technique achieved more than 96 percent fidelity — this means that if they intended to emit a photon to the right, 96 percent of the time it went to the right.
    Now that they have used this technique to effectively emit photons in a specific direction, the researchers want to connect multiple modules and use the process to emit and absorb photons. This would be a major step toward the development of a modular architecture that combines many smaller-scale processors into one larger-scale, and more powerful, quantum processor.
    The research is funded, in part, by the AWS Center for Quantum Computing, the U.S. Army Research Office, the Department of Energy Office of Science National Quantum Information Science Research Centers, the Co-design Center for Quantum Advantage, and the Department of Defense.