More stories

  • New studies suggest social isolation is a risk factor for dementia in older adults, point to ways to reduce risk

    In two studies using nationally representative data on thousands of Americans from the National Health and Aging Trends Study, researchers from the Johns Hopkins University School of Medicine and Bloomberg School of Public Health have significantly added to evidence that social isolation is a substantial risk factor for dementia in community-dwelling (noninstitutionalized) older adults. The studies also identify technology as an effective way to intervene.
    Collectively, the studies do not establish a direct cause and effect between dementia and social isolation, defined as lack of social contact and interactions with people on a regular basis. But, the researchers say, the studies strengthen observations that such isolation increases the risk of dementia, and suggest that relatively simple efforts to increase social support of older adults — such as texting and use of email — may reduce that risk. In the United States, an estimated 1 in 4 people over age 65 experience social isolation, according to the National Institute on Aging.
    “Social connections matter for our cognitive health, and it is potentially easily modifiable for older adults without the use of medication,” says Thomas Cudjoe, M.D., M.P.H., assistant professor of medicine at the Johns Hopkins University School of Medicine and senior author of both of the new studies.
    The first study, described Jan. 11 in the Journal of the American Geriatrics Society, used data collected on a group of 5,022 Medicare beneficiaries for the long-term National Health and Aging Trends Study, which began in 2011. All participants were 65 or older and were asked to complete an annual two-hour, in-person interview to assess cognitive function, health status and overall well-being.
    At the initial interview, 23% of the 5,022 participants were socially isolated and showed no signs of dementia. However, by the end of this nine-year study, 21% of the total sample of participants had developed dementia. The researchers concluded that risk of developing dementia over nine years was 27% higher among socially isolated older adults compared with older adults who were not socially isolated.
    “Socially isolated older adults have smaller social networks, live alone and have limited participation in social activities,” says Alison Huang, Ph.D., M.P.H., senior research associate at the Johns Hopkins Bloomberg School of Public Health. “One possible explanation is that having fewer opportunities to socialize with others decreases cognitive engagement as well, potentially contributing to increased risk of dementia.”
    Interventions to reduce that risk are possible, according to results of the second study, published Dec. 15 in the Journal of the American Geriatrics Society. Specifically, researchers found the use of communications technology such as telephone and email lowered the risk for social isolation.
    Researchers for the second study used data from participants in the same National Health and Aging Trends Study, and found that more than 70% of people age 65 and older who were not socially isolated at their initial appointment had a working cellphone and/or computer, and regularly used email or text messaging to reach out to and respond to others. Over the four-year research period for this second study, older adults who had access to such technology consistently showed a 31% lower risk for social isolation than the rest of the cohort.
    “Basic communications technology is a great tool to combat social isolation,” says Mfon Umoh, M.D., Ph.D., postdoctoral fellow in geriatric medicine at the Johns Hopkins University School of Medicine. “This study shows that access and use of simple technologies are important factors that protect older adults against social isolation, which is associated with significant health risks. This is encouraging because it means simple interventions may be meaningful.”
    Social isolation has gained significant attention in the past decade, especially due to restrictions implemented for the COVID-19 pandemic, but more work needs to be done to identify at-risk populations and create tools for providers and caregivers to minimize risk, the researchers say. Future research in this area should focus on increased risks based on biological sex, physical limitations, race and income level.
    Other scientists who contributed to this research are Laura Prichett, Cynthia Boyd, David Roth, Tom Cidav, Shang-En Chung, Halima Amjad, and Roland Thorpe of the Johns Hopkins University School of Medicine and Bloomberg School of Public Health.
    This research was funded by the Caryl & George Bernstein Human Aging Project, the Johns Hopkins University Center for Innovative Medicine, the National Center for Advancing Translational Sciences, the National Institute on Aging, the Secunda Family Foundation, the Patient-Centered Care for Older Adults with Multiple Chronic Conditions, and the National Institute on Minority Health and Health Disparities.

  • Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.
    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.
    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.
    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.
    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.
    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears in the January-February issue of IEEE Micro.

    Modeling emissions
    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.
    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
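    As a rough sketch (not the researchers' actual code), that relationship can be written out directly. The 1-billion-vehicle fleet, 840-watt computer and one hour of daily driving are the article's figures; the grid carbon intensity below is an assumed placeholder:

        # Minimal sketch of the emissions model described above (not the
        # researchers' code): emissions = fleet size x computer power
        # x hours driven x carbon intensity of the electricity.
        def fleet_computing_emissions(n_vehicles, watts, hours_per_day,
                                      kg_co2_per_kwh):
            """Annual fleet-wide CO2 (kg) from onboard computing."""
            kwh_per_vehicle = watts / 1000 * hours_per_day * 365
            return n_vehicles * kwh_per_vehicle * kg_co2_per_kwh

        total = fleet_computing_emissions(
            n_vehicles=1_000_000_000,  # 1 billion vehicles (article figure)
            watts=840,                 # onboard computer power (article figure)
            hours_per_day=1,           # driving time per day (article figure)
            kg_co2_per_kwh=0.5,        # assumed average grid carbon intensity
        )
        print(f"{total / 1e9:.0f} billion kg of CO2 per year")  # ~153

    With these illustrative inputs the sketch lands near 150 billion kilograms of CO2 a year, the same order as current data center emissions, which is the comparison the study draws.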
    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.
    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.
    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.
    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
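    The arithmetic behind those figures is easy to reproduce if one assumes a 60-frames-per-second camera feed and that each of the 10 networks processes all 10 camera streams (the article states neither detail, so both are assumptions):

        # Back-of-envelope check of the inference counts quoted above.
        n_dnns = 10      # deep neural networks per vehicle
        n_cameras = 10   # camera streams per vehicle
        fps = 60         # assumed frame rate; not stated in the article
        seconds = 3600   # one hour of driving per day

        per_vehicle = n_dnns * n_cameras * fps * seconds
        print(f"{per_vehicle:,} inferences per vehicle per day")  # 21,600,000

        fleet = per_vehicle * 1_000_000_000
        print(f"{fleet:.2e} inferences per day fleet-wide")  # 2.16e+16, i.e. 21.6 quadrillion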
    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.
    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.
    Keeping emissions in check
    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.
    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.
    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.
    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.
    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.
    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.

  • Screen-printing method can make wearable electronics less expensive

    The glittering, serpentine structures that power wearable electronics can be created with the same technology used to print rock concert t-shirts, new research shows.
    The study, led by Washington State University researchers, demonstrates that electrodes can be made using just screen printing, creating a stretchable, durable circuit pattern that can be transferred to fabric and worn directly on human skin. Such wearable electronics can be used for health monitoring in hospitals or at home.
    “We wanted to make flexible, wearable electronics in a way that is much easier, more convenient and lower cost,” said corresponding author Jong-Hoon Kim, associate professor at the WSU Vancouver’s School of Engineering and Computer Science. “That’s why we focused on screen printing: it’s easy to use. It has a simple setup, and it is suitable for mass production.”
    Current commercial manufacturing of wearable electronics requires expensive processes involving clean rooms. While some manufacturers use screen printing for parts of the process, this new method relies wholly on screen printing, which has advantages for manufacturers and, ultimately, consumers.
    In the study, published in the journal ACS Applied Materials & Interfaces, Kim and his colleagues detail the electrode screen-printing process and demonstrate how the resulting electrodes can be used for electrocardiogram (ECG) monitoring.
    They used a multi-step process to layer polymer and metal inks, creating the electrode’s snake-like structures. While the resulting thin pattern appears delicate, the electrodes are not fragile: the study showed they could be stretched by 30% and bent to 180 degrees.
    Multiple electrodes are printed onto a pre-treated glass slide, which allows them to be easily peeled off and transferred onto fabric or other material. After printing the electrodes, the researchers transferred them onto an adhesive fabric that was then worn directly on the skin by volunteers. The wireless electrodes accurately recorded heart and respiratory rates, sending the data to a mobile phone.
    While this study focused on ECG monitoring, the screen-printing process can be used to create electrodes for a range of uses, including those that serve similar functions to smart watches or fitness trackers, Kim said.
    Kim’s lab is currently working on expanding this technology to print different electrodes as well as entire electronic chips and even potentially whole circuit boards.
    In addition to Kim, co-authors on the study include researchers from the Georgia Institute of Technology and Pukyong National University in South Korea, as well as others from WSU Vancouver. This research received support from the National Science Foundation.

  • Computer models determine drug candidate's ability to bind to proteins

    Combining computational physics with experimental data, University of Arkansas researchers have developed computer models for determining a drug candidate’s ability to target and bind to proteins within cells.
    If accurate, such an estimator could demonstrate binding affinity computationally, sparing experimental researchers from investigating millions of chemical compounds. The work could substantially reduce the cost and time associated with developing new drugs.
    “We developed a theoretical framework for estimating ligand-protein binding,” said Mahmoud Moradi, associate professor of chemistry and biochemistry in the Fulbright College of Arts and Sciences. “The proposed method assigns an effective energy to the ligand at every grid point in a coordinate system, which has its origin at the most likely location of the ligand when it is in its bound state.”
    A ligand is a substance — an ion or molecule — such as a drug that binds to another molecule, such as a protein, to form a complex system that may cause or prevent a biological function.
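    In textbook terms, an effective-energy map over the binding site yields a binding constant, and from it a binding free energy. This is generic formalism for orientation, not necessarily the exact estimator in the paper:

        % W(r): effective energy of the ligand at grid point r, relative to
        % the unbound (bulk) state; beta = 1/(k_B T); C^0: standard-state
        % concentration. Generic formalism, not the paper's exact estimator.
        \begin{equation}
          K_b = C^{0} \int_{\text{site}} e^{-\beta W(\mathbf{r})} \, d^{3}\mathbf{r},
          \qquad
          \Delta G^{0}_{\mathrm{bind}} = -k_B T \ln K_b
        \end{equation}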
    Moradi’s research focuses on computational simulations of diseases, including coronavirus. For this project, he collaborated with Suresh Thallapuranam, professor of biochemistry and the Cooper Chair of Bioinformatics Research.
    Moradi and Thallapuranam used biased simulations — as well as non-parametric re-weighting techniques to account for the bias — to create a binding estimator that was computationally efficient and accurate. They then used a mathematically robust technique called orientation quaternion formalism to further describe the ligand’s conformational changes as it bound to targeted proteins.
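    The reweighting step rests on a standard identity: samples drawn under a bias potential can be reweighted to recover unbiased averages. Again, this is the generic technique rather than the authors’ exact scheme:

        % Each sample x_i drawn under bias potential V_b is weighted by
        % exp(+beta V_b(x_i)), non-parametrically undoing the bias:
        \begin{equation}
          \langle A \rangle
          = \frac{\left\langle A \, e^{\beta V_b} \right\rangle_{b}}
                 {\left\langle e^{\beta V_b} \right\rangle_{b}}
          \approx \frac{\sum_i A(x_i) \, e^{\beta V_b(x_i)}}
                       {\sum_i e^{\beta V_b(x_i)}}
        \end{equation}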
    The researchers tested this approach by estimating the binding affinity between human fibroblast growth factor 1 — a specific signaling protein — and heparin hexasaccharide 5, a popular medication.
    The project was conceived because Moradi and Thallapuranam were studying human fibroblast growth factor 1 protein and its mutants in the absence and presence of heparin. They found strong qualitative agreement between simulations and experimental results.
    “When it came to binding affinity, we knew that the typical methods we had at our disposal would not work for such a difficult problem,” Moradi said. “This is why we decided to develop a new method. We had a joyous moment when the experimental and computational data were compared with each other, and the two numbers matched almost perfectly.”
    The researchers’ work was published in Nature Computational Science.
    Moradi previously received attention for developing computational simulations of the behavior of SARS-CoV-2 spike proteins prior to fusion with human cell receptors. SARS-CoV-2 is the virus that causes COVID-19.

  • Now on the molecular scale: Electric motors

    Electric vehicles, powered by macroscopic electric motors, are increasingly prevalent on our streets and highways. These quiet and eco-friendly machines got their start nearly 200 years ago when physicists took the first tiny steps to bring electric motors into the world.
    Now a multidisciplinary team led by Northwestern University has made an electric motor you can’t see with the naked eye: an electric motor on the molecular scale.
    This early work — a motor that can convert electrical energy into unidirectional motion at the molecular level — has implications for materials science and particularly medicine, where the electric molecular motor could team up with biomolecular motors in the human body.
    “We have taken molecular nanotechnology to another level,” said Northwestern’s Sir Fraser Stoddart, who received the 2016 Nobel Prize in Chemistry for his work in the design and synthesis of molecular machines. “This elegant chemistry uses electrons to effectively drive a molecular motor, much like a macroscopic motor. While this area of chemistry is in its infancy, I predict one day these tiny motors will make a huge difference in medicine.”
    Stoddart, Board of Trustees Professor of Chemistry at the Weinberg College of Arts and Sciences, is a co-corresponding author of the study. The research was done in close collaboration with Dean Astumian, a molecular machine theorist and professor at the University of Maine, and William Goddard, a computational chemist and professor at the California Institute of Technology. Long Zhang, a postdoctoral fellow in Stoddart’s lab, is the paper’s first author and a co-corresponding author.
    Only 2 nanometers wide, the molecular motor is the first of its kind to be produced in abundance. The motor is easy to make, operates quickly and does not produce any waste products.

    The study and a corresponding news brief were published today (Jan. 11) by the journal Nature.
    The research team focused on a certain type of molecule with interlocking rings known as catenanes held together by powerful mechanical bonds, so the components could move freely relative to each other without falling apart. (Stoddart decades ago played a key role in the creation of the mechanical bond, a new type of chemical bond that has led to the development of molecular machines.)
    The electric molecular motor is based on a [3]catenane whose components ― a loop interlocked with two identical rings ― are redox-active, i.e., they undergo unidirectional motion in response to changes in voltage potential. The researchers discovered that two rings are needed to achieve this unidirectional motion. Experiments showed that a [2]catenane, which has one loop interlocked with one ring, does not run as a motor.
    The synthesis and operation of molecules that perform the function of a motor ― converting external energy into directional motion ― has challenged scientists in the fields of chemistry, physics and molecular nanotechnology for some time.
    To achieve their breakthrough, Stoddart, Zhang and their Northwestern team spent more than four years on the design and synthesis of their electric molecular motor. This included a year working with UMaine’s Astumian and Caltech’s Goddard to complete the quantum mechanical calculations to explain the working mechanism behind the motor.

    “Controlling the relative movement of components on a molecular scale is a formidable challenge, so collaboration was crucial,” Zhang said. “Working with experts in synthesis, measurements, computational chemistry and theory enabled us to develop an electric molecular motor that works in solution.”
    A few examples of single-molecule electric motors have been reported, but they require harsh operating conditions, such as the use of an ultrahigh vacuum, and also produce waste.
    The next steps for their electric molecular motor, the researchers said, is to attach many of the motors to an electrode surface to influence the surface and ultimately do some useful work.
    “The achievement we report today is a testament to the creativity and productivity of our young scientists as well as their willingness to take risks,” Stoddart said. “This work gives me and the team enormous satisfaction.”
    Stoddart is a member of the International Institute for Nanotechnology and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

  • Rare earth mining may be key to our renewable energy future. But at what cost?

    In spring 1949, three prospectors armed with Geiger counters set out to hunt for treasure in the arid mountains of southern Nevada and southeastern California.

    In the previous century, those mountains yielded gold, silver, copper and cobalt. But the men were looking for a different kind of treasure: uranium. The world was emerging from World War II and careening into the Cold War. The United States needed uranium to build its nuclear weapons arsenal. Mining homegrown sources became a matter of national security.

    After weeks of searching, the trio hit what they thought was pay dirt. Their instruments detected intense radioactivity in brownish-red veins of ore exposed in a rocky outcrop within California’s Clark Mountain Range. But instead of uranium, the brownish-red stuff turned out to be bastnaesite, a mineral bearing fluorine, carbon and 17 curious elements known collectively as rare earths. Traces of radioactive thorium, also in the ore, had set the Geiger counters pinging.

    As disappointing as that must have been, the bastnaesite still held value, and the prospectors sold their claim to the Molybdenum Corporation of America, later called Molycorp. The company was interested in mining the rare earths. During the mid-20th century, rare earth elements were becoming useful in a variety of ways: Cerium, for example, was the basis for a glass-polishing powder and europium lent luminescence to recently invented color television screens and fluorescent lamps.

    For the next few decades, the site, later dubbed Mountain Pass mine, was the world’s top source for rare earth elements, until two pressures became too much. By the late 1980s, China was intensively mining its own rare earths — and selling them at lower prices. And a series of toxic waste spills at Mountain Pass brought production at the struggling mine to a halt in 2002.

    But that wasn’t the end of the story. The green-tech revolution of the 21st century brought new attention to Mountain Pass, which later reopened and remains the only U.S. mine for rare earths.

    Rare earths are now integral to the manufacture of many carbon-neutral technologies — plus a whole host of tools that move the modern world. These elements are the building blocks of small, super­efficient permanent magnets that keep smartphones buzzing, wind turbines spinning, electric vehicles zooming and more.

    Mining U.S. sources of rare earth elements, President Joe Biden’s administration stated in February 2021, is a matter of national security.

    Rare earths are not actually rare on Earth, but they tend to be scattered throughout the crust at low concentrations. And the ore alone is worth relatively little without the complex, often environmentally hazardous processing involved in converting the ore into a usable form, says Julie Klinger, a geographer at the University of Delaware in Newark. As a result, the rare earth mining industry is wrestling with a legacy of environmental problems.

    Rare earths are mined by digging vast open pits in the ground, which can contaminate the environment and disrupt ecosystems. When poorly regulated, mining can produce wastewater ponds filled with acids, heavy metals and radioactive material that might leak into groundwater. Processing the raw ore into a form useful to make magnets and other tech is a lengthy effort that takes large amounts of water and potentially toxic chemicals, and produces voluminous waste.

    “We need rare earth elements … to help us with the transition to a climate-safe future,” says Michele Bustamante, a sustainability researcher at the Natural Resources Defense Council in Washington, D.C. Yet “everything that we do when we’re mining is impactful environmentally,” Bustamante says.

    But there are ways to reduce mining’s footprint, says Thomas Lograsso, a metallurgist at the Ames National Laboratory in Iowa and the director of the Critical Materials Institute, a Department of Energy research center. Researchers are investigating everything from reducing the amount of waste produced during the ore processing to improving the efficiency of rare earth element separation, which can also cut down on the amount of toxic waste. Scientists are also testing alternatives to mining, such as recycling rare earths from old electronics or recovering them from coal waste.

    Much of this research is in partnership with the mining industry, whose buy-in is key, Lograsso says. Mining companies have to be willing to invest in making changes. “We want to make sure that the science and innovations that we do are driven by industry needs, so that we’re not here developing solutions that nobody really wants,” he says.

    Klinger says she’s cautiously optimistic that the rare earth mining industry can become less polluting and more sustainable, if such solutions are widely adopted. “A lot of gains come from the low-hanging fruit,” she says. Even basic hardware upgrades to improve insulation can reduce the fuel required to reach the high temperatures needed for some processing. “You do what you [can].”

    The environmental impact of rare earth mining

    Between the jagged peaks of California’s Clark range and the Nevada border sits a broad, flat, shimmering valley known as the Ivanpah Dry Lake. Some 8,000 years ago, the valley held water year-round. Today, like many such playas in the Mojave Desert, the lake is ephemeral, winking into appearance only after an intense rain and flash flooding. It’s a beautiful, stark place, home to endangered desert tortoises and rare desert plants like Mojave milkweed.

    From about 1984 to 1998, the Ivanpah Dry Lake was also a holding pen for wastewater piped in from Mountain Pass. The wastewater was a by-product of chemical processing to concentrate the rare earth elements in the mined rock, making it more marketable to companies that could then extract those elements to make specific products. Via a buried pipeline, the mine sent wastewater to evaporation ponds about 23 kilometers away, in and around the dry lake bed.

    The pipeline repeatedly ruptured over the years. At least 60 separate spills dumped an estimated 2,000 metric tons of wastewater containing radioactive thorium into the valley. Federal officials feared that local residents and visitors to the nearby Mojave National Preserve might be at risk of exposure to that thorium, which could lead to increased risk of lung, pancreatic and other cancers.

    Unocal Corporation, which had acquired Molycorp in 1977, was ordered to clean up the spill in 1997, and the company paid over $1.4 million in fines and settlements. Chemical processing of the raw ore ground to a halt. Mining operations stopped shortly afterward.

    Half a world away, another environmental disaster was unfolding. The vast majority — between 80 and 90 percent — of rare earth elements on the market since the 1990s have come from China. One site alone, the massive Bayan Obo mine in Inner Mongolia, accounted for 45 percent of rare earth production in 2019.

    Bayan Obo spans some 4,800 hectares, about half the size of Florida’s Walt Disney World resort. It is also one of the most heavily polluted places on Earth. Clearing the land to dig for ore meant removing vegetation in an area already prone to desertification, allowing the Gobi Desert to creep southward.

    In 2010, officials in the nearby city of Baotou noted that radioactive, arsenic- and fluorine-containing mine waste, or tailings, was being dumped on farmland and into local water supplies, as well as into the nearby Yellow River. The air was polluted by fumes and toxic dust that reduced visibility. Residents complained of nausea, dizziness, migraines and arthritis. Some had skin lesions and discolored teeth, signs of prolonged exposure to arsenic; others exhibited signs of brittle bones, indications of skeletal fluorosis, Klinger says.

    The country’s rare earth industry was causing “severe damage to the ecological environment,” China’s State Council wrote in 2010. The release of heavy metals and other pollutants during mining led to “the destruction of vegetation and pollution of surface water, groundwater and farmland.” The “excessive rare earth mining,” the council wrote, led to landslides and clogged rivers.

    Faced with these mounting environmental disasters, as well as fears that it was depleting its rare earth resources too rapidly, China slashed its export of the elements in 2010 by 40 percent. The new limits sent prices soaring and kicked off concern around the globe that China had too tight of a stranglehold on these must-have elements. That, in turn, sparked investment in rare earth mining elsewhere.

    In 2010, there were few other places mining rare earths, with only minimal production from India, Brazil and Malaysia. A new mine in remote Western Australia came online in 2011, owned by mining company Lynas. The company dug into fossilized lava preserved within an ancient volcano called Mount Weld.

    Mount Weld didn’t have anywhere near the same sort of environmental impact seen in China: Its location was too remote and the mine was just a fraction of the size of Bayan Obo, according to Saleem Ali, an environmental planner at the University of Delaware. The United States, meanwhile, was eager to once again have its own source of rare earths — and Mountain Pass was still the best prospect.

    [Image: The Bayan Obo mine in China’s Inner Mongolia region was responsible for nearly half of the world’s rare earth production in 2019. Mining there has taken a heavy toll on local residents and the environment. Credit: Wu Changqing/VCG via Getty Images]

    Mountain Pass mine gets revived

    After the Ivanpah Dry Lake mess, the Mountain Pass mine changed hands again. Chevron purchased it in 2005, but did not resume operations. Then, in 2008, a newly formed company called Molycorp Minerals purchased the mine with ambitious plans to create a complete rare earth supply chain in the United States.

    The goal was not just mining and processing ore, but also separating out the desirable elements and even manufacturing them into magnets. Currently, the separations and magnet manufacturing are done overseas, mostly in China. The company also proposed a plan to avoid spilling wastewater into nearby fragile habitats. Molycorp resumed mining, and introduced a “dry tailings” process — a method to squeeze 85 percent of the water out of its mine waste, forming a thick paste. The company would then store the immobilized, pasty residue in lined pits on its own land and recycle the water back into the facility.

    Unfortunately, Molycorp “was an epic debacle” from a business perspective, says Matt Sloustcher, senior vice president of communications and policy at MP Materials, current owner of Mountain Pass mine. Mismanagement ultimately led Molycorp to file for Chapter 11 bankruptcy in 2015. MP Materials bought the mine in 2017 and resumed mining later that year. By 2022, Mountain Pass mine was producing 15 percent of the world’s rare earths.

    MP Materials, too, has an ambitious agenda with plans to create a complete supply chain. And the company is determined not to repeat the mistakes of its predecessors. “We have a world-class … unbelievable deposit, an untapped potential,” says Michael Rosenthal, MP Materials’ chief operating officer. “We want to support a robust and diverse U.S. supply chain, be the magnetics champion in the U.S.”

    The challenges of separating rare earths

    On a hot morning in August, Sloustcher stands at the edge of the Mountain Pass mine, a giant hole in the ground, 800 meters across and up to 183 meters deep, big enough to be visible from space. It’s an impressive sight, and a good vantage point from which to describe a vision for the future. He points out the various buildings: where the ore is crushed and ground, where the ground rocks are chemically treated to slough off as much non–rare earth material as possible, and where the water is squeezed from that waste and the waste is placed into lined ponds.

    The end result is a highly concentrated rare earth oxide ore — still nowhere near the magnet-making stage. But the company has a three-stage plan “to restore the full rare earth supply to the United States,” from “mine to magnet,” Rosenthal says. Stage 1, begun in 2017, was to restart mining, crushing and concentrating the ore. Stage 2 will culminate in the chemical separation of the rare earth elements. And stage 3 will be magnet production, he says.

    Since coming online in 2017, MP Materials has shipped its concentrated ore to China for the next steps, including the arduous, hazardous process of separating the elements from one another. But in November, the company announced to investors that it had begun the preliminary steps for stage 2, a “major milestone” on the way to realizing its mine-to-magnet ambitions.

    With investments from the U.S. Department of Defense, the company is building two separations facilities. One plant will pull out lighter rare earth elements — those with smaller atomic numbers, including neodymium and praseodymium, both of which are key ingredients in the permanent magnets that power electric vehicles and many consumer electronics. MP Materials has additional grant money from the DOD to design and build a second processing plant to split apart the heavier rare earth elements such as dysprosium, also an ingredient in magnets, and yttrium, used to make superconductors and lasers.

    Like stage 2, stage 3 is already under way. In 2022, the company broke ground in Fort Worth, Texas, for a facility to produce neodymium magnets. And it inked a deal with General Motors to supply those magnets for electric vehicle motors.

    But separating the elements comes with its own set of environmental concerns.

    The process is difficult and leads to lots of waste. Rare earth elements are extremely similar chemically, which means they tend to stick together. Forcing them apart requires multiple sequential steps and a variety of powerful solvents to separate them one by one. Caustic sodium hydroxide causes cerium to drop out of the mix, for example. Other steps involve solutions containing organic molecules called ligands, which have a powerful thirst for metal atoms. The ligands can selectively bind to particular rare earth elements and pull them out of the mix.

    But one of the biggest issues plaguing this extraction process is its inefficiency, says Santa Jansone-Popova, an organic chemist at Oak Ridge National Laboratory in Tennessee. The scavenging of these metals is slow and imperfect, and companies have to go through a lot of extraction steps to get a sufficiently marketable amount of the elements. With the current chemical methods, “you need many, many, many stages in order to achieve the desired separation,” Jansone-Popova says. That makes the whole process “more complex, more expensive, and [it] produces more waste.”
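    A toy calculation shows why the per-stage chemistry matters so much. If each stage multiplies the abundance ratio of two neighboring elements by a separation factor alpha, purity compounds geometrically, and the required stage count falls steeply as alpha improves (the alpha values below are illustrative assumptions, not industry figures):

        import math

        # Toy staged-extraction model: each stage multiplies the abundance
        # ratio of two adjacent rare earths by a separation factor alpha.
        # Turning a 1:1 feed into purity p requires alpha**n >= p / (1 - p).
        def stages_needed(alpha, purity=0.999):
            return math.ceil(math.log(purity / (1 - purity)) / math.log(alpha))

        for alpha in (1.5, 2.5, 10.0):  # illustrative separation factors
            print(f"alpha = {alpha:>4}: {stages_needed(alpha)} stages")
        # alpha = 1.5: 18 stages; alpha = 2.5: 8 stages; alpha = 10.0: 3 stages.
        # Fewer stages means less solvent, less energy and less waste.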

    Under the aegis of the DOE’s Critical Materials Institute, Jansone-Popova and her colleagues have been hunting for a way to make the process more efficient, eliminating many of those steps. In 2022, the researchers identified a ligand that they say is much more efficient at snagging certain rare earths than the ligands now used in the industry. Industry partners are on board to try out the new process this year, she says.

    In addition to concerns about heavy metals and other toxic materials in the waste, there are lingering worries about the potential impacts of radioactivity on human health. The trouble is that there is still only limited epidemiological evidence of the impact of rare earth mining on human and environmental health, according to Ali, and much of that evidence is related to the toxicity of heavy metals such as arsenic. It’s also not clear, he says, how much of the concerns over radioactive waste are scientifically supported, due to the low concentration of radioactive elements in mined rare earths.

    Such concerns get international attention, however. In 2019, protests erupted in Malaysia over what activists called “a mountain of toxic waste,” about 1.5 million metric tons, produced by a rare earth separation facility near the Malaysian city of Kuantan. The facility is owned by Lynas, which ships its rare earth ore from Australia’s Mount Weld to the site. To dissolve the rare earths, the ore is cooked with sulfuric acid and then diluted with water. The residue that’s left behind can contain traces of radioactive thorium.

    [Image: Australian company Lynas built a plant near Kuantan, Malaysia, shown here in 2012, to separate and process the rare earth oxide ore mined at Mount Weld in Western Australia. Local protests erupted in 2019 over how the company disposes of its thorium-laced waste. Credit: Goh Seng Chong/Bloomberg via Getty Images]

    Lynas had no permanent storage for the waste, piling it up in hills near Kuantan instead. But the alarm over the potential radioactivity in those hills may be exaggerated, experts say. Lynas reports that workers at the site are exposed to less than 1.05 millisieverts per year, far below the radiation exposure threshold for workers of 20 millisieverts set by the International Atomic Energy Agency.

    “There’s a lot of misinformation about by­products such as thorium.… The thorium from rare earth processing is actually very low-level radiation,” Ali says. “As someone who has been a committed environmentalist, I feel right now that there’s not much science-based decision making on these things.”

    Given the concerns over new mining, environmental think tanks like the World Resources Institute have been calling for more recycling of existing rare earth materials to reduce the need for new mining and processing.

    “The path to the future has to do with getting the most out of what we take out of the ground,” says Bustamante, of the NRDC. “Ultimately the biggest lever for change is not in the mining itself, but in the manufacturing, and what we do with those materials at the end of life.”

    That means using mined resources as efficiently as possible, but also recycling rare earths out of already existing materials. Getting more out of these materials can reduce the overall environmental impacts of the mining itself, she adds.

    That is a worthwhile goal, but recycling isn’t a silver bullet, Ali says. For one thing, there aren’t enough spent rare earth–laden batteries and other materials available at the moment for recycling. “Some mining will be necessary, [because] right now we don’t have the stock.” And that supply problem, he adds, will only grow as demand increases.

  • Project aims to expand language technologies

    Only a fraction of the 7,000 to 8,000 languages spoken around the world benefit from modern language technologies like voice-to-text transcription, automatic captioning, instantaneous translation and voice recognition. Carnegie Mellon University researchers want to expand the number of languages with automatic speech recognition tools available to them from around 200 to potentially 2,000.
    “A lot of people in this world speak diverse languages, but language technology tools aren’t being developed for all of them,” said Xinjian Li, a Ph.D. student in the School of Computer Science’s Language Technologies Institute (LTI). “Developing technology and a good language model for all people is one of the goals of this research.”
    Li is part of a research team aiming to simplify the data requirements languages need to create a speech recognition model. The team — which also includes LTI faculty members Shinji Watanabe, Florian Metze, David Mortensen and Alan Black — presented their most recent work, “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio,” at Interspeech 2022 in South Korea.
    Most speech recognition models require two data sets: text and audio. Text data exists for thousands of languages. Audio data does not. The team hopes to eliminate the need for audio data by focusing on linguistic elements common across many languages.
    Historically, speech recognition technologies have focused on a language’s phonemes. These distinct sounds that distinguish one word from another — like the “d” that differentiates “dog” from “log” and “cog” — are unique to each language. But languages also have phones, which describe how a word actually sounds; multiple phones might correspond to a single phoneme. So even though separate languages may have different phonemes, their underlying phones could be the same.
    The LTI team is developing a speech recognition model that moves away from phonemes and instead relies on information about how phones are shared between languages, thereby reducing the effort to build separate models for each language. Specifically, it pairs the model with a phylogenetic tree — a diagram that maps the relationships between languages — to help with pronunciation rules. Through their model and the tree structure, the team can approximate the speech model for thousands of languages without audio data.
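    As a toy illustration of that idea (the languages, phones and tree below are invented for demonstration, not taken from the ASR2K paper), a phylogenetic tree can tell a zero-audio language which audio-backed relative’s phone model to borrow:

        # Toy sketch of the approach described above; all data here is
        # invented for demonstration, not taken from the paper.

        # Phoneme inventories differ per language but map onto shared phones.
        PHONEME_TO_PHONES = {
            "spanish":    {"r": ["r"], "b": ["b", "β"]},  # one phoneme, two phones
            "portuguese": {"R": ["ʁ", "x"], "b": ["b"]},
        }

        HAS_AUDIO_MODEL = {"spanish"}  # languages with audio-trained phone models

        # Fragment of a phylogenetic tree stored as parent pointers.
        PARENT = {"spanish": "romance", "portuguese": "romance",
                  "romance": "indo-european"}

        def ancestors(lang):
            """Chain of ancestor groups for a language, nearest first."""
            chain = []
            while lang in PARENT:
                lang = PARENT[lang]
                chain.append(lang)
            return chain

        def borrow_phone_model(target):
            """Pick the audio-backed language with the closest shared ancestor."""
            target_anc = ancestors(target)
            best, best_rank = None, len(target_anc) + 1
            for lang in HAS_AUDIO_MODEL:
                shared = [a for a in ancestors(lang) if a in target_anc]
                if shared and target_anc.index(shared[0]) < best_rank:
                    best, best_rank = lang, target_anc.index(shared[0])
            return best

        print(borrow_phone_model("portuguese"))  # -> spanish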
    “We are trying to remove this audio data requirement, which helps us move from 100 or 200 languages to 2,000,” Li said. “This is the first research to target such a large number of languages, and we’re the first team aiming to expand language tools to this scope.”
    Still in an early stage, the research has improved existing language approximation tools by a modest 5%, but the team hopes it will serve as inspiration not only for their future work but also for that of other researchers.
    For Li, the work means more than making language technologies available to all. It’s about cultural preservation.
    “Each language is a very important factor in its culture. Each language has its own story, and if you don’t try to preserve languages, those stories might be lost,” Li said. “Developing this kind of speech recognition system and this tool is a step to try to preserve those languages.”
    Story Source: Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • The optical fiber that keeps data safe even after being twisted or bent

    Optical fibres are the backbone of our modern information networks. From long-range communication over the internet to high-speed information transfer within data centres and stock exchanges, optical fibre remains critical in our globalised world.
    Fibre networks are not, however, structurally perfect, and information transfer can be compromised when things go wrong. To address this problem, physicists at the University of Bath in the UK have developed a new kind of fibre designed to enhance the robustness of networks. This robustness could prove especially important in the coming age of quantum networks.
    The team has fabricated optical fibres (the flexible glass channels through which information is sent) that can protect light (the medium through which data is transmitted) using the mathematics of topology. Best of all, these modified fibres are easily scalable, meaning the structure of each fibre can be preserved over thousands of kilometres.
    The Bath study is published in the latest issue of Science Advances.
    Protecting light against disorder
    At its simplest, optical fibre, which typically has a diameter of 125 µm (similar to a thick strand of hair), comprises a core of solid glass surrounded by cladding. Light travels through the core, where it bounces along as though reflecting off a mirror.

    However, the pathway taken by an optical fibre as it crisscrosses the landscape is rarely straight and undisturbed: turns, loops, and bends are the norm. Distortions in the fibre can cause information to degrade as it moves between sender and receiver. “The challenge was to build a network that takes robustness into account,” said physics PhD student Nathan Roberts, who led the research.
    “Whenever you fabricate a fibre-optic cable, small variations in the physical structure of the fibre are inevitably present. When deployed in a network, the fibre can also get twisted and bent. One way to counter these variations and defects is to ensure the fibre design process includes a real focus on robustness. This is where we found the ideas of topology useful.”
    To design this new fibre, the Bath team used topology, which is the mathematical study of quantities that remain unchanged despite continuous distortions to the geometry. Its principles are already applied to many areas of physics research. By connecting physical phenomena to unchanging numbers, the destructive effects of a disordered environment can be avoided.
    The fibre designed by the Bath team deploys topological ideas by including several light-guiding cores in a fibre, linked together in a spiral. Light can hop between these cores but becomes trapped at the edge of the structure thanks to the topological design. These edge states are protected against disorder in the structure.
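    A toy calculation in the same spirit (a generic SSH-style tight-binding chain of coupled cores, not the Bath team’s actual spiral geometry) shows the key behaviour: with alternating strong and weak coupling, a near-zero-energy mode stays pinned to the outermost cores even when the couplings are mildly disordered:

        import numpy as np

        # Generic SSH-style chain of coupled cores; an illustration of
        # topological edge protection, not the actual fibre design.
        rng = np.random.default_rng(0)
        n_cores = 20
        t_weak, t_strong = 0.3, 1.0

        # Alternating weak/strong couplings put the chain in a topological
        # phase; add mild random disorder to every coupling.
        couplings = np.array([t_weak if i % 2 == 0 else t_strong
                              for i in range(n_cores - 1)])
        couplings += 0.05 * rng.standard_normal(n_cores - 1)

        H = np.diag(couplings, 1) + np.diag(couplings, -1)  # tight-binding Hamiltonian
        energies, modes = np.linalg.eigh(H)

        # The mode nearest zero energy is the protected edge state; most of
        # its intensity sits on the outermost cores despite the disorder.
        edge = modes[:, np.argmin(np.abs(energies))]
        print("intensity on outer four cores:",
              round(float(np.sum(edge[:2]**2) + np.sum(edge[-2:]**2)), 3))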
    Bath physicist Dr Anton Souslov, who co-authored the study as theory lead, said: “Using our fibre, light is less influenced by environmental disorder than it would be in an equivalent system lacking topological design.

    “By adopting optical fibres with topological design, researchers will have the tools to pre-empt and forestall signal-degrading effects by building inherently robust photonic systems.”
    Theory meets practical expertise
    Bath physicist Dr Peter Mosley, who co-authored the study as experimental lead, said: “Previously, scientists have applied the complex mathematics of topology to light, but here at the University of Bath we have lots of experience physically making optical fibres, so we put the mathematics together with our expertise to create topological fibre.”
    The team, which also includes PhD student Guido Baardink and Dr Josh Nunn from the Department of Physics, is now looking for industry partners to develop the concept further.
    “We are really keen to help people build robust communication networks and we are ready for the next phase of this work,” said Dr Souslov.
    Mr Roberts added: “We have shown that you can make kilometres of topological fibre wound around a spool. We envision a quantum internet where information will be transmitted robustly across continents using topological principles.”
    He also pointed out that this research has implications that go beyond communications networks. He said: “Fibre development is not only a technological challenge, but also an exciting scientific field in its own right.
    “Understanding how to engineer optical fibre has led to light sources from bright ‘supercontinuum’ that spans the entire visible spectrum right down to quantum light sources that produce individual photons — single particles of light.”
    The future is quantum
    Quantum networks are widely expected to play an important technological role in years to come. Quantum technologies have the capacity to store and process information in more powerful ways than ‘classical’ computers can today, as well as sending messages securely across global networks without any chance of eavesdropping.
    But the quantum states of light that transmit information are easily impacted by their environment and finding a way to protect them is a major challenge. This work may be a step towards maintaining quantum information in fibre optics using topological design.