More stories

  • Capturing methane from the air would slow global warming. Can it be done?

    This summer was the hottest ever recorded on Earth, and 2023 is on track to be the hottest year. Heat waves threatened people’s health across North America, Europe and Asia. Canada had its worst wildfire season ever, and flames devastated the city of Lahaina in Maui. Los Angeles was pounded by an unheard-of summer tropical storm while rains in Libya caused devastating floods that left thousands dead and missing. This extreme weather is a warning sign that we are living in a climate crisis, and a call to action.

    Carbon dioxide emissions from burning fossil fuels are the main culprit behind climate change, and scientists say they must be reined in. But there’s another greenhouse gas to deal with: methane. Tackling methane may be the best bet for putting the brakes on rising temperatures in the short term, says Rob Jackson, an Earth systems scientist at Stanford University and chair of the Global Carbon Project, which tracks greenhouse gas emissions. “Methane is the strongest lever we have to slow global warming over the next few decades.”

    That’s because it’s relatively short-lived in the atmosphere — methane lasts about 12 years, while CO2 can stick around for hundreds of years. And on a molecule-per-molecule basis, methane is more potent. Over the 20-year period after it’s emitted, methane can warm the atmosphere more than 80 times as much as an equivalent amount of CO2.
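
    A rough way to see what those two numbers mean together is a quick calculation. The sketch below (plain Python, assuming simple exponential decay with the 12-year lifetime, which is an approximation) estimates how much of a one-ton methane pulse remains after a few decades and what that pulse is worth in CO2-equivalent terms over its first 20 years.

        import math

        # Figures cited in the article: methane persists roughly 12 years in the
        # atmosphere, and over 20 years it warms about 80 times as much as an
        # equal mass of CO2 (its 20-year global warming potential).
        METHANE_LIFETIME_YEARS = 12
        GWP_20 = 80

        def methane_remaining(tons, years):
            """Tons of a methane pulse left after `years`, assuming exponential decay."""
            return tons * math.exp(-years / METHANE_LIFETIME_YEARS)

        pulse = 1.0  # one ton of methane emitted today
        for t in (10, 20, 50):
            print(f"after {t:2d} years: {methane_remaining(pulse, t):.2f} tons remain")

        print(f"20-year CO2-equivalent of the pulse: about {pulse * GWP_20:.0f} tons of CO2")

    Most of a methane pulse is gone within a few decades, while CO2, as noted above, can linger for centuries; that asymmetry is what makes methane cuts pay off so quickly.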

    We already have strategies for cutting methane emissions — fixing natural gas leaks (methane is the main component of natural gas), phasing out coal (mining operations release methane), eating less meat and dairy (cows burp up lots of methane) and electrifying transportation and appliances. Implementing all existing methane-mitigation strategies could slow global warming by 30 percent over the next decade, research has shown.

    But some climate scientists, including Jackson, say we need to go further. Several methane sources will be difficult, if not impossible, to eliminate. That includes some human-caused emissions, such as those produced by rice paddies and cattle farming — though practices do exist to reduce these emissions (SN: 11/28/15, p. 22). Some natural sources are poised to release more methane as the world warms. There are signs that tropical wetlands are already releasing more of the gas into the atmosphere, and rapid warming in the Arctic could turn permafrost into a hot spot for methane-making microbes and release a bomb of methane stored in the currently frozen soil.

    So scientists want to develop ways to remove methane directly from the air.

    Three billion metric tons more methane exist in the atmosphere today than in preindustrial times. Removing that excess methane would cool the planet by 0.5 degrees Celsius, Jackson says.

    Similar “negative emissions” strategies are already in limited use for CO2. That gas is captured where it’s emitted, or directly from the air, and then stored somewhere. Methane, however, is a tricky molecule to capture, meaning scientists need different approaches.

    Most ideas are still in early research stages. The National Academies of Sciences, Engineering and Medicine is currently studying these potential technologies, their state of readiness and possible risks, and what further research and funding are needed. Some of the approaches include re-engineering bacteria that are already pros at eating methane and developing catalytic reactors to place in coal-mine vents and other methane-rich places to chemically transform the gas.

    “Methane is a sprint and CO2 is a marathon,” says Desirée Plata, a civil and environmental engineer at MIT. For scientists focused on removing greenhouse gases, it’s off to the races.

    Microbes already remove methane from the air

    Methane, CH4, is readily broken down in the atmosphere, where sunshine and highly reactive hydroxyl radicals are abundant. But it’s a different story when chemists try to work with the molecule. Methane’s four carbon-hydrogen bonds are strong and stable. Currently, chemists must expose the gas to extremely high temperatures and pressures to break it down.

    Even getting hold of the gas is difficult. Despite its potent warming power, it’s present in low concentrations in the atmosphere. Only 2 out of every 1 million air molecules are methane (by comparison, about 400 of every 1 million air molecules are CO2). So it’s challenging to grab enough methane to store it or efficiently convert it into something else.

    Nature’s chemists, however, can take up and transform methane even in these challenging conditions. These microbes, called methanotrophs, use enzymes to eat methane. The natural global uptake of methane by methanotrophs living in soil is about 30 million metric tons per year. Compare that with the roughly 350 million tons of methane that human activities pumped into the atmosphere in 2022, according to the International Energy Agency.

    Microbiologists want to know whether it’s possible to get these bacteria to take up more methane more quickly.

    Lisa Stein, a microbiologist at the University of Alberta in Edmonton, Canada, studies the genetics and physiology of these microbes. “We do basic research to understand how they thrive in different environments,” she says.

    Methanotrophs work especially slowly in low-oxygen environments, Stein says, like wetland muck and landfills, the kinds of places where methane is plentiful. In these environments, microbes that make methane, called methanogens, generate the gas faster than methanotrophs can gobble it up.

    But it might be possible to develop soil amendments and other ecosystem modifications to speed microbial methane uptake, Stein says. She’s also talking with materials scientists about engineering a surface to encourage methanotrophs to grow faster and thus speed up their methane consumption.

    Scientists hope to get around this speed bump with a more detailed understanding of the enzyme that helps many methanotrophs feast on methane. Methane monooxygenase, or MMO, grabs the molecule and, with the help of copper embedded in the enzyme, uses oxygen to break methane’s carbon-hydrogen bonds. The enzyme ultimately produces methanol that the microbes then metabolize.

    Boosting MMO’s speed could not only help with methane removal but also allow engineers to put methanotrophs to work in industrial systems. Turning methane into methanol would be the first step, followed by several faster reactions, to make an end product like plastic or fuel.

    Some bacteria, including Methylococcus capsulatus (shown), naturally break down methane with the enzyme methane monooxygenase. By studying the enzyme’s structure, scientists hope to speed up bacteria’s uptake of the greenhouse gas. Anne Fjellbirkeland/Wikimedia Commons (CC BY 2.5)

    “Methane monooxygenases are not superfast enzymes,” says Amy Rosenzweig, a chemist at Northwestern University in Evanston, Ill. Any reaction involving MMO will impose a speed limit on the proceedings. “That is the key step, and unless you understand it, it’s going to be very difficult to make an engineered organism do what you want,” Rosenzweig says.

    Enzymes are often shaped to fit their reactants — in this case, methane — like a glove. So having a clear view of MMO’s physical structure could help researchers tweak the enzyme’s actions. MMO is embedded in a lipid membrane in the cell. To image it, structural biologists have typically started by using detergents to remove the lipids, which inactivates the enzyme and results in an incomplete picture of it and its activity. But Rosenzweig and colleagues recently managed to image the enzyme within its native lipid environment. This unprecedented view of MMO, published in 2022 in Science, revealed a previously unseen site where copper binds.

    But that’s still not the entire picture. Rosenzweig says she hopes her structural studies, along with other work, will lead to a breakthrough soon enough to help forestall further consequences of global warming. “Maybe people get lucky and engineer a strain quickly,” Rosenzweig says. “You don’t know until you try.”

    Chemists make progress on catalysts

    Other scientists seek to put methane-destroying chemical reactors close to methane sources. These reactors typically use a catalyst to speed up the chemical reactions that convert methane into a less planet-warming molecule. These catalysts often require high temperatures or other stringent conditions to operate, contain expensive metals like platinum, and don’t work well at the concentrations of methane found in ambient air.

    One promising place to start, though, is coal mines. Coal mining is associated with tens of millions of tons of methane emissions worldwide every year. Although coal-fired power plants are being phased out in many countries, coal will be difficult to eliminate entirely due to its key role in steel production, says Plata, of MIT.

    To develop a catalyst that might work in a coal mine, Plata found inspiration in MMO. Her team developed a catalyst based on a silicate material embedded with copper — the same metal found in MMO and much less expensive than the metals usually required to oxidize methane. The material is also porous, which improves the catalyst’s efficiency because it has a larger surface area, and thus more places for reactions to occur, than a nonporous material would. The catalyst turns methane into CO2, a reaction that releases heat, which in turn helps fuel the reaction. If methane concentrations are high enough, the reaction will be self-sustaining, Plata says.
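
    For reference, the net reaction such a catalyst promotes is the complete oxidation of methane, CH4 + 2 O2 → CO2 + 2 H2O, which releases roughly 890 kilojoules per mole of methane; that released heat is what can keep the reaction running on its own once the incoming methane concentration is high enough.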

    Turning methane into CO2 may sound counterproductive, but it reduces warming overall because methane traps much more heat than CO2 and is far less abundant in the atmosphere. If all the excess methane in the atmosphere were turned into CO2, according to a 2019 study led by Jackson, it would result in only 8.2 billion additional tons of CO2 — equivalent to just a few months of CO2 emissions at today’s rates. And the net effect would be to lessen the heating of the atmosphere by a sixth.
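
    The 8.2-billion-ton figure follows from stoichiometry: oxidizing one molecule of CH4 (molar mass about 16) yields one molecule of CO2 (about 44), so the mass grows by a factor of 44/16. A quick check on the article’s numbers, in Python:

        excess_methane_gt = 3.0    # billion metric tons of excess atmospheric methane
        molar_mass_ch4 = 16.04     # grams per mole
        molar_mass_co2 = 44.01     # grams per mole

        co2_from_oxidation_gt = excess_methane_gt * molar_mass_co2 / molar_mass_ch4
        print(f"{co2_from_oxidation_gt:.1f} billion tons of CO2")  # about 8.2, matching the 2019 estimate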

    Cattle feedlots are another place where Plata’s catalytic reactor might work. Barns outfitted with fans to keep cattle comfortable move air around, so reactors could be fitted to these ventilation systems. The next step is determining whether methane concentrations at industrial dairy farms are high enough for the catalyst to work.

    At Drumgoon Dairy in South Dakota, Elijah Martin (left) and Will Sawyer (right) test a small-scale thermal catalytic unit developed in Desirée Plata’s lab at MIT. The reactor transforms methane into carbon dioxide, which could lower the planet’s net warming rate because methane is a stronger greenhouse gas. D. Plata

    Another researcher making progress is energy scientist and engineer Arun Majumdar, one of Jackson’s collaborators at Stanford. In January, Majumdar published initial results describing a catalyst that converts methane into methanol, with an added boost from high-energy ultraviolet light. This UV blast adds the energy needed to overcome CH4’s stubborn bonds — and the carefully designed catalyst stays on target. Previous catalyst designs tended to produce a mix of CO2 and methanol, but this catalyst mostly sticks to making methanol.

    Is geoengineering a path to methane removal?

    A more extreme approach to speed up methane’s natural breakdown is to change the chemistry of the atmosphere itself. A few companies, such as the U.S.-based Blue Dot Change, have proposed releasing chemicals into the sky to enhance methane oxidation.

    Natalie Mahowald, an atmospheric chemist at Cornell University, decided to evaluate this type of geoengineering.

    “I’m not super excited about throwing more things into the atmosphere,” Mahowald says. To meet the goals of the Paris Agreement, limiting global warming to 1.5 to 2 degrees Celsius above the preindustrial average, though, it’s worth exploring all possibilities, she says. “If we’re going to meet these targets,” she says, “we’re going to need some of these crazy ideas to work. So I’m willing to look at it. But I’m looking with a scientist’s critical eye.”

    The main strategy proposed by advocates would inject iron aerosols into the air over the ocean on a sunny day. These aerosols would react with salty sea spray aerosols to form chlorine, which would then attack methane in the atmosphere and initiate further chemical reactions that turn it into CO2. Mahowald wondered how much chlorine would be needed — and if there might be any unintended consequences.

    Detailed modeling revealed something alarming. The iron injections could have the opposite of the intended effect, Mahowald and colleagues reported in July in Nature Communications. Chlorine won’t attack methane if ozone is around. Instead, chlorine will first break down all the ozone it can find. But ozone plays a key role in generating the hydroxyl radicals that naturally break down atmospheric methane. So when ozone levels fall, Mahowald says, the concentration and lifetime of methane molecules in the atmosphere actually increase. To use this strategy to break down methane, geoengineers would need to add a tremendous amount of chlorine to the atmosphere — enough to first break down the ozone, then attack methane.
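
    In simplified form, the competing reactions are Cl + CH4 → HCl + CH3, which removes methane, and the much faster Cl + O3 → ClO + O2, which destroys ozone. These are standard atmospheric chemistry rather than details specific to the new modeling study, but they capture why the added chlorine has to chew through the available ozone before it makes a dent in methane.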

    Removing 20 percent of the atmosphere’s methane, thus reducing the planet’s surface temperature by 0.2 degrees Celsius by 2050, for example, would require creating about 630 million tons of atmospheric chlorine every year. That would in turn require injecting perhaps tens of millions of tons of iron. A form of particulate matter, these iron aerosols could worsen air quality; inhaling particulate matter is associated with a range of health problems, particularly cardiovascular and lung disease. This atmospheric tinkering could also create hydrochloric acid that could reach the ocean and acidify it.

    And there’s no guarantee that some of the chlorine wouldn’t make it all the way up to the ozone layer, depleting the planetary shield that protects us from the sun’s harmful UV rays. Mahowald is still studying this possibility.

    Mahowald is ambivalent about doing research on geoengineering. “We’re just throwing out ideas here because we’re in a terrible, terrible position,” she says. She’s worried about what could happen if all the methane locked up in the world’s permafrost escapes. If scientists can figure out how to use iron aerosols effectively, without adverse effects — and if such geoengineering is accepted by society — we might need it.

    “We’re just trying to see, is there any hope this could work and would we ever want to do it? Would it have enough benefits to outweigh the disadvantages?”

    The committee organized by the National Academies to investigate methane removal is taking these kinds of ethical questions into account, as well as considering the potential cost and scale of technologies. Stein, a committee member, says a framework proposed by Spark Climate Solutions provides some guidance. The organization, a nonprofit based in San Francisco that evaluates methane-removal technologies, proposes investing in tech that can remove tens of millions of tons of methane per year in the coming decades, at a cost of less than $2,000 per ton. Spark cofounder David Mann says the numbers are designed to focus attention and investment on technologies that can make a real difference in curbing climate change in the near term.

    The National Academies group aims to make recommendations about research priorities on methane-removal technologies by next summer. It’s likely that a portfolio of different technologies will be necessary. What works in a cattle feedlot may not work at a wastewater treatment plant, for instance.

    Scientists focused on methane removal are eager for more researchers, research funding and companies to enter the fray — and quickly. “It’s been a crazy year,” Jackson says of 2023’s extreme weather. We’re already feeling the effects of global warming, but we can seize the moment, he says. “This problem is not something for our grandchildren. It’s here.”

  • Nextgen computing: Hard-to-move quasiparticles glide up pyramid edges

    A new kind of “wire” for moving excitons, developed at the University of Michigan, could help enable a new class of devices, perhaps including room temperature quantum computers.
    What’s more, the team observed a dramatic violation of Einstein’s relation, used to describe how particles spread out in space, and leveraged it to move excitons in much smaller packages than previously possible.
    “Nature uses excitons in photosynthesis. We use excitons in OLED displays and some LEDs and solar cells,” said Parag Deotare, co-corresponding author of the study in ACS Nano supervising the experimental work, and an associate professor of electrical and computer engineering. “The ability to move excitons where we want will help us improve the efficiency of devices that already use excitons and expand excitonics into computing.”
    An exciton can be thought of as a particle (hence quasiparticle), but it’s really an electron linked with a positively charged empty space in the lattice of the material (a “hole”). Because an exciton has no net electrical charge, moving excitons are not affected by parasitic capacitances, an electrical interaction between neighboring components in a device that causes energy losses. Excitons are also easy to convert to and from light, so they open the way for extremely fast and efficient computers that use a combination of optics and excitonics, rather than electronics.
    This combination could help enable room temperature quantum computing, said Mackillo Kira, co-corresponding author of the study supervising the theory, and a professor of electrical and computer engineering. Excitons can encode quantum information, and they can hang onto it longer than electrons can inside a semiconductor. But that time is still measured in picoseconds (10⁻¹² seconds) at best, so Kira and others are figuring out how to use femtosecond laser pulses (10⁻¹⁵ seconds) to process information.
    “Full quantum-information applications remain challenging because degradation of quantum information is too fast for ordinary electronics,” he said. “We are currently exploring lightwave electronics as a means to supercharge excitonics with extremely fast processing capabilities.”
    However, the lack of net charge also makes excitons very difficult to move. Previously, Deotare had led a study that pushed excitons through semiconductors with acoustic waves. Now, a pyramid structure enables more precise transport for smaller numbers of excitons, confined to one dimension like a wire.

    It works like this:
    The team used a laser to create a cloud of excitons at a corner of the pyramid’s base, bouncing electrons out of the valence band of a semiconductor into the conduction band — but the negatively charged electrons are still attracted to the positively charged holes left behind in the valence band. The semiconductor is a single layer of tungsten diselenide, just three atoms thick, draped over the pyramid like a stretchy cloth. And the stretch in the semiconductor changes the energy landscape that the excitons experience.
    It seems counterintuitive that the excitons should ride up the pyramid’s edge and settle at the peak when we imagine an energy landscape chiefly governed by gravity. But instead, the landscape is governed by how far apart the valence and conduction bands of the semiconductor are. The energy gap between the two, also known as the semiconductor’s band gap, shrinks where the semiconductor is stretched. The excitons migrate to the lowest energy state, funneled onto the pyramid’s edge where they then rise to its peak.
    Usually, an equation penned by Einstein is good at describing how a bunch of particles diffuses outward and drifts. However, the semiconductor was imperfect, and those defects acted as traps that would nab some of the excitons as they tried to drift by. Because the defects at the trailing side of the exciton cloud were filled in, that side of the distribution diffused outward as predicted. The leading edge, however, did not extend so far. Einstein’s relation was off by more than a factor of 10.
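    For reference, the Einstein relation in question ties how fast a population of particles spreads (its diffusion coefficient D) to how readily it drifts under a force (its mobility μ), with temperature setting the ratio: D = μ·k_B·T, often written D/μ = k_B·T/q for charged carriers. The trapped leading edge meant the cloud’s measured spreading and drift no longer obeyed that fixed ratio.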
    “We’re not saying Einstein was wrong, but we have shown that in complicated cases like this, we shouldn’t be using his relation to predict the mobility of excitons from the diffusion,” said Matthias Florian, co-first-author of the study and a research investigator in electrical and computer engineering, working under Kira.
    To directly measure both, the team needed to detect single photons, emitted when the bound electrons and holes spontaneously recombined. Using time-of-flight measurements, they also figured out where the photons came from precisely enough to measure the distribution of excitons within the cloud.
    The study was supported by the Army Research Office (award no. W911NF2110207) and the Air Force Office of Scientific Research (award no. FA995-22-1-0530).
    The pyramid structure was built in the Lurie Nanofabrication Facility.
    The team has applied for patent protection with the assistance of U-M Innovation Partnerships and is seeking partners to bring the technology to market.

  • Unlocking the secrets of cells, with AI

    Machine learning is now helping researchers analyze the makeup of unfamiliar cells, which could lead to more personalized medicine in the treatment of cancer and other serious diseases.
    Researchers at the University of Waterloo developed GraphNovo, a new program that provides a more accurate understanding of the peptide sequences in cells. Peptides are chains of amino acids within cells and are building blocks as important and unique as DNA or RNA.
    In a healthy person, the immune system can correctly identify the peptides of irregular or foreign cells, such as cancer cells or harmful bacteria, and then target those cells for destruction. For people whose immune system is struggling, the promising field of immunotherapy is working to retrain their immune systems to identify these dangerous invaders.
    “What scientists want to do is sequence those peptides between the normal tissue and the cancerous tissue to recognize the differences,” said Zeping Mao, a PhD candidate in the Cheriton School of Computer Science who developed GraphNovo under the guidance of Dr. Ming Li.
    This sequencing process is particularly difficult for novel illnesses or cancer cells, which may not have been analyzed before. While scientists can draw on an existing peptide database when analyzing diseases or organisms that have previously been studied, each person’s cancer and immune system are unique.
    To quickly build a profile of the peptides in an unfamiliar cell, scientists have been using a method called de novo peptide sequencing, which uses mass spectrometry to rapidly analyze a new sample. This process may leave some peptides incomplete or entirely missing from the sequence.
    Utilizing machine learning, GraphNovo significantly enhances the accuracy in identifying peptide sequences by filling these gaps with the precise mass of the peptide sequence. Such a leap in accuracy will likely be immensely beneficial in a variety of medical areas, especially in the treatment of cancer and the creation of vaccines for ailments such as Ebola and COVID-19. The researchers achieved this breakthrough due to Waterloo’s commitment to advances in the interface between technology and health.
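    GraphNovo itself is a machine-learning model trained on mass spectra, but the constraint it leverages can be illustrated with a toy example: if two stretches of a peptide are identified confidently and the mass of the gap between them is known precisely, only certain combinations of amino acid residues can fill that gap. A hypothetical sketch in Python (standard monoisotopic residue masses; the gap value and tolerance are invented for illustration):

        # Hypothetical illustration: which short runs of amino acids could account for
        # a known mass gap between two confidently identified pieces of a peptide?
        RESIDUE_MASS = {  # monoisotopic residue masses in daltons (rounded)
            "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
            "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
            "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259,
        }

        def fill_gap(gap_mass, tolerance=0.03, max_len=3, prefix=""):
            """List residue strings whose summed mass matches the gap within the tolerance."""
            matches = []
            if prefix and abs(gap_mass) <= tolerance:
                matches.append(prefix)
            if len(prefix) >= max_len or gap_mass < -tolerance:
                return matches
            for residue, mass in RESIDUE_MASS.items():
                matches.extend(fill_gap(gap_mass - mass, tolerance, max_len, prefix + residue))
            return matches

        # A 186.08 Da gap could be G+E, A+D or S+V (in either order); choosing among
        # them requires more evidence from the spectrum, which is where the model helps.
        print(fill_gap(186.08))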
    “If we don’t have an algorithm that’s good enough, we cannot build the treatments,” Mao said. “Right now, this is all theoretical. But soon, we will be able to use it in the real world.”

  • Compact accelerator technology achieves major energy milestone

    Particle accelerators hold great potential for semiconductor applications, medical imaging and therapy, and research in materials, energy and medicine. But conventional accelerators require plenty of elbow room — kilometers — making them expensive and limiting their presence to a handful of national labs and universities.
    Researchers from The University of Texas at Austin, several national laboratories, European universities and the Texas-based company TAU Systems Inc. have demonstrated a compact particle accelerator less than 20 meters long that produces an electron beam with an energy of 10 billion electron volts (10 GeV). There are only two other accelerators currently operating in the U.S. that can reach such high electron energies, but both are approximately 3 kilometers long.
    “We can now reach those energies in 10 centimeters,” said Bjorn “Manuel” Hegelich, associate professor of physics at UT and CEO of TAU Systems, referring to the size of the chamber where the beam was produced. He is the senior author on a recent paper describing their achievement in the journal Matter and Radiation at Extremes.
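    Those figures imply an enormous jump in average accelerating gradient: 10 GeV gained over 0.1 meters is 100 GeV per meter, whereas 10 GeV over a 3-kilometer conventional machine is only about 3.3 MeV per meter, roughly a 30,000-fold difference in energy gained per unit length (taking the 10-centimeter chamber and the 3-kilometer facilities as the relevant lengths).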
    Hegelich and his team are currently exploring the use of their accelerator, called an advanced wakefield laser accelerator, for a variety of purposes. They hope to use it to test how well space-bound electronics can withstand radiation, to image the 3D internal structures of new semiconductor chip designs, and even to develop novel cancer therapies and advanced medical-imaging techniques.
    This kind of accelerator could also be used to drive another device called an X-ray free electron laser, which could take slow-motion movies of processes on the atomic or molecular scale. Examples of such processes include drug interactions with cells, changes inside batteries that might cause them to catch fire, chemical reactions inside solar panels, and viral proteins changing shape when infecting cells.
    The concept for wakefield laser accelerators was first described in 1979. An extremely powerful laser strikes helium gas, heats it into a plasma and creates waves that kick electrons from the gas out in a high-energy electron beam. During the past couple of decades, various research groups have developed more powerful versions. Hegelich and his team’s key advance relies on nanoparticles. An auxiliary laser strikes a metal plate inside the gas cell, which injects a stream of metal nanoparticles that boost the energy delivered to electrons from the waves.
    The laser is like a boat skimming across a lake, leaving behind a wake, and electrons ride this plasma wave like surfers.

    “It’s hard to get into a big wave without getting overpowered, so wake surfers get dragged in by Jet Skis,” Hegelich said. “In our accelerator, the equivalent of Jet Skis are nanoparticles that release electrons at just the right point and just the right time, so they are all sitting there in the wave. We get a lot more electrons into the wave when and where we want them to be, rather than statistically distributed over the whole interaction, and that’s our secret sauce.”
    For this experiment, the researchers used one of the world’s most powerful pulsed lasers, the Texas Petawatt Laser, which is housed at UT and fires one ultra-intense pulse of light every hour. A single petawatt laser pulse contains about 1,000 times the installed electrical power in the U.S. but lasts only 150 femtoseconds, less than a billionth as long as a lightning discharge. The team’s long-term goal is to drive their system with a laser they’re currently developing that fits on a tabletop and can fire repeatedly at thousands of times per second, making the whole accelerator far more compact and usable in much wider settings than conventional accelerators.
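    The grid comparison is about power, not energy: 1 petawatt is 10¹⁵ watts, so a 150-femtosecond pulse delivers only about 10¹⁵ W × 150 × 10⁻¹⁵ s = 150 joules, about what a 60-watt bulb uses in a couple of seconds; the extraordinary part is how briefly that energy is concentrated.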
    The study’s co-first authors are Constantin Aniculaesei, corresponding author now at Heinrich Heine University Düsseldorf, Germany; and Thanh Ha, doctoral student at UT and researcher at TAU Systems. Other UT faculty members are professors Todd Ditmire and Michael Downer.
    Hegelich and Aniculaesei have submitted a patent application describing the device and method to generate nanoparticles in a gas cell. TAU Systems, spun out of Hegelich’s lab, holds an exclusive license from the University for this foundational patent. As part of the agreement, UT has been issued shares in TAU Systems.
    Support for this research was provided by the U.S. Air Force Office of Scientific Research, the U.S. Department of Energy, the U.K. Engineering and Physical Sciences Research Council and the European Union’s Horizon 2020 research and innovation program.

  • Defending your voice against deepfakes

    Recent advances in generative artificial intelligence have spurred developments in realistic speech synthesis. While this technology has the potential to improve lives through personalized voice assistants and accessibility-enhancing communication tools, it also has led to the emergence of deepfakes, in which synthesized speech can be misused to deceive humans and machines for nefarious purposes.
    In response to this evolving threat, Ning Zhang, an assistant professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, developed a tool called AntiFake, a novel defense mechanism designed to thwart unauthorized speech synthesis before it happens. Zhang presented AntiFake Nov. 27 at the Association for Computing Machinery’s Conference on Computer and Communications Security in Copenhagen, Denmark.
    Unlike traditional deepfake detection methods, which are used to evaluate and uncover synthetic audio as a post-attack mitigation tool, AntiFake takes a proactive stance. It employs adversarial techniques to prevent the synthesis of deceptive speech by making it more difficult for AI tools to read necessary characteristics from voice recordings. The code is freely available to users.
    “AntiFake makes sure that when we put voice data out there, it’s hard for criminals to use that information to synthesize our voices and impersonate us,” Zhang said. “The tool uses a technique of adversarial AI that was originally part of the cybercriminals’ toolbox, but now we’re using it to defend against them. We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it’s completely different to AI.”
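    The released AntiFake code implements this far more carefully, but the general shape of an adversarial audio perturbation can be sketched in a few lines of Python. Everything below is a stand-in: the “embedding” is a random projection rather than a real speaker encoder, and the search is a crude random walk rather than the paper’s optimization, so it only illustrates the idea of nudging a waveform within a tight amplitude budget so that a feature extractor sees something very different.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in "speaker embedding": a fixed random projection of the waveform.
        # AntiFake itself works against real neural speaker encoders instead.
        PROJECTION = rng.normal(size=(16, 16000))

        def embed(audio):
            return PROJECTION @ audio

        def perturb(audio, budget=0.005, iters=200):
            """Random-search sketch: find a small per-sample perturbation (within +/- budget)
            that pushes the clip's embedding as far as possible from the original."""
            original = embed(audio)
            best_delta = np.zeros_like(audio)
            best_dist = 0.0
            for _ in range(iters):
                candidate = np.clip(best_delta + rng.normal(scale=budget / 4, size=audio.shape),
                                    -budget, budget)
                dist = np.linalg.norm(embed(audio + candidate) - original)
                if dist > best_dist:
                    best_delta, best_dist = candidate, dist
            return audio + best_delta

        one_second_clip = rng.normal(scale=0.1, size=16000)  # placeholder for real 16 kHz audio
        protected = perturb(one_second_clip)
        print("max per-sample change:", np.max(np.abs(protected - one_second_clip)))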
    To ensure AntiFake can stand up against an ever-changing landscape of potential attackers and unknown synthesis models, Zhang and first author Zhiyuan Yu, a graduate student in Zhang’s lab, built the tool to be generalizable and tested it against five state-of-the-art speech synthesizers. AntiFake achieved a protection rate of over 95%, even against unseen commercial synthesizers. They also tested AntiFake’s usability with 24 human participants to confirm the tool is accessible to diverse populations.
    Currently, AntiFake can protect short clips of speech, taking aim at the most common type of voice impersonation. But, Zhang said, there’s nothing to stop this tool from being expanded to protect longer recordings, or even music, in the ongoing fight against disinformation.
    “Eventually, we want to be able to fully protect voice recordings,” Zhang said. “While I don’t know what will be next in AI voice tech — new tools and features are being developed all the time — I do think our strategy of turning adversaries’ techniques against them will continue to be effective. AI remains vulnerable to adversarial perturbations, even if the engineering specifics may need to shift to maintain this as a winning strategy.”

  • New framework for using AI in health care considers medical knowledge, practices, procedures, values

    Health care organizations are looking to artificial intelligence (AI) tools to improve patient care, but their translation into clinical settings has been inconsistent, in part because evaluating AI in health care remains challenging. In a new article, researchers propose a framework for using AI that includes practical guidance for applying values and that incorporates not just the tool’s properties but the systems surrounding its use.
    The article was written by researchers at Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto. It is published in Patterns.
    “Regulatory guidelines and institutional approaches have focused narrowly on the performance of AI tools, neglecting knowledge, practices, and procedures necessary to integrate the model within the larger social systems of medical practice,” explains Alex John London, K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon, who coauthored the article. “Tools are not neutral — they reflect our values — so how they work reflects the people, processes, and environments in which they are put to work.”
    London is also Director of Carnegie Mellon’s Center for Ethics and Policy and Chief Ethicist at Carnegie Mellon’s Block Center for Technology and Society as well as a faculty member in CMU’s Department of Philosophy.
    London and his coauthors advocate for a conceptual shift in which AI tools are viewed as parts of a larger “intervention ensemble,” a set of knowledge, practices, and procedures that are necessary to deliver care to patients. In previous work with other colleagues, London has applied this concept to pharmaceuticals and to autonomous vehicles. The approach treats AI tools as “sociotechnical systems,” and the authors’ proposed framework seeks to advance the responsible integration of AI systems into health care.
    Previous work in this area has been largely descriptive, explaining how AI systems interact with human systems. The framework proposed by London and his colleagues is proactive, providing guidance to designers, funders, and users about how to ensure that AI systems can be integrated into workflows with the greatest potential to help patients. Their approach can also be used for regulation and institutional insights, as well as for appraising, evaluating, and using AI tools responsibly and ethically. To illustrate their framework, the authors apply it to AI systems developed for diagnosing more-than-mild diabetic retinopathy.
    “Only a small majority of models evaluated through clinical trials have shown a net benefit,” says Melissa McCradden, a bioethicist at The Hospital for Sick Children and assistant professor of clinical and public health at the Dalla Lana School of Public Health, who coauthored the article. “We hope our proposed framework lends precision to evaluation and interests regulatory bodies exploring the kinds of evidence needed to support the oversight of AI systems.”

  • Measuring long-term heart stress dynamics with smartwatch data

    Biomedical engineers at Duke University have developed a method using data from wearable devices such as smartwatches to digitally mimic an entire week’s worth of an individual’s heartbeats. The previous record covered only a few minutes.
    Called the Longitudinal Hemodynamic Mapping Framework (LHMF), the approach creates “digital twins” of a specific patient’s blood flow to assess its 3D characteristics. The advance is an important step toward improving on the current gold standard in evaluating the risks of heart disease or heart attack, which uses snapshots of a single moment in time — a challenging approach for a disease that progresses over months to years.
    The research was conducted in collaboration with computational scientists at Lawrence Livermore National Laboratory and was published on November 15, 2023, at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23). The conference is the leading global conference in the field of high-performance computing.
    “Modeling a patient’s 3D blood flow for even a single day would take a century’s worth of compute time on today’s best supercomputers,” said Cyrus Tanade, a PhD candidate in the laboratory of Amanda Randles, the Alfred Winborne and Victoria Stover Mordecai Associate Professor of Biomedical Sciences at Duke. “If we want to capture blood flow dynamics over long periods of time, we need a paradigm-shifting solution in how we approach 3D personalized simulations.”
    Over the past decade, researchers have steadily made progress toward accurately modeling the pressures and forces created by blood flowing through an individual’s specific vascular geometry. Randles, one of the leaders in the field, has developed a software package called HARVEY to tackle this challenge using the world’s fastest supercomputers.
    One of the most commonly accepted uses of such coronary digital twins is to determine whether or not a patient should receive a stent to treat a plaque or lesion. This computational method is much less invasive than the traditional approach of threading a probe on a guide wire into the artery itself.
    While this application requires only a handful of heartbeat simulations and works for a single snapshot in time, the field’s goal is to track pressure dynamics over weeks or months after a patient leaves a hospital. To get even 10 minutes of simulated data on the Duke group’s computer cluster, however, they had to lock it down for four months.

    “Obviously, that’s not a workable solution to help patients because of the computing costs and time requirements,” Randles said. “Think of it as taking three weeks to simulate what the weather will be like tomorrow. By the time you predict a rainstorm, the water would have already dried up.”
    To ever apply this technology to real-world people over the long term, researchers must find a way to reduce the computational load. The new paper introduces the Longitudinal Hemodynamic Mapping Framework, which cuts what used to take nearly a century of simulation time down to just 24 hours.
    “The solution is to simulate the heartbeats in parallel rather than sequentially by breaking the task up amongst many different nodes,” Tanade said. “Conventionally, the tasks are broken up spatially with parallel computing. But here, they’re broken up in time as well.”
    For example, one could reasonably assume that the specifics of a coronary flow at 10:00 am on a Monday will likely have little impact on the flow at 2:00 pm on a Wednesday. This allowed the team to develop a method to accurately simulate different chunks of time simultaneously and piece them back together. This breakdown made the pieces small enough to be simulated using cloud computing systems like Amazon Web Services rather than requiring large-scale supercomputers.
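    The time-parallel idea itself is simple to sketch, though the real framework wraps a full 3D fluid solver. In the hypothetical Python sketch below, each chunk of heartbeats is handed to a separate worker and the results are stitched back together in order; the stand-in per-beat computation is invented purely to show the structure.

        from concurrent.futures import ProcessPoolExecutor

        def simulate_chunk(chunk):
            """Stand-in for a 3D blood flow solve over one chunk of heartbeats.
            It just returns a fake per-beat wall-stress number to show the plumbing."""
            start_beat, n_beats, heart_rate = chunk
            return [0.5 + 0.01 * ((start_beat + i) % heart_rate) for i in range(n_beats)]

        def longitudinal_map(total_beats=700_000, beats_per_chunk=1_000, heart_rate=70):
            # Break the week of heartbeats into chunks in time, not just space.
            chunks = [(start, min(beats_per_chunk, total_beats - start), heart_rate)
                      for start in range(0, total_beats, beats_per_chunk)]
            with ProcessPoolExecutor() as pool:
                per_chunk = pool.map(simulate_chunk, chunks)  # chunks computed concurrently
            # Stitch the chunk results back together into one continuous record.
            return [beat for chunk_result in per_chunk for beat in chunk_result]

        if __name__ == "__main__":
            week = longitudinal_map()
            print(len(week), "simulated heartbeats")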
    To put the mapping framework to the test, researchers used tried and true methods to simulate 750 heartbeats — about 10 minutes of biological time — with the lab’s allotment of computing time on Duke’s computer cluster. Using continuous data on heart rate and electrocardiography from a smartwatch, it produced a complete set of 3D blood flow biomarkers that could correlate with disease progression and adverse events. It took four months to complete and exceeded the existing record by an order of magnitude.
    They then compared these results to those produced by LHMF running on Amazon Web Services and Summit, an Oak Ridge National Laboratory system, in just a few hours. The errors were negligible, showing that LHMF could work on a useful time scale.

    The team then further refined LHMF by introducing a clustering method, further reducing the computational costs and allowing them to track the frictional force of blood on vessel walls — a well-known biomarker of cardiovascular disease — for over 700,000 heartbeats, or one week of continuous activity. These results allowed the group to create a personalized, longitudinal hemodynamic map, showing how the forces vary over time and the percentage of time spent in various vulnerable states.
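    As a consistency check on those counts, 700,000 beats over a week works out to 700,000 / (7 × 24 × 60) ≈ 69 beats per minute, a plausible average heart rate for continuous wear, and the earlier 750-beat benchmark corresponds to about 10 minutes at 75 beats per minute.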
    “The results significantly differed from those obtained over a single heartbeat,” Tanade said. “This demonstrates that capturing longitudinal blood flow metrics provides nuances and information that is otherwise not perceptible with the previous gold standard approach.”
    “If we can create a temporal map of wall stress in critical areas like the coronary artery, we could predict the risk of a patient developing atherosclerosis or the progression of such diseases,” Randles added. “This method could allow us to identify cases of heart disease much earlier than is currently possible.”
    This work was supported by the National Science Foundation (ACI-1548562, DGE164486), the Department of Energy (DE-AC52-07NA27344, DE-AC05-00OR22725), Amazon Web Services, the National Institutes of Health (DP1AG082343) and the Coulter Foundation.

  • Immersive engagement in mixed reality can be measured with reaction time

    In mixed reality, where digital content is overlaid on the user’s view of the physical world, a person’s immersive engagement with the program is called presence. Now, UMass Amherst researchers are the first to identify reaction time as a potential presence measurement tool. Their findings, published in IEEE Transactions on Visualization and Computer Graphics, have implications for calibrating mixed reality to the user in real time.
    “In virtual reality, the user is in the virtual world; they have no connection with their physical world around them,” explains Fatima Anwar, assistant professor of electrical and computer engineering, and an author on the paper. “Mixed reality is a combination of both: You can see your physical world, but then on top of that, you have that spatially related information that is virtual.” She gives attaching a virtual keyboard onto a physical table as an example. This is similar to augmented reality but takes it a step further by making the digital elements more interactive with the user and the environment.
    The uses for mixed reality are most obvious within gaming, but Anwar says that it’s rapidly expanding into other fields: academics, industry, construction and healthcare.
    However, mixed reality experiences vary in quality: “Does the user feel that they are present in that environment? How immersive do they feel? And how does that impact their interactions with the environment?” asks Anwar. This is what is defined as “presence.”
    Up to now, presence has been measured with subjective questionnaires after a user exits a mixed-reality program. Unfortunately, when presence is measured after the fact, it’s hard to capture how a user felt across the entire experience, especially during long exposure scenes. (Also, people are not very articulate in describing their feelings, making them an unreliable data source.) The ultimate goal is to have an instantaneous measure of presence so that the mixed reality simulation can be adjusted in the moment for optimal presence. “Oh, their presence is going down. Let’s do an intervention,” says Anwar.
    Yasra Chandio, doctoral candidate in computer engineering and lead study author, gives medical procedures as an example of the importance of this real-time presence calibration: If a surgeon needs millimeter-level precision, they may use mixed reality as a guide to tell them exactly where they need to operate.
    “If we just show the organ in front of them, and we don’t adjust for the height of the surgeon, for instance, that could be delaying the surgeon and could have inaccuracies for them,” she says. Low presence can also contribute to cybersickness, a feeling of dizziness or nausea that can occur in the body when a user’s bodily perception does not align with what they’re seeing. However, if the mixed reality system is internally monitoring presence, it can make adjustments in real-time, like moving the virtual organ rendering closer to eye level.

    One marker within mixed reality that can be measured continuously and passively is reaction time, or how quickly a user interacts with the virtual elements. Through a series of experiments, the researchers determined that reaction time is associated with presence: slower reaction times indicate lower presence and faster reaction times indicate higher presence, with 80% predictive accuracy even with a small dataset.
    To test this, the researchers put participants in modified “Fruit Ninja” mixed reality scenarios (without the scoring), adjusting how authentic the digital elements appeared in order to manipulate presence.
    Presence is a combination of two factors: place illusion and plausibility illusion. “First of all, virtual objects should look real,” says Anwar. That’s place illusion. “The objects should look the way physical things look, and the second thing is: are they behaving in a real way? Do they follow the laws of physics while they’re behaving in the real world?” This is plausibility illusion.
    In one experiment, they adjusted place illusion and the fruit appeared either as lifelike fruit or abstract cartoons. In another experiment, they adjusted the plausibility illusion by showing mugs filling up with coffee either in the correct upright position or sideways.
    What they found: People reacted more quickly to the lifelike fruit than to the cartoonish-looking fruit, and the same pattern held for the plausible versus implausible behavior of the coffee mug.
    Reaction time is a good measure of presence because it highlights if the virtual elements are a tool or a distraction. “If a person is not feeling present, they would be looking into that environment and figuring out things,” explains Chandio. “Their cognition in perception is focused on something other than the task at hand, because they are figuring out what is going on.”
    “Some people are going to argue, ‘Why would you not create the best scene in the first place?’ but that’s because humans are very complex,” Chandio explains. “What works for me may not work for you may not work for Fatima, because we have different bodies, our hands move differently, we think of the world differently. We perceive differently.”