More stories

  • High groundwater depletion risk in South Korea in 2080s

    Groundwater is the water found beneath the Earth’s surface. It forms when precipitation such as rain and snow seeps into the soil, replenishing rivers and lakes. This resource supplies our drinking water. However, a recent study has alarmed the scientific community by predicting that approximately three million people in currently untapped areas of Korea could face groundwater depletion by 2080.
    A research team led by Professor Jonghun Kam of the Division of Environmental Science and Engineering and Dr. Chang-Kyun Park of the Institute of Environmental and Energy Technology (currently working for LG Energy Solution) at Pohang University of Science and Technology (POSTECH) used an advanced statistical method to analyze surface and deep groundwater level data from 2009 to 2020, revealing critical spatiotemporal patterns in groundwater levels. Their findings were published in the international journal “Science of the Total Environment.”
    Groundwater is crucial for ecosystems and socioeconomic development, particularly in mountainous regions where water systems are limited. However, recent social and economic activities along with urban development have led to significant groundwater overuse. Additionally, rising land temperatures are altering regional water flows and supplies, necessitating water policies that consider both natural and human impacts to effectively address climate change.
    In the study, the researchers used an advanced statistical method called “cyclostationary empirical orthogonal function analysis (CSEOF)” to analyze water level data from nearly 200 surface and deep groundwater stations in the southern Korean Peninsula from 2009 to 2020. This analysis helped them identify important spatiotemporal patterns in groundwater levels.
    The first and second principal components revealed that water level patterns mirrored recurring seasonal changes and droughts. Shallow groundwater proved more sensitive to the seasonality of precipitation than to drought occurrence, whereas deep groundwater responded more strongly to drought than to seasonal precipitation. This indicates that both shallow and deep groundwater are crucial for meeting community water needs and mitigating drought effects.
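    The kind of decomposition behind these findings can be illustrated with a plain empirical orthogonal function (EOF) analysis, essentially principal component analysis applied to standardized station records. The sketch below runs such an analysis on synthetic monthly water levels; the station count, the synthetic seasonal and drought signals, and the use of ordinary EOF rather than the cyclostationary variant (CSEOF) used in the study are all illustrative assumptions.

    ```python
    # Minimal sketch: ordinary EOF (PCA) analysis of monthly groundwater levels.
    # Synthetic data and plain EOF stand in for the paper's CSEOF method.
    import numpy as np

    rng = np.random.default_rng(0)
    n_months, n_stations = 144, 200          # 2009-2020, ~200 stations (illustrative)
    t = np.arange(n_months)

    seasonal = np.sin(2 * np.pi * t / 12)                      # annual precipitation cycle
    drought = -np.clip(np.sin(2 * np.pi * t / 60), 0, None)    # slow multi-year deficits
    station_mix = rng.uniform(0, 1, n_stations)                # shallow vs. deep sensitivity

    # Each station's water level = weighted seasonal + drought signal + noise
    levels = (np.outer(seasonal, station_mix)
              + np.outer(drought, 1 - station_mix)
              + 0.3 * rng.standard_normal((n_months, n_stations)))

    # Standardize each station's series, then extract the EOF modes via SVD
    anom = (levels - levels.mean(axis=0)) / levels.std(axis=0)
    pcs, s, eofs = np.linalg.svd(anom, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print("Variance explained by first three modes:", explained[:3].round(2))

    # The leading PC time series can then be compared with precipitation
    # seasonality and drought indices, as done in the study.
    pc1 = pcs[:, 0] * s[0]
    print("|corr(PC1, seasonal)|:", round(abs(np.corrcoef(pc1, seasonal)[0, 1]), 2))
    print("|corr(PC1, drought)| :", round(abs(np.corrcoef(pc1, drought)[0, 1]), 2))
    ```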
    The third principal component highlighted a decline in groundwater levels in the western Korean Peninsula since 2009. The researchers projected that if this decline in deep groundwater continues, at least three million people in untapped or newly developed areas, primarily in the southwestern part of the peninsula, could face unprecedentedly low groundwater levels as a new normal (a condition the study defines as groundwater depletion) by 2080. If the research team’s predictions are correct, the impact would be particularly severe in drought-prone, untapped areas where groundwater is heavily relied upon.
    Professor Jonghun Kam of POSTECH stated, “By leveraging long-term, multi-layer groundwater level data on Korea and advanced statistical techniques, we successfully analyzed the changing patterns of deep- and shallow-level groundwater levels and predicted the risk of groundwater depletion.” He added, “An integrated national development plan is essential, one that considers not only regional development plans but also balanced water resource management plans.”

  • The thinnest lens on Earth, enabled by excitons

    Lenses are used to bend and focus light. Normal lenses rely on their curved shape to achieve this effect, but physicists from the University of Amsterdam and Stanford University have made a flat lens, only three atoms thick, that relies on quantum effects instead. This type of lens could be used in future augmented reality glasses.
    When you imagine a lens, you probably picture a piece of curved glass. This type of lens works because light is refracted (bent) when it enters the glass, and again when it exits, allowing us to make things appear larger or closer than they actually are. We have used curved lenses for more than two millennia, allowing us to study the movements of distant planets and stars, to reveal tiny microorganisms, and to improve our vision.
    Ludovico Guarneri, Thomas Bauer, and Jorik van de Groep of the University of Amsterdam, together with colleagues from Stanford University in California, took a different approach. Using a single layer of a unique material called tungsten disulphide (WS2 for short), they constructed a flat lens that is half a millimetre wide, but just 0.0000006 millimetres, or 0.6 nanometres, thick. This makes it the thinnest lens on Earth!
    Rather than relying on a curved shape, the lens is made of concentric rings of WS2 with gaps in between. This is called a ‘Fresnel lens’ or ‘zone plate lens’, and it focuses light using diffraction rather than refraction. The size of, and distance between, the rings (relative to the wavelength of the light hitting them) determine the lens’s focal length. The design used here focuses red light 1 mm from the lens.
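    The relationship between ring layout, wavelength and focal length follows the standard zone-plate formula, which a short calculation makes concrete. In the sketch below, the 1 mm focal length comes from the article, while the 633 nm wavelength is only an assumed value for “red light”; the actual ring layout of the WS2 lens may differ.

    ```python
    # Minimal sketch: radii of Fresnel zone-plate rings for a chosen focal length.
    # The 633 nm wavelength is an assumed value for "red light"; the 1 mm focal
    # length comes from the article.
    import math

    wavelength = 633e-9   # assumed red-light wavelength in metres
    focal_len = 1e-3      # 1 mm focal length, as described in the article

    def zone_radius(n: int) -> float:
        """Outer radius of the n-th Fresnel zone, in metres."""
        return math.sqrt(n * wavelength * focal_len + (n * wavelength / 2) ** 2)

    for n in range(1, 6):
        print(f"zone {n}: r = {zone_radius(n) * 1e6:.2f} micrometres")
    ```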
    Quantum enhancement
    A unique feature of this lens is that its focussing efficiency relies on quantum effects within WS2. These effects allow the material to efficiently absorb and re-emit light at specific wavelengths, giving the lens the built-in ability to work better for these wavelengths.
    This quantum enhancement works as follows. First, WS2 absorbs light by sending an electron to a higher energy level. Due to the ultra-thin structure of the material, the negatively charged electron and the positively charged ‘hole’ it leaves behind in the atomic lattice stay bound together by the electrostatic attraction between them, forming what is known as an ‘exciton’. These excitons quickly disappear again by the electron and hole merging together and sending out light. This re-emitted light contributes to the lens’s efficiency.

    The scientists detected a clear peak in lens efficiency for the specific wavelengths of light sent out by the excitons. While the effect is already observed at room temperature, the lenses are even more efficient when cooled down. This is because excitons do their work better at lower temperatures.
    Augmented reality
    Another one of the lens’s unique features is that, while some of the light passing through it makes a bright focal point, most light passes through unaffected. While this may sound like a disadvantage, it actually opens new doors for use in technology of the future. “The lens can be used in applications where the view through the lens should not be disturbed, but a small part of the light can be tapped to collect information. This makes it perfect for wearable glasses such as for augmented reality,” explains Jorik van de Groep, one of the authors of the paper.
    The researchers are now setting their sights on designing and testing more complex and multifunctional optical coatings whose function (such as focussing light) can be adjusted electrically. “Excitons are very sensitive to the charge density in the material, and therefore we can change the refractive index of the material by applying a voltage,” says Van de Groep. The future of excitonic materials is bright!

  • Generative AI to protect image privacy

    Image privacy could be protected with the use of generative artificial intelligence. Researchers from Japan, China and Finland created a system which replaces parts of images that might threaten confidentiality with visually similar but AI-generated alternatives. In tests of the system, named “generative content replacement,” 60% of viewers couldn’t tell which images had been altered. The researchers intend for this system to provide a more visually cohesive option for image censoring, which helps to preserve the narrative of the image while protecting privacy. This research was presented at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems, held in Honolulu, Hawaii, in the U.S., in May 2024.
    With just a few text prompts, generative AI can offer a quick fix for a tricky school essay, a new business strategy or endless meme fodder. The advent of generative AI into daily life has been swift, and the potential scale of its role and influence are still being grappled with. Fears over its impact on future job security, online safety and creative originality have led to strikes from Hollywood writers, court cases over faked photos and heated discussions about authenticity.
    However, a team of researchers has proposed using a sometimes controversial feature of generative AI — its ability to manipulate images — as a way to solve privacy issues.
    “We found that the existing image privacy protection techniques are not necessarily able to hide information while maintaining image aesthetics. Resulting images can sometimes appear unnatural or jarring. We considered this a demotivating factor for people who might otherwise consider applying privacy protection,” explained Associate Professor Koji Yatani from the Graduate School of Engineering at the University of Tokyo. “So, we decided to explore how we can achieve both — that is, robust privacy protection and image usability — at the same time by incorporating the latest generative AI technology.”
    The researchers created a computer system which they named generative content replacement (GCR). This tool identifies what might constitute a privacy threat and automatically replaces it with a realistic but artificially created substitute. For example, personal information on a ticket stub could be replaced with illegible letters, or a private building exchanged for a fake building or other landscape features.
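    The release does not describe the implementation details, but the core replacement step can be sketched with an off-the-shelf diffusion inpainting model: a binary mask marks the privacy-sensitive region, and the model fills it with plausible synthetic content. The model checkpoint, the file names, the prompt, and the assumption that a separate detector has already produced the mask are all illustrative; this is not the authors’ GCR system.

    ```python
    # Minimal sketch of a generative-replacement step, not the authors' GCR system.
    # Assumes a privacy-sensitive region has already been located and expressed as
    # a binary mask; a diffusion inpainting model then fills it with plausible,
    # synthetic content. Checkpoint, file names and prompt are illustrative.
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting"
    )

    image = Image.open("photo.png").convert("RGB").resize((512, 512))
    mask = Image.open("privacy_mask.png").convert("L").resize((512, 512))  # white = replace

    result = pipe(
        prompt="a generic building facade",   # visually similar but fictional content
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("photo_protected.png")
    ```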
    “There are a number of commonly used image protection methods, such as blurring, color filling or just removing the affected part of the image. Compared to these, our results show that generative content replacement can better maintain the story of the original images and achieve higher visual harmony,” said Yatani. “We found that participants couldn’t detect GCR in 60% of images.”
    For now, the GCR system requires a lot of computation resources, so it won’t be available on any personal devices just yet. The tested system was fully automatic, but the team has since developed a new interface to allow users to customize images, giving more control over the final outcome.
    Although some may be concerned about the risks of this type of realistic image alteration, where the lines between original and altered imagery become more ambiguous, the team is positive about its advantages. “For public users, we believe that the greatest benefit of this research is providing a new option for image privacy protection,” said Yatani. “GCR offers a novel method for protecting against privacy threats, while maintaining visual coherence for storytelling purposes and enabling people to more safely share their content.”

  • Researchers apply quantum computing methods to protein structure prediction

    Researchers from Cleveland Clinic and IBM recently published findings in the Journal of Chemical Theory and Computation that could lay the groundwork for applying quantum computing methods to protein structure prediction. This publication is the first peer-reviewed quantum computing paper from the Cleveland Clinic-IBM Discovery Accelerator partnership.
    For decades, researchers have leveraged computational approaches to predict protein structures. A protein folds itself into a structure that determines how it functions and binds to other molecules in the body. These structures determine many aspects of human health and disease.
    By accurately predicting the structure of a protein, researchers can better understand how diseases spread and thus how to develop effective therapies. Cleveland Clinic postdoctoral fellow Bryan Raubenolt, Ph.D., and IBM researcher Hakan Doga, Ph.D., spearheaded a team to discover how quantum computing can improve current methods.
    In recent years, machine learning techniques have made significant progress in protein structure prediction. These methods are reliant on training data (a database of experimentally determined protein structures) to make predictions. This means that they are constrained by how many proteins they have been taught to recognize. This can lead to lower levels of accuracy when the programs/algorithms encounter a protein that is mutated or very different from those on which they were trained, which is common with genetic disorders.
    The alternative method is to simulate the physics of protein folding. Simulations allow researchers to look at a given protein’s various possible shapes and find the most stable one. The most stable shape is critical for drug design.
    The challenge is that, beyond a certain protein size, these simulations become practically impossible on a classical computer. In a way, increasing the size of the target protein is comparable to increasing the dimensions of a Rubik’s cube. For a small protein with 100 amino acids, a classical computer would need time equal to the age of the universe to exhaustively search all the possible outcomes, says Dr. Raubenolt.
    To help overcome these limitations, the research team applied a mix of quantum and classical computing methods. This framework could allow quantum algorithms to address the areas that are challenging for state-of-the-art classical computing, including protein size, intrinsic disorder, mutations and the physics involved in protein folding. The framework was validated by accurately predicting the folding of a small fragment of a Zika virus protein on a quantum computer and comparing the result against state-of-the-art classical methods.

    The quantum-classical hybrid framework’s initial results outperformed both a classical physics-based method and AlphaFold2. Although AlphaFold2 is designed to work best with larger proteins, the comparison nonetheless demonstrates this framework’s ability to create accurate models without directly relying on substantial training data.
    The researchers used a quantum algorithm to first model the lowest energy conformation for the fragment’s backbone, which is typically the most computationally demanding step of the calculation. Classical approaches were then used to convert the results obtained from the quantum computer, reconstruct the protein with its sidechains, and perform final refinement of the structure with classical molecular mechanics force fields. The project shows one of the ways that problems can be deconstructed into parts, with quantum computing methods addressing some parts and classical computing others, for increased accuracy.
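    As a toy illustration of that division of labour, the sketch below brute-forces the lowest-energy backbone conformation of a short peptide on a 2-D lattice, standing in for the conformational search that the team delegated to a quantum algorithm, while sidechain reconstruction and force-field refinement would remain classical steps. The sequence, lattice model and contact energy are illustrative assumptions, not the representation used in the paper.

    ```python
    # Toy sketch of the "find the lowest-energy backbone conformation" step, done
    # here by exhaustive classical search on a 2-D HP lattice model. Sequence,
    # lattice and energy function are illustrative assumptions, not the paper's
    # model of the Zika protein fragment.
    from itertools import product

    SEQ = "HPHPPHHPH"                      # H = hydrophobic, P = polar (toy sequence)
    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def energy(coords):
        """Count non-bonded H-H lattice contacts; lower (more negative) is better."""
        e = 0
        for i in range(len(coords)):
            for j in range(i + 2, len(coords)):            # skip chain neighbours
                if SEQ[i] == SEQ[j] == "H":
                    dx = coords[i][0] - coords[j][0]
                    dy = coords[i][1] - coords[j][1]
                    if abs(dx) + abs(dy) == 1:
                        e -= 1
        return e

    best = (1, None)
    for walk in product(range(4), repeat=len(SEQ) - 1):    # all move sequences
        coords, ok = [(0, 0)], True
        for m in walk:
            nxt = (coords[-1][0] + MOVES[m][0], coords[-1][1] + MOVES[m][1])
            if nxt in coords:                              # enforce self-avoidance
                ok = False
                break
            coords.append(nxt)
        if ok:
            e = energy(coords)
            if e < best[0]:
                best = (e, coords)

    print("lowest lattice energy:", best[0])
    print("backbone conformation:", best[1])
    # Classical post-processing (sidechain reconstruction, force-field refinement)
    # would follow this step in the hybrid workflow described above.
    ```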
    “One of the most unique things about this project is the number of disciplines involved,” says Dr. Raubenolt. “Our team’s expertise ranges from computational biology and chemistry, structural biology, software and automation engineering, to experimental atomic and nuclear physics, mathematics, and of course quantum computing and algorithm design. It took the knowledge from each of these areas to create a computational framework that can mimic one of the most important processes for human life.”
    The team’s combination of classical and quantum computing methods is an essential step for advancing our understanding of protein structures, and how they impact our ability to treat and prevent disease. The team plans to continue developing and optimizing quantum algorithms that can predict the structure of larger and more sophisticated proteins.
    “This work is an important step forward in exploring where quantum computing capabilities could show strengths in protein structure prediction,” says Dr. Doga. “Our goal is to design quantum algorithms that can find how to predict protein structures as realistically as possible.”

  • Theoretical quantum speedup with the quantum approximate optimization algorithm

    In a new paper in Science Advances on May 29, researchers at JPMorgan Chase, the U.S. Department of Energy’s (DOE) Argonne National Laboratory and Quantinuum have demonstrated clear evidence of a quantum algorithmic speedup for the quantum approximate optimization algorithm (QAOA).
    This algorithm has been studied extensively and has been implemented on many quantum computers. It has potential application in fields such as logistics, telecommunications, financial modeling and materials science.
    “This work is a significant step towards reaching quantum advantage, laying the foundation for future impact in production,” said Marco Pistoia, head of Global Technology Applied Research at JPMorgan Chase.
    The team examined whether a quantum algorithm with low implementation costs could provide a quantum speedup over the best-known classical methods. QAOA was applied to the Low Autocorrelation Binary Sequences problem, which has significance in understanding the behavior of physical systems, signal processing and cryptography. The study showed that as the algorithm was asked to tackle increasingly large problems, the time it would take to solve them would grow at a slower rate than that of a classical solver.
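    The Low Autocorrelation Binary Sequences problem asks for a sequence of +1s and -1s whose aperiodic autocorrelations are as small as possible. The brute-force sketch below computes this sidelobe energy and the corresponding merit factor for a small sequence length; it is purely illustrative and is not one of the classical or quantum solvers benchmarked in the study.

    ```python
    # Minimal sketch of the Low Autocorrelation Binary Sequences (LABS) objective:
    # brute-force search for the +/-1 sequence of length N with the smallest
    # sidelobe energy. Illustrative only; not the solvers used in the paper.
    from itertools import product

    def sidelobe_energy(s):
        """Sum of squared aperiodic autocorrelations C_k for k = 1..N-1."""
        n = len(s)
        return sum(sum(s[i] * s[i + k] for i in range(n - k)) ** 2
                   for k in range(1, n))

    N = 13
    best_seq, best_e = None, float("inf")
    for bits in product((-1, 1), repeat=N):
        e = sidelobe_energy(bits)
        if e < best_e:
            best_seq, best_e = bits, e

    merit_factor = N * N / (2 * best_e)
    print("optimal energy:", best_e, "merit factor:", round(merit_factor, 2))
    # The cost of this exhaustive search grows as 2^N, which is why heuristics,
    # and in the new study QAOA, are needed for larger N.
    ```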
    To explore the quantum algorithm’s performance in an ideal noiseless setting, JPMorgan Chase and Argonne jointly developed a simulator to evaluate the algorithm’s performance at scale. It was built on the Polaris supercomputer, accessed through the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility. The ALCF is supported by DOE’s Advanced Scientific Computing Research program.
    “The large-scale quantum circuit simulations efficiently utilized the DOE petascale supercomputer Polaris located at the ALCF. These results show how high performance computing can complement and advance the field of quantum information science,” said Yuri Alexeev, a computational scientist at Argonne. Jeffrey Larson, a computational mathematician in Argonne’s Mathematics and Computer Science Division, also contributed to this research.
    To take the first step toward practical realization of the speedup in the algorithm, the researchers demonstrated a small-scale implementation on Quantinuum’s System Model H1 and H2 trapped-ion quantum computers. Using algorithm-specific error detection, the team reduced the impact of errors on algorithmic performance by up to 65%.
    “Our long-standing partnership with JPMorgan Chase led to this meaningful and noteworthy three-way research experiment that also brought in Argonne. The results could not have been achieved without the unprecedented and world leading quality of our H-Series Quantum Computer, which provides a flexible device for executing error-correcting and error-detecting experiments on top of gate fidelities that are years ahead of other quantum computers,” said Ilyas Khan, founder and chief product officer of Quantinuum.

  • Modular, scalable hardware architecture for a quantum computer

    Quantum computers hold the promise of being able to quickly solve extremely complex problems that might take the world’s most powerful supercomputer decades to crack.
    But achieving that performance involves building a system with millions of interconnected building blocks called qubits. Making and controlling so many qubits in a hardware architecture is an enormous challenge that scientists around the world are striving to meet.
    Toward this goal, researchers at MIT and MITRE have demonstrated a scalable, modular hardware platform that integrates thousands of interconnected qubits onto a customized integrated circuit. This “quantum-system-on-chip” (QSoC) architecture enables the researchers to precisely tune and control a dense array of qubits. Multiple chips could be connected using optical networking to create a large-scale quantum communication network.
    By tuning qubits across 11 frequency channels, this QSoC architecture allows for a new proposed protocol of “entanglement multiplexing” for large-scale quantum computing.
    The team spent years perfecting an intricate process for manufacturing two-dimensional arrays of atom-sized qubit microchiplets and transferring thousands of them onto a carefully prepared complementary metal-oxide semiconductor (CMOS) chip. This transfer can be performed in a single step.
    “We will need a large number of qubits, and great control over them, to really leverage the power of a quantum system and make it useful. We are proposing a brand new architecture and a fabrication technology that can support the scalability requirements of a hardware system for a quantum computer,” says Linsen Li, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this architecture.
    Li’s co-authors include Ruonan Han, an associate professor in EECS, leader of the Terahertz Integrated Electronics Group, and member of the Research Laboratory of Electronics (RLE); senior author Dirk Englund, professor of EECS, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE; as well as others at MIT, Cornell University, Delft University of Technology, the Army Research Laboratory, and the MITRE Corporation. The paper appears in Nature.

    Diamond microchiplets
    While there are many types of qubits, the researchers chose to use diamond color centers because of their scalability advantages. They previously used such qubits to produce integrated quantum chips with photonic circuitry.
    Qubits made from diamond color centers are “artificial atoms” that carry quantum information. Because diamond color centers are solid-state systems, the qubit manufacturing is compatible with modern semiconductor fabrication processes. They are also compact and have relatively long coherence times, which refers to the amount of time a qubit’s state remains stable, due to the clean environment provided by the diamond material.
    In addition, diamond color centers have photonic interfaces, which allow them to be remotely entangled, or connected, with other qubits that aren’t adjacent to them.
    “The conventional assumption in the field is that the inhomogeneity of the diamond color center is a drawback compared to identical quantum memory like ions and neutral atoms. However, we turn this challenge into an advantage by embracing the diversity of the artificial atoms: Each atom has its own spectral frequency. This allows us to communicate with individual atoms by voltage tuning them into resonance with a laser, much like tuning the dial on a tiny radio,” says Englund.
    Doing so is especially difficult because the researchers must achieve it at a large scale to compensate for the qubit inhomogeneity in a large system.

    To communicate across qubits, they need to have multiple such “quantum radios” dialed into the same channel, a condition that becomes near-certain when scaling to thousands of qubits. The researchers surmounted the challenge by integrating a large array of diamond color center qubits onto a CMOS chip that provides the control dials. The chip can be incorporated with built-in digital logic that rapidly and automatically reconfigures the voltages, enabling the qubits to reach full connectivity.
    “This compensates for the inhomogeneous nature of the system. With the CMOS platform, we can quickly and dynamically tune all the qubit frequencies,” Li explains.
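    A rough way to see why scale helps: if each color center ends up, after voltage tuning, in one of the 11 frequency channels, then with thousands of qubits every channel holds many candidates that can be brought into resonance with one another. The simulation below assumes a uniform distribution over channels, an illustrative simplification rather than the device’s measured statistics.

    ```python
    # Illustrative estimate of how many qubits land in each of the 11 tuning
    # channels when thousands of color centers with random spectral offsets are
    # integrated. The uniform channel assignment is an assumption; the real
    # device's inhomogeneous distribution will differ.
    import numpy as np

    rng = np.random.default_rng(1)
    n_qubits, n_channels = 4000, 11

    # Assign each qubit the channel its (tuned) frequency falls into.
    channels = rng.integers(0, n_channels, size=n_qubits)
    counts = np.bincount(channels, minlength=n_channels)

    print("qubits per channel:", counts)
    print("every channel has at least two qubits:", bool(counts.min() >= 2))
    # With ~4,000 qubits spread over 11 channels, each channel holds hundreds of
    # candidates, so finding matched pairs for entanglement is essentially certain.
    ```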
    Lock-and-release fabrication
    To build this QSoC, the researchers developed a fabrication process to transfer diamond color center “microchiplets” onto a CMOS backplane at a large scale.
    They started by fabricating an array of diamond color center microchiplets from a solid block of diamond. They also designed and fabricated nanoscale optical antennas that enable more efficient collection of the photons emitted by these color center qubits in free space.
    Then, they designed and mapped out the chip from the semiconductor foundry. Working in the MIT.nano cleanroom, they post-processed a CMOS chip to add microscale sockets that match up with the diamond microchiplet array.
    They built an in-house transfer setup in the lab and applied a lock-and-release process to integrate the two layers by locking the diamond microchiplets into the sockets on the CMOS chip. Since the diamond microchiplets are weakly bonded to the diamond surface, when they release the bulk diamond horizontally, the microchiplets stay in the sockets.
    “Because we can control the fabrication of both the diamond and the CMOS chip, we can make a complementary pattern. In this way, we can transfer thousands of diamond chiplets into their corresponding sockets all at the same time,” Li says.
    The researchers demonstrated a 500-micron by 500-micron area transfer for an array with 1,024 diamond nanoantennas, but they could use larger diamond arrays and a larger CMOS chip to further scale up the system. In fact, they found that with more qubits, tuning the frequencies actually requires less voltage for this architecture.
    “In this case, if you have more qubits, our architecture will work even better,” Li says.
    The team tested many nanostructures before they determined the ideal microchiplet array for the lock-and-release process. However, making quantum microchiplets is no easy task, and the process took years to perfect.
    “We have iterated and developed the recipe to fabricate these diamond nanostructures in the MIT cleanroom, but it is a very complicated process. It took 19 steps of nanofabrication to get the diamond quantum microchiplets, and the steps were not straightforward,” he adds.
    Alongside their QSoC, the researchers developed an approach to characterize the system and measure its performance on a large scale. To do this, they built a custom cryo-optical metrology setup.
    Using this technique, they demonstrated an entire chip with over 4,000 qubits that could be tuned to the same frequency while maintaining their spin and optical properties. They also built a digital twin simulation that connects the experiment with digitized modeling, which helps them understand the root causes of the observed phenomenon and determine how to efficiently implement the architecture.
    In the future, the researchers could boost the performance of their system by refining the materials they used to make qubits or developing more precise control processes. They could also apply this architecture to other solid-state quantum systems.

  • Bio-inspired cameras and AI help drivers detect pedestrians and obstacles faster

    It’s every driver’s nightmare: a pedestrian stepping out in front of the car seemingly out of nowhere, leaving only a fraction of a second to brake or steer the wheel and avoid the worst. Some cars now have camera systems that can alert the driver or activate emergency braking. But these systems are not yet fast or reliable enough, and they will need to improve dramatically if they are to be used in autonomous vehicles where there is no human behind the wheel.
    Quicker detection using less computational power
    Now, Daniel Gehrig and Davide Scaramuzza from the Department of Informatics at the University of Zurich (UZH) have combined a novel bio-inspired camera with AI to develop a system that can detect obstacles around a car much quicker than current systems and using less computational power. The study is published in this week’s issue of Nature.
    Most current cameras are frame-based, meaning they take snapshots at regular intervals. Those currently used for driver assistance on cars typically capture 30 to 50 frames per second, and an artificial neural network can be trained to recognize objects in their images — pedestrians, bikes, and other cars. “But if something happens during the 20 or 30 milliseconds between two snapshots, the camera may see it too late. The solution would be increasing the frame rate, but that translates into more data that needs to be processed in real time and more computational power,” says Daniel Gehrig, first author of the paper.
    Combining the best of two camera types with AI
    Event cameras are a recent innovation based on a different principle. Instead of a constant frame rate, they have smart pixels that record information every time they detect fast movements. “This way, they have no blind spot between frames, which allows them to detect obstacles more quickly. They are also called neuromorphic cameras because they mimic how human eyes perceive images,” says Davide Scaramuzza, head of the Robotics and Perception Group. But they have their own shortcomings: they can miss things that move slowly and their images are not easily converted into the kind of data that is used to train the AI algorithm.
    Gehrig and Scaramuzza came up with a hybrid system that combines the best of both worlds: It includes a standard camera that collects 20 images per second, a relatively low frame rate compared to the ones currently in use. Its images are processed by an AI system, called a convolutional neural network, that is trained to recognize cars or pedestrians. The data from the event camera is coupled to a different type of AI system, called an asynchronous graph neural network, which is particularly apt for analyzing 3-D data that change over time. Detections from the event camera are used to anticipate detections by the standard camera and also boost its performance. “The result is a visual detector that can detect objects just as quickly as a standard camera taking 5,000 images per second would do but requires the same bandwidth as a standard 50-frame-per-second camera,” says Daniel Gehrig.
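    The fusion idea can be pictured with a small sketch: detections from the 20-frames-per-second standard camera anchor the object list, and event-driven detections arriving in the gaps between frames update or extend it, so nothing is missed in the blind interval. The data structures and the update rule below are illustrative assumptions, not the convolutional or asynchronous graph neural networks from the paper.

    ```python
    # Conceptual sketch of frame/event fusion: a 20 fps frame detector provides
    # anchor detections, and higher-rate event-based detections update them in
    # the 50 ms gaps between frames. Data structures and update rule are
    # illustrative, not the networks described in the paper.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        x: float          # horizontal position in metres (illustrative)
        timestamp: float  # seconds

    def fuse(frame_dets, event_dets):
        """Keep the most recent estimate for each detected object."""
        latest = {d.label: d for d in frame_dets}
        for d in event_dets:                      # events arrive between frames
            if d.label not in latest or d.timestamp > latest[d.label].timestamp:
                latest[d.label] = d
        return list(latest.values())

    # One frame at t = 0.00 s, then event-based updates before the next frame at 0.05 s
    frame_dets = [Detection("pedestrian", x=12.0, timestamp=0.00)]
    event_dets = [Detection("pedestrian", x=11.2, timestamp=0.02),
                  Detection("cyclist", x=20.0, timestamp=0.03)]   # appears mid-gap

    for d in fuse(frame_dets, event_dets):
        print(f"{d.label}: x = {d.x} m at t = {d.timestamp:.2f} s")
    ```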
    One hundred times faster detections using less data
    The team tested their system against the best cameras and visual algorithms currently on the automotive market, finding that it delivers detections one hundred times faster while reducing both the amount of data that must be transmitted between the camera and the onboard computer and the computational power needed to process the images, without affecting accuracy. Crucially, the system can effectively detect cars and pedestrians that enter the field of view between two subsequent frames of the standard camera, providing additional safety for both the driver and traffic participants — which can make a huge difference, especially at high speeds.
    According to the scientists, the method could be made even more powerful in the future by integrating cameras with LiDAR sensors, like the ones used on self-driving cars. “Hybrid systems like this could be crucial to allow autonomous driving, guaranteeing safety without leading to a substantial growth of data and computational power,” says Davide Scaramuzza.

  • AI helps medical professionals read confusing EEGs to save lives

    Researchers at Duke University have developed an assistive machine learning model that greatly improves the ability of medical professionals to read the electroencephalography (EEG) charts of intensive care patients.
    Because EEG readings are the only method for knowing when unconscious patients are in danger of suffering a seizure or are having seizure-like events, the computational tool could help save thousands of lives each year. The results appear online May 23 in the New England Journal of Medicine AI.
    EEGs use small sensors attached to the scalp to measure the brain’s electrical signals, producing a long line of up and down squiggles. When a patient is having a seizure, these lines jump up and down dramatically like a seismograph during an earthquake — a signal that is easy to recognize. But other medically important anomalies called seizure-like events are much more difficult to discern.
    “The brain activity we’re looking at exists along a continuum, where seizures are at one end, but there’s still a lot of events in the middle that can also cause harm and require medication,” said Dr. Brandon Westover, associate professor of neurology at Massachusetts General Hospital and Harvard Medical School. “The EEG patterns caused by those events are more difficult to recognize and categorize confidently, even by highly trained neurologists, which not every medical facility has. But doing so is extremely important to the health outcomes of these patients.”
    To build a tool to help make these determinations, the doctors turned to the laboratory of Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke. Rudin and her colleagues specialize in developing “interpretable” machine learning algorithms. While most machine learning models are “black boxes” that make it impossible for a human to know how they reach their conclusions, interpretable machine learning models essentially must show their work.
    The research group started by gathering EEG samples from over 2,700 patients and having more than 120 experts pick out the relevant features in the graphs, categorizing them as either a seizure, one of four types of seizure-like events or ‘other.’ Each type of event appears in EEG charts as certain shapes or repetitions in the undulating lines. But because these charts are rarely consistent in their appearance, telltale signals can be interrupted by bad data or can mix together to create a confusing chart.
    “There is a ground truth, but it’s difficult to read,” said Stark Guo, a Ph.D. student working in Rudin’s lab. “The inherent ambiguity in many of these charts meant we had to train the model to place its decisions within a continuum rather than well-defined separate bins.”
    When displayed visually, that continuum looks something like a multicolored starfish swimming away from a predator. Each differently colored arm represents one type of seizure-like event the EEG could represent. The closer the algorithm puts a specific chart toward the tip of an arm, the surer it is of its decision, while those placed closer to the central body are less certain.

    Besides this visual classification, the algorithm also points to the patterns in the brainwaves that it used to make its determination and provides three examples of professionally diagnosed charts that it sees as being similar.
    “This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark,” said Alina Barnett, a postdoctoral research associate in the Rudin lab. “Even if they’re not highly trained to read EEGs, they can make a much more educated decision.”
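    The “three similar, professionally diagnosed charts” feature is, at its core, a nearest-neighbour lookup in the model’s feature space. The sketch below shows such a lookup with scikit-learn, using random vectors in place of learned EEG features and placeholder names for the four seizure-like event categories.

    ```python
    # Minimal sketch of the "show three similar, professionally labeled charts"
    # step as a nearest-neighbour lookup in feature space. Random vectors stand
    # in for learned EEG features, and the category names other than "seizure"
    # and "other" are placeholders; this is not the interpretable model itself.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(42)
    n_labeled, n_features = 2700, 64                 # labeled charts, feature size (illustrative)
    labeled_features = rng.standard_normal((n_labeled, n_features))
    labels = rng.choice(["seizure", "event type A", "event type B",
                         "event type C", "event type D", "other"], n_labeled)

    nn = NearestNeighbors(n_neighbors=3).fit(labeled_features)

    new_chart = rng.standard_normal((1, n_features))  # features of the chart under review
    distances, indices = nn.kneighbors(new_chart)

    for d, i in zip(distances[0], indices[0]):
        print(f"similar chart #{i}: label = {labels[i]}, distance = {d:.2f}")
    ```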
    Putting the algorithm to the test, the collaborative team had eight medical professionals with relevant experience categorize 100 EEG samples into the six categories, once with the help of AI and once without. The performance of all of the participants greatly improved, with their overall accuracy rising from 47% to 71%. Their performance also exceeded that of participants who used a similar “black box” algorithm in a previous study.
    “Usually, people think that black box machine learning models are more accurate, but for many important applications, like this one, it’s just not true,” said Rudin. “It’s much easier to troubleshoot models when they are interpretable. And in this case, the interpretable model was actually more accurate. It also provides a bird’s eye view of the types of anomalous electrical signals that occur in the brain, which is really useful for care of critically ill patients.”
    This work was supported by the National Science Foundation (IIS-2147061, HRD-2222336, IIS-2130250, 2014431), the National Institutes of Health (R01NS102190, R01NS102574, R01NS107291, RF1AG064312, RF1NS120947, R01AG073410, R01HL161253, K23NS124656, P20GM130447) and the DHHS LB606 Nebraska Stem Cell Grant.