More stories

  • The digital dark matter clouding AI

    Artificial intelligence has entered our daily lives. First, it was ChatGPT. Now, it’s AI-generated pizza and beer commercials. While we can’t trust AI to be perfect, it turns out that sometimes we can’t trust ourselves with AI either.
    Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo has found that scientists using popular computational tools to interpret AI predictions are picking up too much “noise,” or extra information, when analyzing DNA. And he’s found a way to fix this. Now, with just a couple new lines of code, scientists can get more reliable explanations out of powerful AIs known as deep neural networks. That means they can continue chasing down genuine DNA features. Those features might just signal the next breakthrough in health and medicine. But scientists won’t see the signals if they’re drowned out by too much noise.
    So, what causes the meddlesome noise? The source is mysterious and invisible, a kind of digital “dark matter.” Physicists and astronomers believe most of the universe is filled with dark matter, a material that exerts gravitational effects but that no one has yet seen. Similarly, Koo and his team discovered that the data AI is trained on lacks critical information, leading to significant blind spots. Even worse, those blind spots get factored in when the AI’s predictions of DNA function are interpreted.
    Koo says: “The deep neural network is incorporating this random behavior because it learns a function everywhere. But DNA is only in a small subspace of that. And it introduces a lot of noise. And so we show that this problem actually does introduce a lot of noise across a wide variety of prominent AI models.”
    The digital dark matter is a result of scientists borrowing computational techniques from computer vision AI. DNA data, unlike images, is confined to combinations of just four nucleotide letters: A, C, G, and T. Image data, by contrast, consists of pixel values that can vary continuously over a wide range. In other words, we’re feeding AI an input it doesn’t know how to handle properly.
    By applying Koo’s computational correction, scientists can interpret AI’s DNA analyses more accurately.
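    A minimal sketch of one such correction, assuming a gradient-based attribution map over a one-hot DNA sequence (the function and shapes below are illustrative, not CSHL’s published code): one-hot DNA occupies only a small, constrained subspace of the input space, so the part of the gradient that points off that subspace cannot correspond to a real sequence change and only adds noise, and it can be removed with a per-position mean subtraction.

```python
import numpy as np

def correct_attribution(grad):
    """Remove the off-simplex component of a gradient-based attribution map.

    grad: (L, 4) array of per-position gradients over the four nucleotide
    channels (A, C, G, T) of a one-hot encoded DNA sequence.

    One-hot DNA occupies only a constrained subspace of the input space
    (the four channels at each position sum to one), so the gradient
    component along the all-ones direction never corresponds to a
    realizable sequence change; subtracting the per-position mean
    removes that spurious, noise-only component.
    """
    return grad - grad.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 4))        # stand-in attribution map for a 200-bp sequence
clean = correct_attribution(raw)
print(raw.shape, clean.shape)          # (200, 4) (200, 4)
```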
    Koo says: “We end up seeing sites that become much more crisp and clean, and there is less spurious noise in other regions. One-off nucleotides that are deemed to be very important all of a sudden disappear.”
    Koo believes noise disturbance affects more than AI-powered DNA analyzers. He thinks it’s a widespread affliction among computational processes involving similar types of data. Remember, dark matter is everywhere. Thankfully, Koo’s new tool can help bring scientists out of the darkness and into the light.

  • Shining a light on neuromorphic computing

    AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system — both hardware and software combined — has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.
    Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.
    A new review article published in Nature Photonics, titled “Integrated Optical Memristors,” sheds light on the evolution of this technology — and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.
    “Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.”
    The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state-of-the-art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address.
    “Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood.
    “One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”
    Using Light to Revolutionize Computing
    Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing.
    Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
    Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence.
    “We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor: something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

  • Researchers demonstrate secure information transfer using spatial correlations in quantum entangled beams of light

    Researchers at the University of Oklahoma led a study recently published in Science Advances that proves the principle of using spatial correlations in quantum entangled beams of light to encode information and enable its secure transmission.
    Light can be used to encode information for high-data rate transmission, long-distance communication and more. But for secure communication, encoding large amounts of information in light has additional challenges to ensure the privacy and integrity of the data being transferred.
    Alberto Marino, the Ted S. Webb Presidential Professor in the Homer L. Dodge College of Arts, led the research with OU doctoral student and the study’s first author Gaurav Nirala and co-authors Siva T. Pradyumna and Ashok Kumar. Marino also holds positions with OU’s Center for Quantum Research and Technology and with the Quantum Science Center, Oak Ridge National Laboratory.
    “The idea behind the project is to be able to use the spatial properties of the light to encode large amounts of information, just like how an image contains information. However, to be able to do so in a way that is compatible with quantum networks for secure information transfer. When you consider an image, it can be constructed by combining basic spatial patterns known as modes, and depending on how you combine these modes, you can change the image or encoded information,” Marino said.
    “What we’re doing here that is new and different is that we’re not just using those modes to encode information; we’re using the correlations between them,” he added. “We’re using the additional information on how those modes are linked to encode the information.”
    The researchers used two entangled beams of light, meaning the light waves are interconnected with correlations stronger than anything achievable with classical light, and they remain interconnected no matter how far apart they are.
    “The advantage of the approach we introduce is that you’re not able to recover the encoded information unless you perform joint measurements of the two entangled beams,” Marino said. “This has applications such as secure communication, given that if you were to measure each beam by itself, you would not be able to extract any information. You have to obtain the shared information between both of the beams and combine it in the right way to extract the encoded information.”
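    A purely classical toy analogy of that “joint measurement or nothing” behavior (a sketch with made-up mode amplitudes, not the four-wave-mixing experiment; real entangled beams share quantum correlations rather than the classical common noise used here): each beam alone looks like noise, while only a joint combination of the two recovers the encoded mode amplitudes.

```python
import numpy as np

rng = np.random.default_rng(7)

# "Message" written as amplitudes of a few spatial modes
# (think coefficients of basic patterns that build up an image)
message = np.array([0.8, -0.3, 0.5, 0.1])

# Shared randomness swamps the message in each individual channel;
# only a joint quantity (here, the difference) recovers it.
shared_noise = rng.normal(scale=5.0, size=message.size)
beam_a = shared_noise + message / 2
beam_b = shared_noise - message / 2

print("beam A alone :", np.round(beam_a, 2))           # looks like noise
print("beam B alone :", np.round(beam_b, 2))           # looks like noise
print("joint (A - B):", np.round(beam_a - beam_b, 2))  # recovers the message
```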
    Through a series of images and correlation measurements, the researchers demonstrated that they could successfully encode information in these quantum-entangled beams of light. Only when the two beams were combined in the intended way did the encoded images become recognizable.
    “The experimental result describes how one can transfer spatial patterns from one optical field to two new optical fields generated using a quantum mechanical process called four-wave mixing,” said Nirala. “The encoded spatial pattern can be retrieved solely by joint measurements of generated fields. One interesting aspect of this experiment is that it offers a novel method of encoding information in light by modifying the correlation between various spatial modes without impacting time-correlations.”
    “What this could enable, in principle, is the ability to securely encode and transmit a lot of information using the spatial properties of the light, just like how an image contains a lot more information than just turning the light on and off,” Marino said. “Using the spatial correlations is a new approach to encode information.”
    “Information encoding in the spatial correlations of entangled twin beams” was published in Science Advances on June 2, 2023.

  • Quantum computers are better at guessing, new study demonstrates

    Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and first author Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, achieved a quantum speedup advantage in the context of a “bitstring guessing game.” They managed strings up to 26 bits long, significantly longer than previously possible, by effectively suppressing the errors typically seen at this scale. (A bit is a binary digit that is either zero or one.)
    Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.'” This noise-prone condition of current quantum computing is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speed advantage necessitates noise reduction.
    The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.
    In their study, the researchers replaced words with bitstrings. A classical computer would, on average, require approximately 33 million guesses to correctly identify a 26-bit string. In contrast, a perfectly functioning quantum computer, presenting guesses in quantum superposition, could identify the correct answer in just one guess. This efficiency comes from running a quantum algorithm developed more than 25 years ago by computer scientists Ethan Bernstein and Umesh Vazirani. However, noise can significantly hamper this exponential quantum advantage.
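    For the curious, here is a minimal statevector sketch of the Bernstein-Vazirani algorithm named above (a toy Python simulation, not the noise-suppressed hardware runs reported in the study): a single query to a phase oracle, sandwiched between Hadamard layers, reveals the entire hidden bitstring.

```python
import numpy as np

def bernstein_vazirani(secret: str) -> str:
    """Recover a hidden bitstring with a single call to a phase oracle."""
    n = len(secret)
    s = np.array([int(b) for b in secret])
    dim = 2 ** n

    # H^n |0...0>: uniform superposition over all n-bit strings
    amps = np.full(dim, 1 / np.sqrt(dim))

    # One oracle query: flip the sign of |x> whenever s.x is odd
    for x in range(dim):
        bits = np.array([(x >> (n - 1 - i)) & 1 for i in range(n)])
        if np.dot(s, bits) % 2 == 1:
            amps[x] = -amps[x]

    # Apply H^n again; the amplitude concentrates entirely on |s>
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    out = Hn @ amps

    return format(int(np.argmax(np.abs(out) ** 2)), f"0{n}b")

# A dense statevector only scales to small n; the hardware experiment reached 26 bits.
print(bernstein_vazirani("1011010011"))  # prints the hidden string after one oracle call
```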
    Lidar and Pokharel achieved their quantum speedup by adapting a noise suppression technique called dynamical decoupling. They spent a year experimenting, with Pokharel working as a doctoral candidate under Lidar at USC. Initially, applying dynamical decoupling seemed to degrade performance. However, after numerous refinements, the quantum algorithm functioned as intended. The time to solve problems then grew more slowly than with any classical computer, with the quantum advantage becoming increasingly evident as the problems became more complex.
    Lidar notes that “currently, classical computers can still solve the problem faster in absolute terms.” In other words, the reported advantage is measured in terms of the time-scaling it takes to find the solution, not the absolute time. This means that for sufficiently long bitstrings, the quantum solution will eventually be quicker.
    The study conclusively demonstrates that, with proper error control, quantum computers can execute complete algorithms with better time-to-solution scaling than conventional computers, even in the NISQ era.

  • Unveiling the nanoscale frontier: innovating with nanoporous model electrodes

    Researchers at Tohoku University and Tsinghua University have introduced a next-generation model membrane electrode that promises to revolutionize fundamental electrochemical research. This innovative electrode, fabricated through a meticulous process, showcases an ordered array of hollow giant carbon nanotubes (gCNTs) within a nanoporous membrane, unlocking new possibilities for energy storage and electrochemical studies.
    The key breakthrough lies in the construction of this novel electrode. The researchers developed a uniform carbon coating technique on anodic aluminum oxide (AAO) formed on an aluminum substrate, with the barrier layer eliminated. The resulting conformally carbon-coated layer exhibits vertically aligned gCNTs with nanopores ranging from 10 to 200 nm in diameter and 2 μm to 90 μm in length, a size range that accommodates everything from small electrolyte molecules to large biological species such as enzymes and exosomes. Unlike traditional composite electrodes, this self-standing model electrode eliminates inter-particle contact, ensuring minimal contact resistance — something essential for interpreting the corresponding electrochemical behaviors.
    “The potential of this model electrode is immense,” stated Dr. Zheng-Ze Pan, one of the corresponding authors of the study. “By employing the model membrane electrode with its extensive range of nanopore dimensions, we can attain profound insights into the intricate electrochemical processes transpiring within porous carbon electrodes, along with their inherent correlations to the nanopore dimensions.”
    Moreover, the gCNTs are composed of low-crystalline stacked graphene sheets, offering unparalleled access to the electrical conductivity within low-crystalline carbon walls. Through experimental measurements and the utilization of an in-house temperature-programmed desorption system, the researchers constructed an atomic-scale structural model of the low-crystalline carbon walls, enabling detailed theoretical simulations. Dr. Alex Aziz, who carried out the simulation part for this research, points out, “Our advanced simulations provide a unique lens to estimate electron transitions within amorphous carbons, shedding light on the intricate mechanisms governing their electrical behavior.”
    This project was led by Prof. Dr. Hirotomo Nishihara, the Principal Investigator of the Device/System Group at the Advanced Institute for Materials Research (WPI-AIMR). The findings are detailed in Advanced Functional Materials, one of the top journals in materials science.
    Ultimately, the study represents a significant step forward in our understanding of amorphous porous carbon materials and their applications in probing various electrochemical systems.

  • Finally solved! The great mystery of quantized vortex motion

    Liquid helium-4, which is in a superfluid state at cryogenic temperatures close to absolute zero (-273°C), hosts a special kind of vortex, called a quantized vortex, that originates from quantum mechanical effects. At relatively high temperatures, a normal fluid coexists with the superfluid helium, and when a quantized vortex is in motion, mutual friction arises between it and the normal fluid. However, it is difficult to explain precisely how a quantized vortex interacts with the normal fluid in motion. Although several theoretical models have been proposed, it has not been clear which model is correct.

    A research group led by Professor Makoto Tsubota and Specially Appointed Assistant Professor Satoshi Yui, respectively of the Graduate School of Science and the Nambu Yoichiro Institute of Theoretical and Experimental Physics at Osaka Metropolitan University, in cooperation with colleagues from Florida State University and Keio University, numerically investigated the interaction between a quantized vortex and the normal fluid. By comparing simulations against the experimental results, the researchers identified the most consistent of several theoretical models: one that accounts for changes in the normal fluid and incorporates a more theoretically accurate mutual friction.
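    For reference, a standard textbook form of this mutual friction is the Schwarz vortex filament equation; this is a sketch of the conventional model with the usual temperature-dependent coefficients, not necessarily the refined treatment favored by the study. It gives the velocity of a vortex line as

$$
\mathbf{v}_L \;=\; \mathbf{v}_s \;+\; \alpha\,\mathbf{s}'\times(\mathbf{v}_n-\mathbf{v}_s)\;-\;\alpha'\,\mathbf{s}'\times\bigl[\mathbf{s}'\times(\mathbf{v}_n-\mathbf{v}_s)\bigr],
$$

    where \(\mathbf{v}_s\) and \(\mathbf{v}_n\) are the local superfluid and normal-fluid velocities, \(\mathbf{s}'\) is the unit tangent along the vortex line, and \(\alpha\), \(\alpha'\) are the mutual friction coefficients. The models compared in the study differ in how the normal fluid itself responds to the vortex, which this simple form does not capture.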
    “The subject of this study, the interaction between a quantized vortex and a normal-fluid, has been a great mystery since I began my research in this field 40 years ago,” stated Professor Tsubota. “Computational advances have made it possible to handle this problem, and the brilliant visualization experiment by our collaborators at Florida State University has led to a breakthrough. As is often the case in science, subsequent developments in technology have made it possible to elucidate, and this study is a good example of this.”
    Their findings were published in Nature Communications.

  • Tiny video capsule shows promise as an alternative to endoscopy

    While ingestible video capsule endoscopes have been around for many years, the capsules have been limited by the fact that they could not be controlled by physicians. They moved passively, driven only by gravity and the natural movement of the body. Now, according to a first-of-its-kind research study at George Washington University, physicians can remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas. The new technology uses an external magnet and hand-held, video-game-style joysticks to move the capsule in three dimensions in the stomach, bringing it closer to the capabilities of a traditional tube-based endoscopy.
    “A traditional endoscopy is an invasive procedure for patients, not to mention it is costly due to the need for anesthesia and time off work,” Andrew Meltzer, a professor of Emergency Medicine at the GW School of Medicine & Health Sciences, said. “If larger studies can prove this method is sufficiently sensitive to detect high-risk lesions, magnetically controlled capsules could be used as a quick and easy way to screen for health problems in the upper GI tract such as ulcers or stomach cancer.”
    More than 7 million traditional endoscopies of the stomach and upper part of the intestine are performed every year in the United States to help doctors investigate and treat stomach pain, nausea, bleeding and other symptoms of disease, including cancer. Despite the benefits of traditional endoscopies, studies suggest some patients have trouble accessing the procedure.
    In fact, Meltzer got interested in the magnetically controlled capsule endoscopy after seeing patients in the emergency room with stomach pain or suspected upper GI bleeding who faced barriers to getting a traditional endoscopy as an outpatient.
    “I would have patients who came to the ER with concerns for a bleeding ulcer and, even if they were clinically stable, I would have no way to evaluate them without admitting them to the hospital for an endoscopy. We could not do an endoscopy in the ER and many patients faced unacceptable barriers to getting an outpatient endoscopy, a crucial diagnostic tool for preventing life-threatening hemorrhage,” Meltzer said. “To help address this problem, I started looking for less invasive ways to visualize the upper gastrointestinal tract for patients with suspected internal bleeding.”
    The study is the first to test magnetically controlled capsule endoscopy in the United States. For patients who come to the ER or a doctor’s office with severe stomach pain, the ability to swallow a capsule and get a diagnosis on the spot — without a second appointment for a traditional endoscopy — is a real plus, not to mention potentially life-saving, says Meltzer. An external magnet allows the capsule to be painlessly driven to visualize all anatomic areas of the stomach and record video and photograph any possible bleeding, inflammatory or malignant lesions.
    While using the joystick requires additional time and training, software is being developed that will use artificial intelligence to self-drive the capsule to all parts of the stomach with the push of a button and record any potentially risky abnormalities. That would make it easier to use the system as a diagnostic tool or screening test. In addition, the videos can easily be transmitted for off-site review if a gastroenterologist is not on-site to over-read the images.
    Meltzer and colleagues conducted a study of 40 patients at a physician office building using the magnetically controlled capsule endoscopy. They found that the doctor could direct the capsule to all major parts of the stomach with a 95 percent rate of visualization. Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off-site.
    To see how the new method compared with a traditional endoscopy, participants in the study also received a follow up endoscopy. No high-risk lesions were missed with the new method and 80 percent of the patients preferred the capsule method to the traditional endoscopy. The team found no safety problems associated with the new method.
    Yet, Meltzer cautions that the study is a pilot and a much bigger trial with more patients must be conducted to make sure the method does not miss important lesions and can be used in place of an endoscopy. A major limitation of the capsule includes the inability to perform biopsies of lesions that are detected.
    The study, “Magnetically Controlled Capsule for Assessment of the Gastric Mucosa in Symptomatic Patients (MAGNET): A Prospective, Single-Arm, Single-Center, Comparative Study,” was published in iGIE, the open-access, online journal of the American Society for Gastrointestinal Endoscopy.
    The medical technology company AnX Robotica funded the research and is the creator of the capsule endoscopy system used in the study, called NaviCam®.

  • New method improves efficiency of ‘vision transformer’ AI systems

    Vision transformers (ViTs) are powerful artificial intelligence (AI) technologies that can identify or categorize objects in images — however, there are significant challenges related to both computing power requirements and decision-making transparency. Researchers have now developed a new methodology that addresses both challenges, while also improving the ViT’s ability to identify, classify and segment objects in images.
    Transformers are among the most powerful existing AI models. For example, ChatGPT is an AI that uses transformer architecture, but the inputs used to train it are language. ViTs are transformer-based AI that are trained using visual inputs. For example, ViTs could be used to detect and categorize objects in an image, such as identifying all of the cars or all of the pedestrians in an image.
    However, ViTs face two challenges.
    First, transformer models are very complex. Relative to the amount of data being plugged into the AI, transformer models require a significant amount of computational power and use a large amount of memory. This is particularly problematic for ViTs, because images contain so much data.
    Second, it is difficult for users to understand exactly how ViTs make decisions. For example, you might have trained a ViT to identify dogs in an image. But it’s not entirely clear how the ViT is determining what is a dog and what is not. Depending on the application, understanding the ViT’s decision-making process, also known as its model interpretability, can be very important.
    The new ViT methodology, called “Patch-to-Cluster attention” (PaCa), addresses both challenges.

    “We address the challenge related to computational and memory demands by using clustering techniques, which allow the transformer architecture to better identify and focus on objects in an image,” says Tianfu Wu, corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Clustering is when the AI lumps sections of the image together, based on similarities it finds in the image data. This significantly reduces computational demands on the system. Before clustering, computational demands for a ViT are quadratic. For example, if the system breaks an image down into 100 smaller units, it would need to compare all 100 units to each other — which would be 10,000 complex functions.
    “By clustering, we’re able to make this a linear process, where each smaller unit only needs to be compared to a predetermined number of clusters. Let’s say you tell the system to establish 10 clusters; that would only be 1,000 complex functions,” Wu says.
    “Clustering also allows us to address model interpretability, because we can look at how it created the clusters in the first place. What features did it decide were important when lumping these sections of data together? And because the AI is only creating a small number of clusters, we can look at those pretty easily.”
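    A minimal numpy sketch of that patch-to-cluster idea (illustrative only, not the authors’ PaCa code; a random soft assignment stands in for the learned clustering module): each of the N patches attends to M cluster tokens instead of to all N patches, so the attention matrix is N×M rather than N×N.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def patch_to_cluster_attention(patches, n_clusters=10):
    """Toy patch-to-cluster attention.

    patches: (N, d) patch embeddings. Full self-attention would build an
    (N, N) score matrix; here patches are summarized into n_clusters
    cluster tokens and each patch attends only to those, giving (N, M).
    """
    N, d = patches.shape
    # Soft-assign patches to clusters (a random projection stands in for
    # the learned clustering module in the real architecture)
    assign = softmax(patches @ rng.normal(size=(d, n_clusters)), axis=1)      # (N, M)
    clusters = assign.T @ patches / (assign.sum(axis=0)[:, None] + 1e-9)      # (M, d)
    # Attention from patches (queries) to cluster tokens (keys/values)
    attn = softmax(patches @ clusters.T / np.sqrt(d), axis=1)                 # (N, M)
    return attn @ clusters                                                    # (N, d)

x = rng.normal(size=(100, 64))       # 100 patches, 64-dim embeddings
out = patch_to_cluster_attention(x)  # 100 x 10 = 1,000 scores instead of 100 x 100 = 10,000
print(out.shape)                     # (100, 64)
```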
    The researchers did comprehensive testing of PaCa, comparing it to two state-of-the-art ViTs called SWin and PVT.
    “We found that PaCa outperformed SWin and PVT in every way,” Wu says. “PaCa was better at classifying objects in images, better at identifying objects in images, and better at segmentation — essentially outlining the boundaries of objects in images. It was also more efficient, meaning that it was able to perform those tasks more quickly than the other ViTs.
    “The next step for us is to scale up PaCa by training on larger, foundational data sets.”
    The paper, “PaCa-ViT: Learning Patch-to-Cluster Attention in Vision Transformers,” will be presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, being held June 18-22 in Vancouver, Canada. First author of the paper is Ryan Grainger, a Ph.D. student at NC State. The paper was co-authored by Thomas Paniagua, a Ph.D. student at NC State; Xi Song, an independent researcher; and Naresh Cuntoor and Mun Wai Lee of BlueHalo.
    The work was done with support from the Office of the Director of National Intelligence, under contract number 2021-21040700003; the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451.