More stories

  • Progress in algorithms makes small, noisy quantum computers viable

    As reported in a new article in Nature Reviews Physics, instead of waiting for fully mature quantum computers to emerge, Los Alamos National Laboratory and other leading institutions have developed hybrid classical/quantum algorithms to extract the most performance — and potentially quantum advantage — from today’s noisy, error-prone hardware. Known as variational quantum algorithms, these methods use the quantum hardware to manipulate quantum systems while shifting much of the workload to classical computers, letting them do what they currently do best: solve optimization problems.
    “Quantum computers have the promise to outperform classical computers for certain tasks, but on currently available quantum hardware they can’t run long algorithms. They have too much noise as they interact with their environment, which corrupts the information being processed,” said Marco Cerezo, a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos and a lead author of the paper. “With variational quantum algorithms, we get the best of both worlds. We can harness the power of quantum computers for tasks that classical computers can’t do easily, then use classical computers to complement the computational power of quantum devices.”
    Current noisy, intermediate-scale quantum computers have between 50 and 100 qubits, lose their “quantumness” quickly, and lack error correction, which requires more qubits. Since the late 1990s, however, theoreticians have been developing algorithms designed to run on an idealized large, error-corrected, fault-tolerant quantum computer.
    “We can’t implement these algorithms yet because they give nonsense results or they require too many qubits. So people realized we needed an approach that adapts to the constraints of the hardware we have — an optimization problem,” said Patrick Coles, a theoretical physicist developing algorithms at Los Alamos and the senior lead author of the paper.
    “We found we could turn all the problems of interest into optimization problems, potentially with quantum advantage, meaning the quantum computer beats a classical computer at the task,” Coles said. Those problems include simulations for material science and quantum chemistry, factoring numbers, big-data analysis, and virtually every application that has been proposed for quantum computers.
    The algorithms are called variational because the optimization process varies the algorithm on the fly, as a kind of machine learning. It changes parameters and logic gates to minimize a cost function, which is a mathematical expression that measures how well the algorithm has performed the task. The problem is solved when the cost function reaches its lowest possible value.
    In an iterative feedback loop, the quantum computer estimates the cost function and passes that result back to the classical computer. The classical computer then adjusts the input parameters and sends them back to the quantum computer, which runs the optimization again.
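    In code, the hybrid loop reads roughly as follows. This is a minimal sketch in Python, not the algorithms from the paper: the “quantum” step is stubbed out with a noisy trigonometric cost landscape, and the two-parameter ansatz, step sizes, and finite-difference optimizer are illustrative assumptions.

        import numpy as np

        def estimate_cost(params, shots=1000):
            """Stand-in for the quantum step: in a real variational quantum algorithm,
            this expectation value would be estimated by running a parameterized
            circuit on quantum hardware. A noisy trigonometric landscape plays that
            role here (global minimum value 0, e.g. at params = (0, 0))."""
            ideal = 1.0 - np.cos(params[0]) * np.cos(params[1])
            noise = np.random.normal(0.0, 1.0 / np.sqrt(shots))  # finite-shot noise
            return ideal + noise

        def classical_update(params, lr=0.2, eps=0.2):
            """Classical step: finite-difference gradient descent on the noisy cost
            estimates returned by the (simulated) quantum device."""
            grad = np.zeros_like(params)
            for i in range(len(params)):
                shifted = params.copy()
                shifted[i] += eps
                grad[i] = (estimate_cost(shifted) - estimate_cost(params)) / eps
            return params - lr * grad

        params = np.array([2.0, -1.5])        # initial ansatz parameters
        for step in range(200):               # hybrid loop: quantum estimate, classical update
            params = classical_update(params)

        print("optimized parameters:", params)
        print("final cost estimate:", estimate_cost(params, shots=100_000))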
    The review article is meant to be a comprehensive introduction and pedagogical reference for researchers entering this nascent field. In it, the authors discuss the applications of these algorithms and how they work, and cover the challenges and pitfalls and how to address them. Finally, the article looks to the future, considering the best opportunities for achieving quantum advantage on the computers that will be available in the next couple of years.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Best of both worlds — Combining classical and quantum systems to meet supercomputing demands

    One of the most interesting phenomena in quantum mechanics is “quantum entanglement.” This phenomenon describes how certain particles are inextricably linked, such that their states can only be described with reference to each other. This particle interaction also forms the basis of quantum computing. This is why, in recent years, physicists have looked for techniques to generate entanglement. However, these techniques face a number of engineering hurdles, including limitations in creating large numbers of “qubits” (quantum bits, the basic unit of quantum information), the need to maintain extremely low temperatures (1 K), and the use of ultrapure materials. Surfaces or interfaces are crucial to the formation of quantum entanglement. Unfortunately, electrons confined to surfaces are prone to “decoherence,” a condition in which there is no defined phase relationship between the two distinct states. Thus, to obtain stable, coherent qubits, the spin states of surface atoms (or equivalently, protons) must be determined.
    Recently, a team of scientists in Japan, including Prof. Takahiro Matsumoto from Nagoya City University, Prof. Hidehiko Sugimoto from Chuo University, Dr. Takashi Ohhara from the Japan Atomic Energy Agency, and Dr. Susumu Ikeda from High Energy Accelerator Research Organization, recognized the need for stable qubits. By looking at the surface spin states, the scientists discovered an entangled pair of protons on the surface of a silicon nanocrystal.
    Prof. Matsumoto, the lead scientist, outlines the significance of their study, “Proton entanglement has been previously observed in molecular hydrogen and plays an important role in a variety of scientific disciplines. However, the entangled state was found in gas or liquid phases only. Now, we have detected quantum entanglement on a solid surface, which can lay the groundwork for future quantum technologies.” Their pioneering study was published in a recent issue of Physical Review B.
    The scientists studied the spin states using a technique known as “inelastic neutron scattering spectroscopy” to determine the nature of surface vibrations. By modeling these surface atoms as “harmonic oscillators,” they showed the antisymmetric nature of the protons. Since the protons were identical (or indistinguishable), the oscillator model restricted their possible spin states, resulting in strong entanglement. Compared to the proton entanglement in molecular hydrogen, this entanglement exhibited a much larger energy difference between its states, ensuring its longevity and stability. Additionally, the scientists theoretically demonstrated a cascade transition of terahertz entangled photon pairs using the proton entanglement.
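    The textbook argument linking indistinguishability to entanglement can be written compactly (a general illustration, not the specific derivation in the Physical Review B paper): because the two protons are identical fermions, their total wavefunction must be antisymmetric, so a symmetric vibrational (spatial) state forces the spin part into the maximally entangled singlet,

        \Psi_{\text{total}} = \psi^{\text{sym}}_{\text{space}}(x_1, x_2)\,\chi^{\text{antisym}}_{\text{spin}},
        \qquad
        \chi^{\text{antisym}}_{\text{spin}} = \frac{1}{\sqrt{2}}\left( |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle \right).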
    The confluence of proton qubits with contemporary silicon technology could result in an organic union of classical and quantum computing platforms, enabling a much larger number of qubits (10⁶) than currently available (10²), and ultra-fast processing for new supercomputing applications. “Quantum computers can handle intricate problems, such as integer factorization and the ‘traveling salesman problem,’ which are virtually impossible to solve with traditional supercomputers. This could be a game-changer in quantum computing with regard to storing, processing, and transferring data, potentially even leading to a paradigm shift in pharmaceuticals, data security, and many other areas,” concludes an optimistic Prof. Matsumoto.
    We could be on the verge of witnessing a technological revolution in quantum computing!
    Story Source:
    Materials provided by Nagoya City University. Note: Content may be edited for style and length.

  • A mobility-based approach to optimize pandemic lockdown strategies

    A new strategy for modeling the spread of COVID-19 incorporates smartphone-captured data on people’s movements and shows promise for aiding development of optimal lockdown policies. Ritabrata Dutta of Warwick University, U.K., and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Evidence shows that lockdowns are effective in mitigating the spread of COVID-19. However, they come at a high economic cost, and in practice, not everybody follows government guidance on lockdowns. Thus, Dutta and colleagues propose, an optimal lockdown strategy would strike a balance between controlling the ongoing COVID-19 pandemic and minimizing the economic costs of lockdowns.
    To help guide such a strategy, the researchers developed new mathematical models that simulate the spread of COVID-19. The models focus on England and France and — using a statistical approach known as approximate Bayesian computation — they incorporate both public health data and data on changes in people’s movements, as captured by Google via Android devices; this mobility data serves as a measure of the effectiveness of lockdown policies.
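    The approximate Bayesian computation step can be illustrated with a minimal rejection-sampling sketch in Python. This is a toy SIR-style compartmental model with synthetic “observed” data; the authors’ actual models, priors, distance functions, and mobility-informed components are far richer.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_epidemic(beta, gamma=0.1, days=60, n=1_000_000, i0=100):
            """Toy deterministic SIR model; returns daily new infections."""
            s, i, new_cases = n - i0, i0, []
            for _ in range(days):
                infections = beta * s * i / n
                recoveries = gamma * i
                s -= infections
                i += infections - recoveries
                new_cases.append(infections)
            return np.array(new_cases)

        # Pretend these are observed daily case counts (synthetic placeholder data).
        observed = simulate_epidemic(beta=0.25) * rng.normal(1.0, 0.05, 60)

        def abc_rejection(n_samples=5000, tolerance=0.2):
            """Approximate Bayesian computation by rejection: keep parameter draws
            whose simulated output lies close to the observations."""
            accepted = []
            for _ in range(n_samples):
                beta = rng.uniform(0.05, 0.6)      # prior over the transmission rate
                simulated = simulate_epidemic(beta)
                distance = np.linalg.norm(simulated - observed) / np.linalg.norm(observed)
                if distance < tolerance:
                    accepted.append(beta)
            return np.array(accepted)

        posterior = abc_rejection()
        print(f"posterior mean beta: {posterior.mean():.3f} from {len(posterior)} accepted draws")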
    Then, the researchers demonstrated how their models could be applied to design optimal lockdown strategies for England and France using a mathematical technique called optimal control. They showed that it is possible to design effective lockdown protocols that allow partial reopening of workplaces and schools, while taking into account both public health costs and economic costs. The models can be updated in real time, and they can be adapted to any country for which reliable public health and Google mobility data are available.
    “Our work opens the door to a larger integration between epidemiological models and real-world data to, through the use of supercomputers, determine best public policies to mitigate the effects of a pandemic,” Dutta says. “In a not-so-distant future, policy makers may be able to express certain prioritization criteria, and a computational engine, with an extensive use of different datasets, could determine the best course of action.”
    Next, the researchers plan to refine their country-wide models to work at smaller scales; specifically, for each of the 348 local authority districts of the U.K.
    The researchers add, “The integration of big data, epidemiological models and supercomputers can help us design an optimal lockdown strategy in real time, while balancing both public health and economic costs.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Is your mobile provider tracking your location? New technology could stop it

    Right now, there is a good chance your phone is tracking your location — even with GPS services turned off. That’s because, to receive service, our phones reveal personal identifiers to cell towers owned by major network operators. This has led to vast and largely unregulated data-harvesting industries based around selling users’ location data to third parties without consent.
    For the first time, researchers at the University of Southern California (USC) Viterbi School of Engineering and Princeton University have found a way to stop this privacy breach using existing cellular networks. The new system, presented at the USENIX Security conference on Aug. 11, protects users’ mobile privacy while providing normal mobile connectivity.
    The new architecture, called “Pretty Good Phone Privacy” or PGPP, decouples phone connectivity from authentication and billing by anonymizing personal identifiers sent to cell towers. The software-based solution, described by the researchers as an “architecture change,” does not alter cellular network hardware.
    “We’ve unwittingly accepted that our phones are tracking devices in disguise, but until now we’ve had no other option — using mobile devices meant accepting this tracking,” said study co-author Barath Raghavan, an assistant professor in computer science at USC. “We figured out how to decouple authentication from connectivity and ensure privacy while maintaining seamless connectivity, and it is all done in software.”
    Decoupling authentication and phone connectivity
    Currently, for your phone to work, the network has to know your location and identify you as a paying customer. As such, both your identity and location data are tracked by the device at all times. Data brokers and major operators have taken advantage of this system to profit from revealing sensitive user data — to date, in the United States, there are no federal laws restricting the use of location data.
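    One way to picture the decoupling is with signed, identity-free connectivity tokens: the billing side checks that someone has paid and hands out opaque tokens, while the network side only checks that a token carries a valid provider signature. The Python sketch below (using the third-party cryptography package) is a toy illustration of that idea only, not PGPP’s actual protocol; the real system, its token issuance, and its integration with existing cellular signaling are specified in the researchers’ USENIX paper, and a deployable design would also need blinded issuance and stronger replay protection so tokens cannot be linked back to the purchaser.

        import os
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Billing provider: knows who paid, signs opaque random tokens for that customer.
        provider_key = Ed25519PrivateKey.generate()
        provider_public = provider_key.public_key()

        def issue_tokens(count=5):
            """Issued after a billing check; the tokens themselves carry no identity."""
            tokens = [os.urandom(16) for _ in range(count)]
            return [(t, provider_key.sign(t)) for t in tokens]

        # Network side: verifies that a token was signed by the provider, but learns
        # nothing about which subscriber presented it.
        spent = set()

        def grant_connectivity(token, signature):
            try:
                provider_public.verify(signature, token)
            except InvalidSignature:
                return False
            if token in spent:                 # naive double-spend check
                return False
            spent.add(token)
            return True

        wallet = issue_tokens()
        token, sig = wallet[0]
        print("connectivity granted:", grant_connectivity(token, sig))   # True
        print("replayed token granted:", grant_connectivity(token, sig)) # False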

  • Toward next-generation brain-computer interface systems

    Brain-computer interfaces (BCIs) are emerging assistive devices that may one day help people with brain or spinal injuries to move or communicate. BCI systems depend on implantable sensors that record electrical signals in the brain and use those signals to drive external devices like computers or robotic prosthetics.
    Most current BCI systems use one or two sensors to sample up to a few hundred neurons, but neuroscientists are interested in systems that are able to gather data from much larger groups of brain cells.
    Now, a team of researchers has taken a key step toward a new concept for a future BCI system — one that employs a coordinated network of independent, wireless microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. The sensors, dubbed “neurograins,” independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes the signals.
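    As a rough software analogy for the hub’s coordination job (purely illustrative; the actual neurograin network and its wireless protocol are hardware systems described in the Nature Electronics paper), one can picture the hub merging many independent, time-stamped event streams into a single ordered record:

        import heapq
        import random

        random.seed(1)

        def neurograin_stream(sensor_id, n_events=5):
            """Toy stand-in for one wireless microsensor: yields (time_ms, sensor_id)
            spike events in increasing time order."""
            t = 0.0
            for _ in range(n_events):
                t += random.expovariate(0.02)   # random inter-spike interval, in ms
                yield (round(t, 2), sensor_id)

        # The "hub" merges ~50 independent streams into one time-ordered record.
        streams = [neurograin_stream(i) for i in range(48)]
        merged = list(heapq.merge(*streams))

        print("total events:", len(merged))
        print("first five (time_ms, sensor_id):", merged[:5])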
    In a study published on August 12 in Nature Electronics, the research team demonstrated the use of nearly 50 such autonomous neurograins to record neural activity in a rodent.
    The results, the researchers say, are a step toward a system that could one day enable the recording of brain signals in unprecedented detail, leading to new insights into how the brain works and new therapies for people with brain or spinal injuries.
    “One of the big challenges in the field of brain-computer interfaces is engineering ways of probing as many points in the brain as possible,” said Arto Nurmikko, a professor in Brown University’s School of Engineering and the study’s senior author. “Up to now, most BCIs have been monolithic devices — a bit like little beds of needles. Our team’s idea was to break up that monolith into tiny sensors that could be distributed across the cerebral cortex. That’s what we’ve been able to demonstrate here.”
    The team, which includes experts from Brown, Baylor University, University of California at San Diego and Qualcomm, began the work of developing the system about four years ago. The challenge was two-fold, said Nurmikko, who is affiliated with Brown’s Carney Institute for Brain Science. The first part required shrinking the complex electronics involved in detecting, amplifying and transmitting neural signals into the tiny silicon neurograin chips. The team first designed and simulated the electronics on a computer, and went through several fabrication iterations to develop operational chips.

  • New study shows the potential of DNA-based data structures

    Newcastle University research offers important insights into how we could turn DNA into a green-by-design data structure that organises data the way conventional computers do.
    The team, led by researchers from Newcastle University’s School of Computing, created new dynamic DNA data structures able to store and recall information in an ordered way from DNA molecules. They also analysed how these structures can be interfaced with external nucleic acid computing circuits.
    Publishing their findings in the journal Nature Communications, the scientists present an in vitro implementation of a stack data structure using DNA polymers. Developed as a DNA chemical reaction system, the stack system is able to record combinations of two different DNA signals (0s and 1s), release the signals into solution in reverse order, and then re-record.
    The stack, a linear data structure in which operations are performed in a particular order, stores and retrieves information (DNA signal strands) in last-in first-out order by building and truncating DNA “polymers” of single-stranded DNA (ssDNA). Such a stack data structure may eventually be embedded in an in vivo context to store messenger RNAs and reverse the temporal order of a translational response, among other applications.
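    For readers more familiar with software than with chemistry, the behaviour being realised in DNA is that of an ordinary stack. A plain Python reference (nothing DNA-specific; the class and signal names are assumptions for illustration) makes the last-in first-out ordering concrete:

        class SignalStack:
            """Software reference for last-in first-out storage of two signal types,
            mirroring the record (push) and release (pop) behaviour of the DNA
            polymer stack."""

            def __init__(self):
                self._polymer = []              # analogous to the growing ssDNA polymer

            def record(self, signal):
                if signal not in ("0", "1"):
                    raise ValueError("only two signal types are supported")
                self._polymer.append(signal)

            def release(self):
                return self._polymer.pop()      # most recent signal comes out first

        stack = SignalStack()
        for s in "0011":                        # record a combination of 0s and 1s
            stack.record(s)

        released = [stack.release() for _ in range(4)]
        print("released in reverse order:", "".join(released))   # prints 1100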
    Professor Natalio Krasnogor, of Newcastle University’s School of Computing, who led the study, explains: “Our civilisation is data hungry and all that information processing thirst is having a strong environmental impact. For example, digital technologies pollute more than the aviation industry, the top 7000 data centers in the world use around 2% of global electricity, and we have all heard about the environmental footprint of some cryptocurrencies.
    “In recent years DNA has been shown to be an excellent substrate for storing data, and DNA is a renewable, sustainable resource. At Newcastle we are passionate about sustainability and thus we wanted to start taking baby steps into green-by-design molecular information processing in DNA and go beyond simply storing data. We wanted to be able to organise it. In computer science, data structures are at the core of all the algorithms that run our modern economy; this is because you need a unified and standardised way to operate on the data that is stored. This is what data structures enable. We are the first to demonstrate a molecular realisation of this crucial component of the modern information age.”
    Study co-author Dr Annunziata Lopiccolo, Research Associate at Newcastle University’s Centre for Synthetic Biology and the Bioeconomy, added: “If we start thinking about data storage, our minds immediately picture electronic microchips, USB drives and many other existing technologies. But over the last few years, biologists have challenged the data storage media sector by demonstrating that DNA, as a highly stable and resilient medium, can function as quaternary data storage, rather than binary. In our work we wanted to demonstrate that it is possible to use the quaternary code to craft readable inputs and outputs in the form of programmable signals, with a linear and organised data structure. Our work expands knowledge in the context of information processing at the nanoscale level.”
    Study co-author Dr Harold Fellermann, Lecturer at Newcastle University School of Computing added: “Our biomolecular data structure, where both data as well as operations are represented by short pieces of DNA, has been designed with biological implementations in mind. In principle, we can imagine such a device to be used inside a living cell, bacteria for example. This makes it possible to bring computational power to domains that are currently hard to access with traditional silicon-based, electronic computing. In the future, such data structures might be used in environmental monitoring, bioremediation, green production, and even personalised nanomedicine.”
    Study co-author, Dr Benjamin Shirt-Ediss, Research Associate, Newcastle University School of Computing, said: “It was really interesting to develop a computational model of the DNA chemistry and to see good agreement with experimental results coming out of the lab. The computational model allowed us to really get a handle on the performance of the DNA stack data structure — we could systematically explore its absolute limits and suggest future avenues for improvement.”
    The experimental DNA stack system constitutes a proof of principle that a polymerising DNA chemistry can be used as a dynamic data structure to store two types of DNA signal in last-in first-out order. While more research is needed to determine the best possible way to archive and access DNA-based data, the study highlights the enormous potential of this technology and how it could help tackle rapidly growing data demands.
    Story Source:
    Materials provided by Newcastle University. Note: Content may be edited for style and length.

  • Impenetrable optical OTP security platform

    An anticounterfeiting smart label and security platform that makes forgery fundamentally impossible has been proposed. The device accomplishes this by controlling multiple properties of light, including color, phase, and polarization, in a single optical device.
    A POSTECH research team — led by Professor Junsuk Rho of the departments of mechanical engineering and chemical engineering, Dr. Inki Kim, and Ph.D. candidates Jaehyuck Jang and Gyeongtae Kim — has developed an encrypted hologram printing platform that works in both natural light and laser light using a metasurface, an ultra-thin optical material one-thousandth the thickness of a strand of human hair. The label printed with this technology can produce a holographic color image that retains a specific polarization. The researchers have labeled this a “vectorial hologram.” The findings from this study were recently published in Nature Communications.
    The metasurface devices reported so far can modulate only one property of light, such as color, phase, or polarization. To overcome this limitation, the researchers have devised a pixelated bifunctional metasurface by grouping multiple metasurfaces.
    In the unit structure that forms the basis of the metasurface, the research team designed a device that uses the structure’s size to control color, its orientation angles to control phase, and the relative angle difference and ratio within each pixelized group — which generates left-handed and right-handed circularly polarized light — to express all polarizations of light. To freely modulate these various degrees of freedom of light while maximizing efficiency, the metasurface plays the roles of a resonator and an optical waveguide at the same time.
    The vectorial hologram label designed in this manner displays QR codes containing a variety of colors to the naked eye or when scanned with a camera. Simultaneously, under laser illumination, polarization-encoded 3D holographic images are rendered. This holographic image has a special polarization state for each part of the image, which sets it apart from previously reported holograms.
    The vectorial holographic color printing technology developed in this research is an optical approach to the two-level, encrypted one-time password (OTP) security mechanism used to verify users in current banking systems. First, when a user scans the QR code of the meta-optical device with a smartphone, a first password composed of random numbers is generated. When this password is applied to the meta-optical device as a voltage value, the second password is displayed as an encrypted holographic image.
    “This vectorial holographic color printing platform is more advanced than the metasurface devices reported so far, and has demonstrated that various degrees of freedom of light can be modulated with one optical device,” explained Professor Junsuk Rho. “It is a highly perfected optical OTP device which shows promise to be an original optical encryption technology applicable in designing and analyzing meta-atoms.”
    The research team has been conducting leading research on metasurface optical devices for the past five years, and the device developed in this study shows strong potential for commercialization in areas such as optical sensors, holographic displays, and security and anticounterfeiting applications.
    This study was supported by a grant from the Samsung Research Funding & Incubation Center for Future Technology, funded by Samsung Electronics.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Deep learning model classifies brain tumors with single MRI scan

    A team of researchers at Washington University School of Medicine has developed a deep learning model that is capable of classifying a brain tumor as one of six common types using a single 3D MRI scan, according to a study published in Radiology: Artificial Intelligence.
    “This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume,” said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology’s Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri.
    The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each was documented through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope.
    According to Chakrabarty, machine and deep learning approaches using MRI data could potentially automate the detection and classification of brain tumors.
    “Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination,” he said.
    To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources. In addition to the institution’s own internal data, the team obtained pre-operative, post-contrast T1-weighted MRI scans from the Brain Tumor Image Segmentation, The Cancer Genome Atlas Glioblastoma Multiforme, and The Cancer Genome Atlas Low Grade Glioma datasets.
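    For orientation, a 3D convolutional classifier of the general kind described here can be sketched in a few lines of PyTorch. This is an illustrative toy architecture with an assumed input volume size and layer widths, outputting seven classes for the six tumor types plus absence of tumor; it is not the network, preprocessing, or training pipeline used in the study.

        import torch
        from torch import nn

        class Tumor3DCNN(nn.Module):
            """Small 3D CNN over a single post-contrast T1-weighted MRI volume."""

            def __init__(self, n_classes=7):            # six tumor types + no tumor
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),            # global pooling fixes the head size
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, volume):                  # volume: (batch, 1, depth, height, width)
                x = self.features(volume).flatten(1)
                return self.classifier(x)               # unnormalized class scores (logits)

        model = Tumor3DCNN()
        dummy_scan = torch.randn(1, 1, 64, 64, 64)      # assumed resampled volume size
        logits = model(dummy_scan)
        print("predicted class index:", logits.argmax(dim=1).item())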