More stories

  •

    Breakthrough lays groundwork for future quantum networks

    New Army-funded research could help lay the groundwork for future quantum communication networks and large-scale quantum computers.
    Researchers sent entangled qubit states through a communication cable linking one quantum network node to a second node.
    Scientists at the Pritzker School of Molecular Engineering at the University of Chicago, funded and managed by the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory’s Center for Distributed Quantum Information, also amplified an entangled state via the same cable: first they used the cable to entangle two qubits in each of two nodes, then entangled these qubits further with other qubits in their nodes. The research was published in a peer-reviewed journal on Feb. 24, 2021.
    “The entanglement distribution results the team achieved brought together years of their research related to approaches for transferring quantum states and related to advanced fabrication procedures to realize the experiments,” said Dr. Sara Gamble, program manager at the Army Research Office, an element of the Army’s corporate research laboratory, and co-manager of the CDQI, which funded the work. “This is an exciting achievement and one that paves the way for increasingly complex experiments with additional quantum nodes that we’ll need for the large-scale quantum networks and computers of ultimate interest to the Army.”
    Qubits, or quantum bits, are the basic units of quantum information. By exploiting their quantum properties, like superposition, and their ability to be entangled together, scientists and engineers are creating next-generation quantum computers that will be able to solve previously unsolvable problems.
    The research team uses superconducting qubits, tiny cryogenic circuits that can be manipulated electrically.
    “Developing methods that allow us to transfer entangled states will be essential to scaling quantum computing,” said Prof. Andrew Cleland, the John A. MacLean senior professor of Molecular Engineering Innovation and Enterprise at University of Chicago, who led the research.
    Entanglement is a correlation that can be created between quantum entities such as qubits. When two qubits are entangled and a measurement is made on one, it will affect the outcome of a measurement made on the other, even if that second qubit is physically far away.
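    This correlation can be sketched numerically. The following toy simulation (an idealized illustration, not the team's superconducting setup) samples joint measurements of a two-qubit Bell state and shows that each qubit alone looks random while the pair always agrees:

```python
import numpy as np

rng = np.random.default_rng(0)

# A Bell pair (|00> + |11>)/sqrt(2): each qubit alone reads 0 or 1 at random,
# but the two readouts always agree, however far apart the qubits are.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |00>, |01>, |10>, |11>

probs = np.abs(bell) ** 2                  # Born rule: outcome probabilities
shots = rng.choice(4, size=1000, p=probs)  # simulate 1000 joint measurements
qubit_a, qubit_b = shots // 2, shots % 2   # split each joint outcome per qubit

print((qubit_a == qubit_b).all())          # True: outcomes perfectly correlated
print(qubit_a.mean())                      # ~0.5: each side alone looks random
```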
    To send the entangled states through the communication cable — a one-meter-long superconducting cable — the researchers created an experimental set-up with three superconducting qubits in each of two nodes. They connected one qubit in each node to the cable and then sent quantum states, in the form of microwave photons, through the cable with minimal loss of information. The fragile nature of quantum states makes this process quite challenging.
    The researchers developed a system in which the whole transfer process — node to cable to node — takes only a few tens of nanoseconds (a nanosecond is one billionth of a second). That allowed them to send entangled quantum states with very little information loss.
    The system also allowed them to amplify the entanglement of qubits. The researchers used one qubit in each node and entangled them together by essentially sending a half-photon through the cable. They then extended this entanglement to the other qubits in each node. When they were finished, all six qubits in two nodes were entangled in a single globally entangled state.
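    The sequence described here, one shared pair fanned out to the remaining qubits in each node, can be sketched as a small statevector simulation. This is an idealized illustration with textbook gates, not the team's microwave-photon protocol:

```python
import numpy as np

def cnot(state, control, target, n):
    """Apply a CNOT to an n-qubit statevector (qubit 0 = most significant)."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [control, target], [0, 1]).copy()
    psi[1] = psi[1, ::-1].copy()            # flip the target where control = 1
    return np.moveaxis(psi, [0, 1], [control, target]).reshape(-1)

n = 6
# Put qubit 0 into superposition: (|000000> + |100000>)/sqrt(2)
state = np.zeros(2 ** n)
state[0] = state[2 ** (n - 1)] = 1 / np.sqrt(2)

# Entangle qubit 0 with qubit 3 "through the cable" (one qubit per node),
# then fan the entanglement out to the remaining qubits inside each node.
for control, target in [(0, 3), (0, 1), (0, 2), (3, 4), (3, 5)]:
    state = cnot(state, control, target, n)

# All six qubits now share one globally entangled (GHZ) state:
# (|000000> + |111111>)/sqrt(2)
print(np.round(state[[0, -1]], 3))   # [0.707 0.707]
```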
    “We want to show that superconducting qubits have a viable role going forward,” Cleland said.
    A quantum communication network could potentially take advantage of this advance. The group plans to extend their system to three nodes to build three-way entanglement.
    “The team was able to identify a primary limiting factor in this current experiment related to loss in some of the components,” said Dr. Fredrik Fatemi, branch chief for quantum sciences, DEVCOM ARL, and co-manager of CDQI. “They have a clear path forward for increasingly complex experiments which will enable us to explore new regimes in distributed entanglement.”

  •

    Robots learn faster with quantum technology

    Artificial intelligence is part of our modern life. A crucial question for practical applications is how fast such intelligent machines can learn. An experiment has answered this question, showing that quantum technology enables a speed-up in the learning process. The physicists achieved this result by using a photonic quantum processor, operating on single photons, as a robot.
    Robots solving computer games, recognizing human voices, or helping in finding optimal medical treatments: those are only a few astonishing examples of what the field of artificial intelligence has produced in the past years. The ongoing race for better machines has led to the question of how and with what means improvements can be achieved. In parallel, huge recent progress in quantum technologies has confirmed the power of quantum physics, not only for its often peculiar and puzzling theories, but also for real-life applications. Hence the idea of merging the two fields: on one hand, artificial intelligence with its autonomous machines; on the other hand, quantum physics with its powerful algorithms.
    Over the past few years, many scientists have started to investigate how to bridge these two worlds, and to study in what ways quantum mechanics can prove beneficial for learning robots, or vice versa. Several fascinating results have shown, for example, robots deciding faster on their next move, or the design of new quantum experiments using specific learning techniques. Yet a speed-up in the robot's actual learning time, a key feature in the development of increasingly complex autonomous machines, had not been demonstrated.
    Within an international collaboration led by Philip Walther, a team of experimental physicists from the University of Vienna, together with theoreticians from the University of Innsbruck, the Austrian Academy of Sciences, Leiden University, and the German Aerospace Center, has succeeded in experimentally demonstrating, for the first time, a speed-up in the robot’s actual learning time. The team made use of single photons, the fundamental particles of light, coupled into an integrated photonic quantum processor, which was designed at the Massachusetts Institute of Technology. This processor was used as a robot and for implementing the learning tasks. Here, the robot would learn to route the single photons to a predefined direction. “The experiment could show that the learning time is significantly reduced compared to the case where no quantum physics is used,” says Valeria Saggio, first author of the publication.
    In a nutshell, the experiment can be understood by imagining a robot standing at a crossroad, provided with the task of learning to always take the left turn. The robot learns by obtaining a reward when doing the correct move. Now, if the robot is placed in our usual classical world, then it will try either a left or right turn, and will be rewarded only if the left turn is chosen. In contrast, when the robot exploits quantum technology, the bizarre aspects of quantum physics come into play. The robot can now make use of one of its most famous and peculiar features, the so called superposition principle. This can be intuitively understood by imagining the robot taking the two turns, left and right, at the same time. “This key feature enables the implementation of a quantum search algorithm that reduces the number of trials for learning the correct path. As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” says Hans Briegel, who developed the theoretical ideas on quantum learning agents with his group at the University of Innsbruck.
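    The advantage Briegel describes can be made concrete with a back-of-the-envelope count of trials. The sketch below compares memoryless classical guessing with the roughly (π/4)·√n oracle calls of Grover-style amplitude amplification; it is a toy scaling argument, not a model of the photonic experiment itself:

```python
import math

def classical_trials(n):
    """Expected guesses to hit the one rewarded action out of n,
    guessing uniformly at random without memory (geometric distribution)."""
    return n

def grover_trials(n):
    """Oracle calls needed by Grover-style amplitude amplification:
    about (pi/4) * sqrt(n)."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

# The gap widens quadratically as the number of possible actions grows.
for n in (4, 100, 10_000):
    print(n, classical_trials(n), grover_trials(n))
# n=10_000: 10000 classical guesses vs 79 Grover-style trials
```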
    This experimental demonstration that machine learning can be enhanced by using quantum computing shows promising advantages when combining these two technologies. “We are just at the beginning of understanding the possibilities of quantum artificial intelligence” says Philip Walther, “and thus every new experimental result contributes to the development of this field, which is currently seen as one of the most fertile areas for quantum computing.”

    Story Source:
    Materials provided by University of Vienna. Note: Content may be edited for style and length.

  •

    Classic math conundrum solved: Superb algorithm for finding the shortest route

    One of the most classic algorithmic problems deals with calculating the shortest path between two points. A more complicated variant of the problem is when the route traverses a changing network — whether this be a road network or the internet. For 40 years, an algorithm has been sought to provide an optimal solution to this problem. Now, computer scientist Christian Wulff-Nilsen of the University of Copenhagen and two research colleagues have come up with a recipe.
    When heading somewhere new, most of us leave it to computer algorithms to help us find the best route, whether by using a car’s GPS, or public transport and map apps on our phones. Still, there are times when a proposed route doesn’t quite align with reality. This is because road networks, public transportation networks and other networks aren’t static. The best route can suddenly be the slowest, e.g. because a queue has formed due to roadworks or an accident.
    People probably don’t think about the complicated math behind routing suggestions in these types of situations. The software being used is trying to solve a variant of the classic algorithmic “shortest path” problem: the shortest path in a dynamic network. For 40 years, researchers have been working to find an algorithm that can optimally solve this mathematical conundrum. Now, Christian Wulff-Nilsen of the University of Copenhagen’s Department of Computer Science has succeeded in cracking the nut along with two colleagues.
    “We have developed an algorithm, for which we now have mathematical proof, that it is better than every other algorithm up to now — and the closest thing to optimal that will ever be, even if we look 1000 years into the future,” says Associate Professor Wulff-Nilsen. The results were presented at the FOCS 2020 conference.
    Optimally, in this context, refers to an algorithm that spends as little time and as little computer memory as possible to calculate the best route in a given network. This is not just true of road and transportation networks, but also the internet or any other type of network.
    Networks as graphs
    The researchers represent a network as a so-called dynamic graph. In this context, a graph is an abstract representation of a network consisting of edges (roads, for example) and nodes (intersections, for example). When a graph is dynamic, it means that it can change over time. The new algorithm handles changes consisting of deleted edges — for example, if the equivalent of a stretch of a road suddenly becomes inaccessible due to roadworks.
    “The tremendous advantage of seeing a network as an abstract graph is that it can be used to represent any type of network. It could be the internet, where you want to send data via as short a route as possible, a human brain or the network of friendship relations on Facebook. This makes graph algorithms applicable in a wide variety of contexts,” explains Christian Wulff-Nilsen.
    Traditional algorithms assume that a graph is static, which is rarely true in the real world. When these kinds of algorithms are used in a dynamic network, they need to be rerun every time a small change occurs in the graph — which wastes time.
    More data necessitates better algorithms
    Finding better algorithms is not just useful when travelling. It is necessary in virtually any area where data is produced, as Christian Wulff-Nilsen points out:
    “We are living in a time when volumes of data grow at a tremendous rate and the development of hardware simply can’t keep up. In order to manage all of the data we produce, we need to develop smarter software that requires less running time and memory. That’s why we need smarter algorithms,” he says.
    He hopes that it will be possible to use this algorithm or some of the techniques behind it in practice, but stresses that this is theoretical evidence and first requires experimentation.
    Background
    The research article “Near-Optimal Decremental SSSP in Dense Weighted Digraphs” was presented at the prestigious FOCS 2020 conference.
    The article was written by Christian Wulff-Nilsen of the University of Copenhagen’s Department of Computer Science, former department PhD student Maximilian Probst Gutenberg, and Assistant Professor Aaron Bernstein of Rutgers University.
    The version of the “shortest path” problem that the researchers solved is called “The Decremental Single-Source Shortest Path Problem.” It is essentially about maintaining the shortest paths in a changing dynamic network from one starting point to all other nodes in a graph. The changes to a network consist of edge removals.
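    A naive baseline makes the challenge of this problem concrete: recompute shortest paths from scratch with Dijkstra's algorithm after every edge deletion. The sketch below (an illustration of the problem setup, not the paper's algorithm) does exactly that; the paper's contribution is precisely avoiding this wasted recomputation:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in a weighted digraph {u: {v: w}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry, skip it
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"a": {"b": 1, "c": 5}, "b": {"c": 1}, "c": {}}
print(dijkstra(graph, "a")["c"])   # 2, via a -> b -> c

# Decremental update, naive baseline: delete an edge, recompute from scratch.
del graph["b"]["c"]
print(dijkstra(graph, "a")["c"])   # 5, forced onto the direct edge a -> c
```

Rerunning Dijkstra on every deletion costs far more over many updates than the near-optimal total update time the paper proves is achievable.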
    The paper gives a mathematical proof that the algorithm is essentially the optimal one for dynamic networks. On average, users will be able to change routes according to calculations made in constant time.

  •

    Large computer language models carry environmental, social risks

    Computer engineers at the world’s largest companies and universities are using machines to scan through tomes of written material. The goal? Teach these machines the gift of language. Do that, some even claim, and computers will be able to mimic the human brain.
    But this impressive compute capability comes with real costs, including perpetuating racism and causing significant environmental damage, according to a new paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” The paper is being presented Wednesday, March 10 at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT).
    This is the first exhaustive review of the literature surrounding the risks that come with rapid growth of language-learning technologies, said Emily M. Bender, a University of Washington professor of linguistics and a lead author of the paper along with Timnit Gebru, a well-known AI researcher.
    “The question we’re asking is what are the possible dangers of this approach and the answers that we’re giving involve surveying literature across a broad range of fields and pulling them together,” said Bender, who is the UW Howard and Frances Nostrand Endowed Professor.
    What the researchers surfaced was that there are downsides to the ever-growing computing power put into natural language models. They discuss how the ever-increasing size of training data for language modeling exacerbates social and environmental issues. Alarmingly, such language models perpetuate hegemonic language and can deceive people into thinking they are having a “real” conversation with a person rather than a machine. The increased computational needs of these models further contributes to environmental degradation.
    The authors were motivated to write the paper because of a trend within the field towards ever-larger language models and their growing spheres of influence.
    The paper already has generated widespread attention due, in part, to the fact that two of the paper’s co-authors say they were fired recently from Google for reasons that remain unsettled. Margaret Mitchell and Gebru, the two now-former Google researchers, said they stand by the paper’s scholarship and point to its conclusions as a clarion call to industry to take heed.
    “It’s very clear that putting in the concerns has to happen right now, because it’s already becoming too late,” said Mitchell, a researcher in AI.
    It takes an enormous amount of computing power to fuel these language model programs, Bender said. That takes up energy at tremendous scale, and that, the authors argue, causes environmental degradation. And those costs aren’t borne by the computer engineers, but rather by marginalized people who cannot afford the environmental costs.
    “It’s not just that there’s big energy impacts here, but also that the carbon impacts of that will bring costs first to people who are not benefiting from this technology,” Bender said. “When we do the cost-benefit analysis, it’s important to think of who’s getting the benefit and who’s paying the cost because they’re not the same people.”
    The large scale of this compute power also can restrict access to only the most well-resourced companies and research groups, leaving out smaller developers outside of the U.S., Canada, Europe and China. That’s because it takes huge machines to run the software necessary to make computers mimic human thought and speech.
    Another risk comes from the training data itself, the authors say. Because the computers read language from the Web and from other sources, they can pick up and perpetuate racist, sexist, ableist, extremist and other harmful ideologies.
    “One of the fallacies that people fall into is well, the internet is big, the internet is everything. If I just scrape the whole internet then clearly I’ve incorporated diverse viewpoints,” Bender said. “But when we did a step-by-step review of the literature, it says that’s not the case right now because not everybody’s on the internet, and of the people who are on the internet, not everybody is socially comfortable participating in the same way.”
    And, people can confuse the language models for real human interaction, believing that they’re actually talking with a person or reading something that a person has spoken or written, when, in fact, the language comes from a machine. Thus, the stochastic parrots.
    “It produces this seemingly coherent text, but it has no communicative intent. It has no idea what it’s saying. There’s no there there,” Bender said.

  •

    Robots can use eye contact to draw out reluctant participants in groups

    Eye contact is a key to establishing a connection, and teachers use it often to encourage participation. But can a robot do this too? Can it draw a response simply by making “eye” contact, even with people who are less inclined to speak up? A recent study suggests that it can.
    Researchers at KTH Royal Institute of Technology published results of experiments in which robots led a Swedish word game with individuals whose proficiency in the Nordic language varied. They found that by redirecting its gaze to less proficient players, a robot can elicit involvement from even the most reluctant participants.
    Researchers Sarah Gillet and Ronald Cumbal say the results offer evidence that robots could play a productive role in educational settings.
    Calling on someone by name isn’t always the best way to elicit engagement, Gillet says. “Gaze can by nature influence very dynamically how much people are participating, especially if there is this natural tendency for imbalance — due to the differences in language proficiency,” she says.
    “If someone is not inclined to participate for some reason, we showed that gaze is able to overcome this difference and help everyone to participate.”
    Cumbal says that studies have shown that robots can support group discussion, but this is the first study to examine what happens when a robot uses gaze in a group interaction that isn’t balanced — when it is dominated by one or more individuals.
    The experiment involved pairs of players — one fluent in Swedish and one who is learning Swedish. The players were instructed to give the robot clues in Swedish so that it could guess the correct term. The face of the robot was an animated projection on a specially designed plastic mask.
    While it would be natural for a fluent speaker to dominate such a scenario, Cumbal says, the robot was able to prompt the participation of the less fluent player by redirecting its gaze naturally toward them and silently waiting for them to hazard an attempt.
    “Robot gaze can modify group dynamics — what role people take in a situation,” he says. “Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute.”

    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Note: Content may be edited for style and length.

  •

    An electrically charged glass display smoothly transitions between a spectrum of colors

    Scientists have developed a see-through glass display with a high white light contrast ratio that smoothly transitions between a broad spectrum of colors when electrically charged. The technology, from researchers at Jilin University in Changchun, China, overcomes limitations of existing electrochromic devices by harnessing interactions between metal ions and ligands, opening the door for numerous future applications. The work appears March 10 in the journal Chem.
    “We believe that the method behind this see-through, non-emissive display may accelerate the development of transparent, eye-friendly displays with improved readability for bright working conditions,” says Yu-Mo Zhang, an associate professor of chemistry at Jilin University and an author on the study. “As an inevitable display technology in the near future, non-emissive see-through displays will be ubiquitous and irreplaceable as a part of the Internet of Things, in which physical objects are interconnected through software.”
    With the application of voltage, electrochromic displays offer a platform in which light’s properties can be continuously and reversibly manipulated. These devices have been proposed for use in windows, energy-saving electronic price tags, flashy billboards, rearview mirrors, augmented virtual reality, and even artificial irises. However, current models come with limitations — they tend to have low contrast ratios, especially for white light, poor stability, and limited color variations, all of which have prevented electrochromic displays from reaching their technological potential.
    To overcome these deficiencies, Yuyang Wang and colleagues developed a simple chemical approach in which metal ions induce a wide variety of switchable dyes to take on particular structures, then stabilize them once they have reached the desired configurations. To trigger a color change, an electric field is simply applied to switch the metal ions’ valences, forming new bonds between the metal ions and molecular switches.
    “Different from the traditional electrochromic materials, whose color-changing motifs and redox motifs are located at the same site, this new material is an indirect-redox-color-changing system composed of switchable dyes and multivalent metal ions,” says Zhang.
    To test this approach, the researchers fabricated an electrochromic device by injecting a material containing metal salts, dyes, electrolytes, and solvent into a sandwiched device with two electrodes and adhesive as a spacer. Next, they performed a battery of light spectrum and electrochemical tests, finding that the devices could effectively achieve cyan, magenta, yellow, red, green, black, pink, purple, and gray-black displays, while maintaining a high contrast ratio. The prototype also shifted seamlessly from a colorless, transparent display to black — the most useful color for commercial applications — with high coloration efficiency, low transmittance change voltage, and a white light contrast ratio that would be suitable for real transparent displays.
    “The low cost and simple preparation process of this glass device will also benefit its scalable production and commercial applications,” notes Zhang.
    Next, the researchers plan to optimize the display’s performance so that it may quickly meet the requirements of high-end displays for real-world applications. Additionally, to avoid leakage from its liquid components, they plan to develop improved fabrication technologies that can produce solid or semi-solid electrochromic devices.
    “We are hoping that more and more visionary researchers and engineers cooperate with each other to optimize the electrochromic displays and promote their commercialization,” says Zhang.
    The authors received financial support from the National Natural Science Foundation of China.

    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  •

    Finding quvigints in a quantum treasure map

    Researchers have struck quantum gold — and created a new word — by enlisting machine learning to efficiently navigate a 20-dimensional quantum treasure map.
    Physicist Dr Markus Rambach from the ARC Centre of Excellence for Engineered Quantum Systems (EQUS) at The University of Queensland said the team was able to find unknown quantum states more quickly and accurately, using a technique called self-guided tomography.
    The team also introduced the ‘quvigint’, which is like a qubit (the quantum version of a classical bit that takes on the values ‘0’ or ‘1’) except that it takes on not two, but 20 possible values.
    Dr Rambach said high-dimensional quantum states such as quvigints were ideal for storing and sending large amounts of information securely.
    However, finding unknown states becomes increasingly difficult in higher dimensions, because the same scaling that gives quantum devices their power also limits our ability to describe them.
    He said this problem was akin to navigating a high-dimensional quantum treasure map.
    “We know where we are, and that there’s treasure, but we don’t know which way to go to get to it,” Dr Rambach said.
    “Using standard tomography, this problem would be solved by first determining which directions you need to look in to ensure you cover the whole map, then collecting and storing all the relevant data, and finally processing the data to find the treasure.
    “Instead, using self-guided tomography, we pick two directions at random, try them both, pick the one that gets us closer to the treasure based on clues from the machine learning algorithm, and then repeat this until we reach it.
    “This technique saves a huge amount of time and energy, meaning we can find the treasure — the unknown quvigint — much more quickly and easily.”
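    The search strategy Dr Rambach describes resembles stochastic hill climbing. The sketch below imitates it on a real-valued 20-dimensional unit vector standing in for a quvigint; actual quantum states are complex-valued and their fidelities are estimated from measurements, so this is only an analogy for the self-guided approach:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 20                                     # a quvigint lives in 20 dimensions

def normalize(v):
    return v / np.linalg.norm(v)

def fidelity(a, b):
    return float(abs(a @ b) ** 2)          # overlap between two states

target = normalize(rng.normal(size=d))     # the unknown state: the "treasure"
guess = normalize(rng.normal(size=d))      # start somewhere random on the map
step = 0.5

for _ in range(3000):
    direction = rng.normal(size=d)         # pick a random direction to try
    trials = [normalize(guess + step * direction),
              normalize(guess - step * direction)]
    best = max(trials, key=lambda v: fidelity(v, target))
    if fidelity(best, target) > fidelity(guess, target):
        guess = best                       # the clue says this way is closer
    else:
        step *= 0.98                       # neither way helped: look more finely

print(round(fidelity(guess, target), 3))   # climbs toward 1.0
```

Because only improving moves are kept and the step shrinks when the search stalls, the fidelity rises monotonically, mirroring the robustness to noise described below.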
    To illustrate the technique, the team simulated a quvigint travelling through the atmosphere, as it would when being used to send quantum information between two points on Earth or to a satellite.
    As the quvigint travels, it is modified by atmospheric turbulence.
    Standard tomography is very susceptible to this type of noise, but by using self-guided tomography the team was able to reconstruct the original quvigint with high accuracy.
    Dr Jacq Romero, also at EQUS and UQ, said self-guided tomography was unlike other methods for finding unknown quantum states.
    “Self-guided tomography is efficient, accurate, robust to noise and readily scalable to high dimensions, such as quvigints,” Dr Romero said.
    “Self-guided tomography is a robust tomography method that is agnostic to the physical system, so it can be applied to other systems such as atoms or ions as well.”

    Story Source:
    Materials provided by University of Queensland. Note: Content may be edited for style and length.

  •

    Learning to help the adaptive immune system

    Scientists from the Institute of Industrial Science at The University of Tokyo demonstrated how the adaptive immune system uses a method similar to reinforcement learning to control the immune reaction to repeat infections. This work may lead to significant improvements in vaccine development and interventions to boost the immune system.
    In the human body, the adaptive immune system fights germs by remembering previous infections so it can respond quickly if the same pathogens return. This complex process depends on the cooperation of many cell types. Among these are T helpers, which assist by coordinating the response of other parts of the immune system — called effector cells — such as T killer and B cells. When an invading pathogen is detected, antigen presenting cells bring an identifying piece of the germ to a T cell. Certain T cells become activated and multiply many times in a process known as clonal selection. These clones then marshal a particular set of effector cells to battle the germs. Although the immune system has been extensively studied for decades, the “algorithm” used by T cells to optimize the response to threats is largely unknown.
    Now, scientists at The University of Tokyo have used an artificial intelligence framework to show that the abundances of T helper cells act like the “hidden layer” between inputs and outputs in an artificial neural network commonly used in adaptive learning. In this case, the antigens presented are the inputs, and the responding effector immune cells are the output.
    “Just as a neural network can be trained in machine learning, we believe the immune network can reflect associations between antigen patterns and the effective responses to pathogens,” first author Takuya Kato says.
    The main difference between the adaptive immune system and machine learning is that only the number of T helper cells of each type can be varied, as opposed to the connection weights between nodes in each layer. The team used computer simulations to predict the distribution of T cell abundances after undergoing adaptive learning. These values were found to agree with experimental data based on the genetic sequencing of actual T helper cells.
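    This analogy can be sketched as a tiny network in which the connection weights stay fixed and only the clone abundances, the hidden layer, are trained by a reinforcement-style update. The dimensions, matrices, and learning rate below are illustrative assumptions, not values fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_antigens, n_clones, n_effectors = 3, 4, 2

# Fixed "wiring": which antigens each T-helper clone recognizes and which
# effector cells it drives. In the analogy these weights are set by receptor
# chemistry and do not change; learning only rescales clone abundances.
recognition = rng.random((n_clones, n_antigens))
activation = rng.random((n_effectors, n_clones))

abundance = np.ones(n_clones)              # clone sizes: the "hidden layer"

def respond(antigen):
    helper = recognition @ antigen          # how strongly each clone is stimulated
    return activation @ (abundance * helper)

def loss(antigen, desired):
    e = desired - respond(antigen)
    return float(e @ e)

antigen = np.array([1.0, 0.0, 0.0])        # the presented pathogen pattern
desired = np.array([1.0, 0.0])             # the effective response to this germ
initial_loss = loss(antigen, desired)

# Clonal selection as a crude reinforcement-style update: clones whose
# activity pushes the response toward the rewarded pattern expand, the
# others shrink, and cell counts never go negative.
for _ in range(300):
    helper = recognition @ antigen
    error = desired - respond(antigen)
    abundance += 0.05 * helper * (activation.T @ error)
    abundance = np.clip(abundance, 0.0, None)

print(loss(antigen, desired) < initial_loss)   # True: the response improved
```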
    “Our theoretical framework may completely change our understanding of adaptive immunity as a real learning system,” says co-author Tetsuya Kobayashi. “This research can shed light on other complex adaptive systems, as well as ways to optimize vaccines to evoke a stronger immune response.”

    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.