More stories

  • Constraining quantum measurement

    The quantum world and our everyday world are very different places. In a publication that appeared as the “Editor’s Suggestion” in Physical Review A this week, UvA physicists Jasper van Wezel and Lotte Mertens and their colleagues investigate how the act of measuring a quantum particle transforms it into an everyday object.
    Quantum mechanics is the theory that describes the tiniest objects in the world around us, ranging from the constituents of single atoms to small dust particles. This microscopic realm behaves remarkably differently from our everyday experience — despite the fact that all objects in our human-scale world are made of quantum particles themselves. This leads to intriguing physical questions: why are the quantum world and the macroscopic world so different, where is the dividing line between them, and what exactly happens there?
    Measurement problem
    One particular area where the distinction between quantum and classical becomes essential is when we use an everyday object to measure a quantum system. The division between the quantum and everyday worlds then amounts to asking how ‘big’ the measurement device should be to be able to show quantum properties using a display in our everyday world. Finding out the details of measurement, such as how many quantum particles it takes to create a measurement device, is called the quantum measurement problem.
    As experiments probing the world of quantum mechanics become ever more advanced and involve ever larger quantum objects, the invisible line where pure quantum behaviour crosses over into classical measurement outcomes is rapidly being approached. In the new article, Van Wezel, Mertens and their colleagues take stock of current models that attempt to solve the measurement problem, particularly those that do so by proposing slight modifications to the one equation that rules all quantum behaviour: Schrödinger’s equation.
    Born’s rule
    The researchers show that such amendments can in principle lead to consistent proposals for solving the measurement problem. However, it turns out to be difficult to create models that satisfy Born’s rule, which tells us how to use Schrödinger’s equation for predicting measurement outcomes. They demonstrate that only models with sufficient mathematical complexity (in technical terms: models that are non-linear and non-unitary) can give rise to Born’s rule, and therefore have a chance of solving the measurement problem and teaching us about the elusive crossover between quantum physics and the everyday world.
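    For readers who want the relevant equations spelled out, here is a minimal sketch in standard textbook notation (not taken from the paper itself): the ordinary Schrödinger equation is linear and unitary, Born’s rule fixes the outcome probabilities, and collapse-type proposals add a small extra term that breaks both properties.

        % Standard Schroedinger evolution (linear and unitary):
        i\hbar \, \partial_t |\psi\rangle = \hat{H} |\psi\rangle

        % Born's rule: probability of obtaining outcome a with eigenstate |a\rangle:
        P(a) = |\langle a | \psi \rangle|^2

        % Generic modified evolution of the kind such proposals study (illustrative only):
        % \hat{G} depends on |\psi\rangle itself (non-linear) and is not Hermitian (non-unitary),
        % with \epsilon a small parameter.
        i\hbar \, \partial_t |\psi\rangle = \hat{H} |\psi\rangle + \epsilon \, \hat{G}[\psi] \, |\psi\rangle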
    Story Source:
    Materials provided by Universiteit van Amsterdam. Note: Content may be edited for style and length.

  • Nonverbal social interactions – even with unfriendly avatars – boost cooperation

    Scientists used animated humanoid avatars to study how nonverbal cues influence people’s behavior. Reported in the Journal of Cognitive Neuroscience, the research offers insight into the brain mechanisms that drive social and economic decision-making.
    The study revealed that participants were more willing to cooperate with animated avatars than with static figures representing their negotiation partners. It also found — somewhat surprisingly — that people were more willing to accept unfair offers from unfriendly avatars than from friendly ones.
    “This work is an extension of previous studies exploring how nonverbal cues influence people’s perceptions of one another,” said Matthew Moore, who led the research at the University of Illinois Urbana-Champaign with psychology professors Florin Dolcos and Sanda Dolcos. The new research was conducted at the U. of I.’s Beckman Institute for Advanced Science and Technology, where Moore was a postdoctoral fellow.
    “Nonverbal interactions represent a huge part of human communication,” Sanda Dolcos said. “We might not be aware of this, but much of the information that we take in is through these nonverbal channels.”
    Previous studies often used still photos or other static representations of people engaged in social interactions to study how people form opinions or make decisions, Florin Dolcos said.
    “By animating the avatars, we’re capturing interactions that are much closer to what happens in real-life situations,” he said.

  • Quantum computers getting connected

    A promising route towards larger quantum computers is to orchestrate multiple task-optimised smaller systems. Photonic interference emerges as a powerful method for dynamically connecting and entangling any two such systems, thanks to its compatibility with on-chip devices and long-distance propagation in quantum networks.
    One of the main obstacles to the commercialization of quantum photonics remains the nanoscale fabrication and integration of scalable quantum systems, owing to their notorious sensitivity to the smallest disturbances in their immediate environment. This has made it an extraordinary challenge to develop systems that can be used for quantum computing while simultaneously offering an efficient optical interface.
    A recent result published in Nature Materials shows how this integration obstacle can be overcome. The work is based on a multinational collaboration between researchers from the Universities of Stuttgart (Physics 3), California, Davis, Linköping and Kyoto, as well as the Fraunhofer Institute in Erlangen, the Helmholtz Centre in Dresden and the Leibniz Institute in Leipzig.
    The researchers followed a two-step approach. First, their quantum system of choice is the so-called silicon vacancy centre in silicon carbide, which is known to possess particularly robust spin-optical properties. Second, they fabricated nanophotonic waveguides around these colour centres using gentle processing methods that keep the host material essentially free of damage.
    “With our approach, we could demonstrate that the excellent spin-optical properties of our colour centres are maintained after nanophotonic integration,” says Florian Kaiser, Assistant Professor at the University of Stuttgart and supervisor of this project. “Thanks to the robustness of our quantum devices, we gained enough headroom to perform quantum gates on multiple nuclear spin qubits. As these spins show very long coherence times, they are excellent for implementing small quantum computers.”
    “In this project, we explored the peculiar triangular shape of photonic devices. While this geometry is of commercial appeal because it provides the versatility needed for scalable production, little has been known about its utility for high-performing quantum hardware. Our studies reveal that light emitted by the colour centre, which carries quantum information across the chip, can be efficiently propagated through a single optical mode. This is a key conclusion for the viability of integrating colour centres with other photonic devices, such as nanocavities, optical fibres and single-photon detectors, needed to realize the full functionality of quantum networking and computing,” says Marina Radulaski, Assistant Professor at the University of California, Davis.
    What makes the silicon carbide platform particularly interesting are its CMOS compatibility and its heavy use as a high-power semiconductor in electric mobility. The researchers now want to leverage these aspects for the scalable production of spin-photonic chips. Additionally, they want to implement semiconductor circuitry to electrically initialise and read out the quantum states of their spin qubits. “Maximising electrical control — instead of traditional optical control via lasers — is an important step towards system simplification. The combination of efficient nanophotonics with electrical control will allow us to reliably integrate more quantum systems on one chip, which will result in significant performance gains,” adds Florian Kaiser. “In this sense, we are only at the dawn of quantum technologies with colour centres in silicon carbide. Our successful nanophotonic integration is not only an exciting enabler for distributed quantum computing, but it can also boost the performance of compact quantum sensors.”
    Story Source:
    Materials provided by Universitaet Stuttgart. Note: Content may be edited for style and length.

  • Artificial intelligence that understands object relationships

    When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor.
    Many deep learning models struggle to see the world this way because they don’t understand the entangled relationships between individual objects. Without knowledge of these relationships, a robot designed to help someone in a kitchen would have difficulty following a command like “pick up the spatula that is to the left of the stove and place it on top of the cutting board.”
    In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene. Their model represents individual relationships one at a time, then combines these representations to describe the overall scene. This enables the model to generate more accurate images from text descriptions, even when the scene includes several objects that are arranged in different relationships with one another.
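    To make the compositional idea concrete, here is a toy sketch in Python (hypothetical scoring functions, not the MIT team’s actual model): each relationship in a description is scored separately, and the per-relationship scores are combined so that a candidate scene is preferred only when it satisfies all relationships at once.

        # Toy illustration of composing per-relationship scores (hypothetical, not the paper's model).
        # A "scene" is a dict mapping object names to (x, y) positions.

        def left_of(scene, a, b):
            """Score how well object a sits to the left of object b (higher is better)."""
            return scene[b][0] - scene[a][0]

        def in_front_of(scene, a, b):
            """Score how well object a sits in front of object b (larger y = closer to viewer)."""
            return scene[a][1] - scene[b][1]

        def scene_score(scene, relations):
            """Combine independent per-relationship scores into one score for the whole scene."""
            return sum(rel(scene, a, b) for rel, a, b in relations)

        # "A laptop to the left of a phone, which is in front of a monitor."
        relations = [(left_of, "laptop", "phone"), (in_front_of, "phone", "monitor")]
        candidate = {"laptop": (0.0, 0.5), "phone": (1.0, 0.8), "monitor": (1.0, 0.1)}
        print(scene_score(candidate, relations))  # higher scores mean the layout matches the description better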
    This work could be applied in situations where industrial robots must perform intricate, multistep manipulation tasks, like stacking items in a warehouse or assembling appliances. It also moves the field one step closer to enabling machines that can learn from and interact with their environments more like humans do.
    “When I look at a table, I can’t say that there is an object at XYZ location. Our minds don’t work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments,” says Yilun Du, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.
    Du wrote the paper with co-lead authors Shuang Li, a CSAIL PhD student, and Nan Liu, a graduate student at the University of Illinois at Urbana-Champaign; as well as Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; and senior author Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems in December.

  • Team builds first living robots that can reproduce

    To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.
    Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction — and applied their discovery to create the first-ever, self-replicating living robots.
    The same team that built the first living robots (“Xenobots,” assembled from frog cells — reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth” — that, a few days later, become new Xenobots that look and move just like themselves.
    And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.
    “With the right design — they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.
    The results of the new research were published November 29, 2021, in the Proceedings of the National Academy of Sciences.
    Into the Unknown

  • 'Transformational' approach to machine learning could accelerate search for new disease treatments

    Researchers have developed a new approach to machine learning that ‘learns how to learn’ and outperforms current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.
    The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves its performance as it learns.
    TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.
    Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.
    “It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works — it deals with problems one at a time.”
    However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.
    “To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal — say a kitten — TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”
    The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.
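    A minimal sketch of the general idea, using scikit-learn on synthetic data (illustrative only; the published TML pipeline is more elaborate): models trained on related tasks make predictions for each example, and those predictions become the feature vector used to learn the new task.

        # Sketch of transformational-ML-style learning: predictions from models trained
        # on related problems are reused as features for a new problem (illustrative only).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))            # "intrinsic" molecular descriptors
        related_targets = [X @ rng.normal(size=10) + 0.1 * rng.normal(size=200) for _ in range(5)]
        new_target = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)   # the new problem

        # 1) Train one model per previously seen (related) problem on intrinsic features.
        related_models = [RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
                          for y in related_targets]

        # 2) Represent each example by what the related models predict for it.
        transformed_features = np.column_stack([m.predict(X) for m in related_models])

        # 3) Train the model for the new problem on these transformed features.
        new_model = RandomForestRegressor(n_estimators=50, random_state=0)
        new_model.fit(transformed_features[:150], new_target[:150])
        print(new_model.score(transformed_features[150:], new_target[150:]))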
    “I was surprised how well it works — better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are — and without the best science, we won’t get the best results.”
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • New discovery opens the way for brain-like computers

    Researchers have long strived to develop computers that work as energy-efficiently as our brains. A study led by researchers at the University of Gothenburg has now succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technologies, in everything from mobile phones to self-driving cars.
    In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.
    “Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.
    Important breakthrough
    Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step forward in achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.
    Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory, making them comparable to memory cells. Integrating the two is the researchers’ major advance.
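    As a purely software analogy (not a model of the spintronic hardware in the study), the sketch below shows the general concept of computing with an oscillator network whose coupling weights act as built-in memory: the couplings store a pattern, and the oscillators "compute" by synchronising into it.

        # Software analogy only: coupled phase oscillators compute by synchronising,
        # while the coupling weights act as memory (Hopfield-style couplings).
        import numpy as np

        rng = np.random.default_rng(1)
        stored_pattern = np.array([1, -1, 1, 1, -1], dtype=float)     # the "remembered" state
        weights = np.outer(stored_pattern, stored_pattern)            # memory lives in the couplings
        np.fill_diagonal(weights, 0.0)

        phases = rng.uniform(0, 2 * np.pi, size=5)                    # oscillators start out of sync
        dt, coupling = 0.05, 1.0
        for _ in range(400):                                          # Kuramoto-style relaxation
            diffs = np.sin(phases[None, :] - phases[:, None])         # diffs[i, j] = sin(theta_j - theta_i)
            phases += dt * coupling * (weights * diffs).sum(axis=1)

        recovered = np.sign(np.cos(phases - phases[0])) * stored_pattern[0]
        print(recovered)   # relative phases settle into the stored pattern (up to a global sign)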

  • Development of an artificial vision device capable of mimicking human optical illusions

    NIMS has developed an ionic artificial vision device capable of increasing the edge contrast between the darker and lighter areas of an image in a manner similar to that of human vision. This first-ever synthetic mimicry of human optical illusions was achieved using ionic migration and interaction within solids. It may be possible to use the device to develop compact, energy-efficient visual sensing and image processing hardware systems capable of processing analog signals.
    Numerous developers of artificial intelligence (AI) systems have recently shown a great deal of interest in research on sensors and analog information processing systems inspired by human sensory mechanisms. Most AI systems currently under investigation require sophisticated software and complex circuit configurations, including custom-designed processing modules equipped with arithmetic circuits and memory. These systems have the disadvantage, however, of being large and consuming a great deal of power.
    The NIMS research team recently developed an ionic artificial vision device composed of an array of mixed conductor channels placed on a solid electrolyte at regular intervals. This device simulates the way in which human retinal neurons (i.e., photoreceptors, horizontal cells and bipolar cells) process visual signals by responding to input voltage pulses (equivalent to electrical signals from photoreceptors). This causes ions within the solid electrolyte (equivalent to a horizontal cell) to migrate across the mixed conductor channels, which then changes the output channel current (equivalent to a bipolar cell response). By employing such steps, the device, independent of software, was able to process input image signals and produce an output image with increased edge contrast between darker and lighter areas in a manner similar to the way in which the human visual system can increase edge contrast between different colors and shapes by means of visual lateral inhibition.
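    The edge-enhancement effect described above can be illustrated in software (this is only an illustration of lateral inhibition, not a model of the ionic device itself): subtracting a neighbourhood average from each pixel produces the characteristic overshoot and undershoot at edges, the same Mach-band-like exaggeration the device reproduces in hardware.

        # Software illustration of lateral inhibition (not a model of the NIMS device):
        # each pixel is inhibited by the mean of its neighbours, exaggerating edges.
        import numpy as np

        def lateral_inhibition(image, strength=0.8):
            """Centre-surround filtering: pixel value minus a fraction of its neighbourhood mean."""
            padded = np.pad(image, 1, mode="edge")
            surround = sum(np.roll(np.roll(padded, dy, 0), dx, 1)[1:-1, 1:-1]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)) / 8.0
            return image + strength * (image - surround)

        # A step edge between a dark and a light region.
        row = np.array([0.2] * 8 + [0.8] * 8)
        image = np.tile(row, (4, 1))
        enhanced = lateral_inhibition(image)
        print(enhanced[0].round(2))   # overshoot/undershoot appears right at the edge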
    The human eye produces various optical illusions associated with tilt angle, size, color and movement, in addition to darkness/lightness, and this process is believed to play a crucial role in the visual identification of different objects. The ionic artificial vision device described here may potentially be used to reproduce these other types of optical illusions. The research team involved hopes to develop visual sensing systems capable of performing human retinal functions by integrating the subject device with other components, including photoreceptor circuits.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.