More stories

  • Quantum computers getting connected

    A promising route towards larger quantum computers is to orchestrate multiple task-optimised smaller systems. To dynamically connect and entangle any two systems, photonic interference emerges as a powerful method, due to its compatibility with on-chip devices and long-distance propagation in quantum networks.
    One of the main obstacles to the commercialization of quantum photonics remains the nanoscale fabrication and integration of scalable quantum systems, owing to their notorious sensitivity to the smallest disturbances in their immediate environment. This has made it an extraordinary challenge to develop systems that can be used for quantum computing while simultaneously offering an efficient optical interface.
    A recent result published in Nature Materials shows how the integration obstacle can be overcome. The work is based on a multinational collaboration between researchers from the University of Stuttgart (Physics 3), the University of California, Davis, and the Universities of Linköping and Kyoto, as well as the Fraunhofer Institute in Erlangen, the Helmholtz Centre in Dresden and the Leibniz Institute in Leipzig.
    The researchers followed a two-step approach. First, they chose as their quantum system the so-called silicon vacancy centre in silicon carbide, which is known to possess particularly robust spin-optical properties. Second, they fabricated nanophotonic waveguides around these colour centres using gentle processing methods that keep the host material essentially free of damage.
    “With our approach, we could demonstrate that the excellent spin-optical properties of our colour centres are maintained after nanophotonic integration,” says Florian Kaiser, Assistant Professor at the University of Stuttgart and the supervisor of this project. “Thanks to the robustness of our quantum devices, we gained enough headroom to perform quantum gates on multiple nuclear spin qubits. As these spins show very long coherence times, they are excellent for implementing small quantum computers.”
    “In this project, we explored the peculiar triangular shape of photonic devices. While this geometry is commercially appealing because it provides the versatility needed for scalable production, little has been known about its utility for high-performing quantum hardware. Our studies reveal that light emitted by the colour centre, which carries quantum information across the chip, can be efficiently propagated through a single optical mode. This is a key conclusion for the viability of integrating colour centres with other photonic devices, such as nanocavities, optical fibres and single-photon detectors, needed to realize the full functionality of quantum networking and computing,” says Marina Radulaski, Assistant Professor at the University of California, Davis.
    What makes the silicon carbide platform particularly interesting are its CMOS compatibility and its widespread use as a high-power semiconductor in electric mobility. The researchers now want to build on these aspects to enable the scalable production of spin-photonic chips. Additionally, they want to implement semiconductor circuitry to electrically initialise and read out the quantum states of their spin qubits. “Maximising electrical control — instead of traditional optical control via lasers — is an important step towards system simplification. The combination of efficient nanophotonics with electrical control will allow us to reliably integrate more quantum systems on one chip, which will result in significant performance gains,” adds Florian Kaiser. “In this sense, we are only at the dawn of quantum technologies with colour centres in silicon carbide. Our successful nanophotonic integration is not only an exciting enabler for distributed quantum computing, but it can also boost the performance of compact quantum sensors.”
    Story Source:
    Materials provided by Universitaet Stuttgart. Note: Content may be edited for style and length.

  • Artificial intelligence that understands object relationships

    When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor.
    Many deep learning models struggle to see the world this way because they don’t understand the entangled relationships between individual objects. Without knowledge of these relationships, a robot designed to help someone in a kitchen would have difficulty following a command like “pick up the spatula that is to the left of the stove and place it on top of the cutting board.”
    In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene. Their model represents individual relationships one at a time, then combines these representations to describe the overall scene. This enables the model to generate more accurate images from text descriptions, even when the scene includes several objects that are arranged in different relationships with one another.
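    As a loose illustration of this compositional idea (a hypothetical toy sketch, not the researchers' actual model), each relationship can be given its own scoring function, and a candidate scene is evaluated by combining the per-relationship scores, so relations specified separately can be freely recombined:

    ```python
    # Toy sketch (hypothetical): each relation is an independent scoring function;
    # a lower score means the relation is better satisfied. A whole scene
    # description is scored by summing per-relation scores, so relations defined
    # separately can be composed at will.

    def left_of(scene, a, b):
        # low score when object a lies to the left of object b
        return max(0.0, scene[a][0] - scene[b][0])

    def in_front_of(scene, a, b):
        # low score when object a is closer to the viewer than object b
        return max(0.0, scene[a][1] - scene[b][1])

    def scene_score(scene, relations):
        # compose individually defined relations into one score for the scene
        return sum(rel(scene, a, b) for rel, a, b in relations)

    # toy scene: object name -> (x position, depth from the viewer)
    scene = {"laptop": (0.0, 1.0), "phone": (2.0, 1.0), "monitor": (2.0, 3.0)}
    relations = [(left_of, "laptop", "phone"), (in_front_of, "phone", "monitor")]
    print(scene_score(scene, relations))  # 0.0 -> every stated relation holds
    ```

    The actual model learns its relationship representations from data; the sketch only mirrors the structure described above, in which relations are handled one at a time and then combined.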
    This work could be applied in situations where industrial robots must perform intricate, multistep manipulation tasks, like stacking items in a warehouse or assembling appliances. It also moves the field one step closer to enabling machines that can learn from and interact with their environments more like humans do.
    “When I look at a table, I can’t say that there is an object at XYZ location. Our minds don’t work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments,” says Yilun Du, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.
    Du wrote the paper with co-lead authors Shuang Li, a CSAIL PhD student, and Nan Liu, a graduate student at the University of Illinois at Urbana-Champaign; as well as Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; and senior author Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems in December.

  • Team builds first living robots that can reproduce

    To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.
    Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction — and applied their discovery to create the first-ever, self-replicating living robots.
    The same team that built the first living robots (“Xenobots,” assembled from frog cells and reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth”; a few days later, these become new Xenobots that look and move just like themselves.
    And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.
    “With the right design — they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.
    The results of the new research were published November 29, 2021, in the Proceedings of the National Academy of Sciences.

  • 'Transformational' approach to machine learning could accelerate search for new disease treatments

    Researchers have developed a new approach to machine learning that ‘learns how to learn’ and outperforms current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.
    The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves performance while it learns.
    TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.
    Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.
    “It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works — it deals with problems one at a time.”
    However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.
    “To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal — say a kitten — TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”
    The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.
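    As a rough sketch of this idea (an illustrative workflow using synthetic data and standard scikit-learn components, not the authors' published pipeline), a new problem can be represented by what models trained on earlier problems predict for each example:

    ```python
    # Hedged sketch of the TML idea: describe each new example by the predictions
    # of models trained on earlier, related problems, then learn on top of that
    # representation. Data, task count and model choices here are invented.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_features = 16

    # Base models: one per previously solved problem (e.g. earlier assays),
    # each trained on intrinsic features of its own examples.
    base_models = []
    for _ in range(5):
        X_old = rng.normal(size=(200, n_features))
        y_old = X_old @ rng.normal(size=n_features) + rng.normal(scale=0.1, size=200)
        base_models.append(
            RandomForestRegressor(n_estimators=50, random_state=0).fit(X_old, y_old)
        )

    def tml_features(X):
        # TML-style representation: what do the earlier models say about each example?
        return np.column_stack([m.predict(X) for m in base_models])

    # New problem with only a small labelled set.
    X_new = rng.normal(size=(40, n_features))
    y_new = X_new @ rng.normal(size=n_features)

    intrinsic_model = Ridge().fit(X_new, y_new)           # one-problem-at-a-time ML
    tml_model = Ridge().fit(tml_features(X_new), y_new)   # learns from other models' outputs

    X_test = rng.normal(size=(10, n_features))
    print(intrinsic_model.predict(X_test)[:3])
    print(tml_model.predict(tml_features(X_test))[:3])
    ```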
    “I was surprised how well it works — better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are — and without the best science, we won’t get the best results.”
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • New discovery opens the way for brain-like computers

    Scientists have long strived to develop computers that work as energy-efficiently as our brains. A study, led by researchers at the University of Gothenburg, has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technology in everything from mobile phones to self-driving cars.
    In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.
    “Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy-efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.
    Important breakthrough
    Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step toward achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.
    Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory. This makes them comparable to memory cells. Integrating the two is the researchers' major advance.
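    A loose software analogy (purely conceptual, and not the spintronic device physics reported in the study) is a small network of coupled phase oscillators in which programmable coupling strengths play the role of the memory: strongly coupled oscillators lock together, and their degree of synchronization can be read out as the result of a calculation.

    ```python
    # Conceptual sketch only: a Kuramoto-style oscillator network whose coupling
    # matrix acts as programmable, persistent "memory". Oscillators that are
    # strongly coupled synchronize; the synchronization level is the output.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    natural_freq = rng.normal(1.0, 0.05, size=n)     # each oscillator's own frequency
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n)

    # Programmable coupling weights (the memory): oscillators 0-2 are linked, 3 is not.
    coupling = np.array([[0, 2, 2, 0],
                         [2, 0, 2, 0],
                         [2, 2, 0, 0],
                         [0, 0, 0, 0]], dtype=float)

    dt = 0.01
    for _ in range(5000):
        # each phase is pulled toward the phases of its coupled neighbours
        diff = phase[None, :] - phase[:, None]
        phase += dt * (natural_freq + (coupling * np.sin(diff)).sum(axis=1) / n)

    # an order parameter near 1 means the coupled group has locked together
    order = abs(np.exp(1j * phase[:3]).mean())
    print(f"synchronization of the coupled group: {order:.2f}")
    ```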

  • Development of an artificial vision device capable of mimicking human optical illusions

    NIMS has developed an ionic artificial vision device capable of increasing the edge contrast between the darker and lighter areas of an image in a manner similar to that of human vision. This first-ever synthetic mimicry of human optical illusions was achieved using ionic migration and interaction within solids. It may be possible to use the device to develop compact, energy-efficient visual sensing and image processing hardware systems capable of processing analog signals.
    Numerous artificial intelligence (AI) systems developers have recently shown a great deal of interest in research on various sensors and analog information processing systems inspired by human sensory mechanisms. Most AI systems on which research is being conducted require sophisticated software/programs and complex circuit configurations, including custom-designed processing modules equipped with arithmetic circuits and memory. These systems have disadvantages, however, in that they are large and consume a great deal of power.
    The NIMS research team recently developed an ionic artificial vision device composed of an array of mixed conductor channels placed on a solid electrolyte at regular intervals. This device simulates the way in which human retinal neurons (i.e., photoreceptors, horizontal cells and bipolar cells) process visual signals by responding to input voltage pulses (equivalent to electrical signals from photoreceptors). This causes ions within the solid electrolyte (equivalent to a horizontal cell) to migrate across the mixed conductor channels, which then changes the output channel current (equivalent to a bipolar cell response). By employing such steps, the device, independent of software, was able to process input image signals and produce an output image with increased edge contrast between darker and lighter areas in a manner similar to the way in which the human visual system can increase edge contrast between different colors and shapes by means of visual lateral inhibition.
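    The lateral-inhibition step can be mimicked in a few lines of software (an illustrative analogue only, not a model of the ionic device itself): subtracting a fraction of each pixel's neighbours from that pixel exaggerates the contrast on both sides of a dark-to-light edge, which is the same effect the device produces in hardware.

    ```python
    # Software analogue of lateral inhibition (illustrative only): each output
    # value is the input minus a fraction of its two neighbours, which enhances
    # edge contrast and produces a Mach-band-like illusion.
    import numpy as np

    def lateral_inhibition(signal, inhibition=0.4):
        padded = np.pad(signal, 1, mode="edge")
        neighbours = padded[:-2] + padded[2:]   # left + right neighbour of each pixel
        return signal - inhibition * 0.5 * neighbours

    # 1-D "image" row: a dark region followed by a bright region
    image_row = np.array([10, 10, 10, 10, 50, 50, 50, 50], dtype=float)
    print(lateral_inhibition(image_row))
    # [ 6.  6.  6. -2. 38. 30. 30. 30.]  -> the pixel just before the edge dips
    # below its plateau and the pixel just after overshoots, so the edge is enhanced
    ```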
    The human eye produces various optical illusions associated with tilt angle, size, color and movement, in addition to darkness/lightness, and this process is believed to play a crucial role in the visual identification of different objects. The ionic artificial vision device described here may potentially be used to reproduce these other types of optical illusions. The research team involved hopes to develop visual sensing systems capable of performing human retinal functions by integrating the subject device with other components, including photoreceptor circuits.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.

  • 'Magic wand' reveals a colorful nano-world

    Scientists have developed new materials for next-generation electronics so tiny that they are not only indistinguishable when closely packed, but they also don’t reflect enough light to show fine details, such as colors, with even the most powerful optical microscopes. Under an optical microscope, carbon nanotubes, for example, look grayish. The inability to distinguish fine details and differences between individual pieces of nanomaterials makes it hard for scientists to study their unique properties and discover ways to perfect them for industrial use.
    In a new report in Nature Communications, researchers from UC Riverside describe a revolutionary imaging technology that compresses lamp light into a nanometer-sized spot. It holds that light at the end of a silver nanowire like a Hogwarts student practicing the “Lumos” spell, and uses it to reveal previously invisible details, including colors.
    The advance, improving color-imaging resolution to an unprecedented 6 nanometer level, will help scientists see nanomaterials in enough detail to make them more useful in electronics and other applications.
    Ming Liu and Ruoxue Yan, associate professors in UC Riverside’s Marlan and Rosemary Bourns College of Engineering, developed this unique tool, building on a superfocusing technique the team had demonstrated previously. In that earlier work, the technique was used to observe the vibration of molecular bonds at 1-nanometer spatial resolution without the need for any focusing lens.
    In the new report, Liu and Yan modified the tool to measure signals spanning the whole visible wavelength range, which can be used to render the color and depict the electronic band structures of the object instead of only molecule vibrations. The tool squeezes the light from a tungsten lamp into a silver nanowire with near-zero scattering or reflection, where light is carried by the oscillation wave of free electrons at the silver surface.
    The condensed light leaves the silver nanowire tip, which has a radius of just 5 nanometers, in a conical path, like the light beam from a flashlight. When the tip passes over an object, its influence on the beam shape and color is detected and recorded.
    “It is like using your thumb to control the water spray from a hose,” Liu said. “You know how to get the desired spraying pattern by changing the thumb position, and likewise, in the experiment, we read the light pattern to retrieve the details of the object blocking the 5 nm-sized light nozzle.”
    The light is then focused into a spectrometer, where it forms a tiny ring shape. By scanning the probe over an area and recording two spectra for each pixel, the researchers can formulate the absorption and scattering images with colors. The originally grayish carbon nanotubes receive their first color photograph, and an individual carbon nanotube now has the chance to exhibit its unique color.
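    Conceptually, that reconstruction resembles the simplified sketch below (the band definitions and data are invented for illustration, not taken from the instrument's calibration): each pixel's recorded spectrum is reduced to red, green and blue components, which give the pixel its colour.

    ```python
    # Simplified, illustrative sketch: turn a per-pixel spectrum into an RGB colour
    # by averaging the spectrum over three wavelength bands.
    import numpy as np

    wavelengths = np.linspace(400, 700, 61)             # nm, visible range

    def spectrum_to_rgb(spectrum):
        bands = [(600, 700), (500, 600), (400, 500)]    # crude red, green, blue bands
        rgb = [spectrum[(wavelengths >= lo) & (wavelengths < hi)].mean()
               for lo, hi in bands]
        return np.clip(rgb, 0.0, 1.0)

    # toy scan: 2 x 2 pixels, one full spectrum recorded at each position
    scan = np.zeros((2, 2, wavelengths.size))
    scan[0, 0] = np.exp(-((wavelengths - 650) ** 2) / 2e3)   # a "red" pixel
    scan[1, 1] = np.exp(-((wavelengths - 480) ** 2) / 2e3)   # a "blue" pixel

    image = np.array([[spectrum_to_rgb(scan[i, j]) for j in range(2)] for i in range(2)])
    print(image.round(2))   # per-pixel RGB values reconstructed from the spectra
    ```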
    “The atomically smooth sharp-tip silver nanowire and its nearly scatterless optical coupling and focusing is critical for the imaging,” Yan said. “Otherwise there would be intense stray light in the background that ruins the whole effort.”
    The researchers expect that the new technology can be an important tool to help the semiconductor industry make uniform nanomaterials with consistent properties for use in electronic devices. The new full-color nano-imaging technique could also be used to improve understanding of catalysis, quantum optics, and nanoelectronics.
    Liu and Yan were joined in the research by Xuezhi Ma, a postdoctoral scholar at Temple University who worked on the project as part of his doctoral research at UC Riverside. Researchers also included UCR students Qiushi Liu, Ning Yu, Da Xu, Sanggon Kim, Zebin Liu, Kaili Jiang, and professor Bryan Wong. The paper is titled “6 nm super-resolution optical transmission and scattering spectroscopic imaging of carbon nanotubes using a nanometer-scale white light source.”
    Story Source:
    Materials provided by University of California – Riverside. Original written by Holly Ober. Note: Content may be edited for style and length.

  • How molecular clusters in the nucleus interact with chromosomes

    A cell stores all of its genetic material in its nucleus, in the form of chromosomes, but that’s not all that’s tucked away in there. The nucleus is also home to small bodies called nucleoli — clusters of proteins and RNA that help build ribosomes.
    Using computer simulations, MIT chemists have now discovered how these bodies interact with chromosomes in the nucleus, and how those interactions help the nucleoli exist as stable droplets within the nucleus.
    Their findings also suggest that chromatin-nuclear body interactions lead the genome to take on a gel-like structure, which helps to promote stable interactions between the genome and transcription machineries. These interactions help control gene expression.
    “This model has inspired us to think that the genome may have gel-like features that could help the system encode important contacts and help further translate those contacts into functional outputs,” says Bin Zhang, the Pfizer-Laubach Career Development Associate Professor of Chemistry at MIT, an associate member of the Broad Institute of Harvard and MIT, and the senior author of the study.
    MIT graduate student Yifeng Qi is the lead author of the paper, which appears today in Nature Communications.
    Modeling droplets
    Much of Zhang’s research focuses on modeling the three-dimensional structure of the genome and analyzing how that structure influences gene regulation.