More stories

  • Bristol team chase down advantage in quantum race

    Quantum researchers at the University of Bristol have dramatically reduced the time to simulate an optical quantum computer, with a speedup of around one billion over previous approaches.
    Quantum computers promise exponential speedups for certain problems, with potential applications in areas from drug discovery to new materials for batteries. But quantum computing is still in its early stages, so these are long-term goals. Nevertheless, there are exciting intermediate milestones on the journey to building a useful device. One currently receiving a lot of attention is “quantum advantage,” where a quantum computer performs a task beyond the capabilities of even the world’s most powerful supercomputers.
    Experimental work from the University of Science and Technology of China (USTC) was the first to claim quantum advantage using photons (particles of light) in a protocol called “Gaussian Boson Sampling” (GBS). Their paper claimed that the experiment, performed in 200 seconds, would take 600 million years to simulate on the world’s largest supercomputer.
    Taking up the challenge, a team at the University of Bristol’s Quantum Engineering Technology Labs (QET Labs), in collaboration with researchers at Imperial College London and Hewlett Packard Enterprise, has reduced this simulation time to just a few months, a speedup factor of around one billion.
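    As a rough sanity check on those figures (taking “a few months” as about a quarter of a year):

    $$\frac{6 \times 10^{8}\ \text{years}}{0.25\ \text{years}} \approx 2.4 \times 10^{9},$$

    consistent with the billion-fold speedup the team reports.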
    Their paper “The boundary for quantum advantage in Gaussian boson sampling,” published today in the journal Science Advances, comes at a time when other experimental approaches claiming quantum advantage, such as from the quantum computing team at Google, are also leading to improved classical algorithms for simulating these experiments.
    Joint first author Jake Bulmer, a PhD student in QET Labs, said: “There is an exciting race going on where, on one side, researchers are trying to build increasingly complex quantum computing systems which they claim cannot be simulated by conventional computers. At the same time, researchers like us are improving simulation methods so we can simulate these supposedly impossible-to-simulate machines!”
    “As researchers develop larger scale experiments, they will look to make claims of quantum advantage relative to classical simulations. Our results will provide an essential point of comparison by which to establish the computational power of future GBS experiments,” said joint first author, Bryn Bell, Marie Curie Research Fellow at Imperial College London, now Senior Quantum Engineer at Oxford Quantum Circuits.
    The team’s methods do not exploit any errors in the experiment, so a natural next step is to combine them with techniques that exploit the imperfections of the real-world experiment. This would further speed up simulation time and build a greater understanding of which areas require improvements.
    “These quantum advantage experiments represent a tremendous achievement of physics and engineering. As a researcher, it is exciting to contribute to the understanding of where the computational complexity of these experiments arises. We were surprised by the magnitude of the improvements we achieved — it is not often that you can claim to find a one-billion-fold improvement!” said Jake Bulmer.
    Anthony Laing, co-Director of QET Labs and an author on the work, said: “As we develop more sophisticated quantum computing technologies, this type of work is vital. It helps us understand the bar we must get over before we can begin to solve problems in clean energy and healthcare that affect us all. The work is a great example of teamwork and collaboration among researchers in the UK Quantum Computing and Simulation Hub and Hewlett Packard Enterprise.”
    Story Source:
    Materials provided by University of Bristol.

  • Robot performs first laparoscopic surgery without human help

    A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human — a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.
    “Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure,” said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering.
    The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.
    Working with collaborators at the Children’s National Hospital in Washington, D.C. and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig’s intestines accurately, but required a large incision to access the intestine and more guidance from humans.
    The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the-art imaging systems that provide more accurate visualizations of the surgical field.
    Soft-tissue surgery is especially hard for robots because of its unpredictability, which forces them to adapt quickly to handle unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.
    “What makes the STAR special is that it is the first robotic system to plan, adapt, and execute a surgical plan in soft tissue with minimal human intervention,” Krieger said.
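    To make the idea of real-time replanning concrete, here is a minimal, purely illustrative sense-plan-act loop. It is not STAR’s actual control software, which the article does not detail; the imaging and planning functions are hypothetical stand-ins.

    ```python
    import random

    def image_tissue():
        """Stand-in for the 3D endoscope: returns a tracked tissue landmark (mm)."""
        return 10.0 + random.uniform(-0.5, 0.5)  # soft tissue drifts unpredictably

    def plan_sutures(landmark):
        """Stand-in planner: place three stitches relative to the landmark."""
        return [landmark + 0.2 * i for i in range(3)]

    landmark = image_tissue()
    plan = plan_sutures(landmark)
    stitches_done = 0
    while stitches_done < len(plan):
        current = image_tissue()              # re-image before every stitch
        if abs(current - landmark) > 0.3:     # tissue deformed beyond tolerance
            landmark = current
            plan = plan_sutures(current)      # adjust the plan in real time
        target = plan[stitches_done]          # next stitch from the current plan
        # (hardware call to drive the suturing tool to `target` would go here)
        stitches_done += 1
    ```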
    A structured-light-based three-dimensional endoscope and a machine learning-based tracking algorithm developed by Kang and his students guide STAR. “We believe an advanced three-dimensional machine vision system is essential in making intelligent surgical robots smarter and safer,” Kang said.
    As the medical field moves towards more laparoscopic approaches for surgeries, it will be important to have an automated robotic system designed for such procedures to assist, Krieger said.
    “Robotic anastomosis is one way to ensure that surgical tasks that require high precision and repeatability can be performed with more accuracy and precision in every patient independent of surgeon skill,” Krieger said. “We hypothesize that this will result in a democratized surgical approach to patient care with more predictable and consistent patient outcomes.”
    The team from Johns Hopkins also included Hamed Saeidi, Justin D. Opfermann, Michael Kam, Shuwen Wei, and Simon Leonard. Michael H. Hsieh, director of Transitional Urology at Children’s National Hospital, also contributed to the research.
    The work was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award numbers 1R01EB020610 and R21EB024707.
    Story Source:
    Materials provided by Johns Hopkins University. Original written by Catherine Graham.

  • Technique improves AI ability to understand 3D space using 2D images

    Researchers have developed a new technique, called MonoCon, that improves the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects, and how those objects relate to each other in space, using two-dimensional (2D) images. For example, the work would help the AI used in autonomous vehicles navigate in relation to other vehicles using the 2D images it receives from an onboard camera.
    “We live in a 3D world, but when you take a picture, it records that world in a 2D image,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University.
    “AI programs receive visual input from cameras. So if we want AI to interact with the world, we need to ensure that it is able to interpret what 2D images can tell it about 3D space. In this research, we are focused on one part of that challenge: how we can get AI to accurately recognize 3D objects — such as people or cars — in 2D images, and place those objects in space.”
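    The core geometric difficulty Wu describes is that a camera’s projection discards depth. A minimal pinhole-camera sketch makes this concrete (illustrative only; the intrinsics below are invented, and this is not the MonoCon algorithm itself):

    ```python
    import numpy as np

    # Hypothetical camera intrinsics: focal lengths fx, fy and principal point cx, cy.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project(points_3d):
        """Project Nx3 camera-frame points (meters) to 2D pixel coordinates."""
        uvw = (K @ points_3d.T).T        # homogeneous image coordinates
        return uvw[:, :2] / uvw[:, 2:3]  # perspective divide: depth is lost here

    # Two hypothetical points at different depths land on the same pixel,
    # which is exactly the ambiguity a monocular 3D detector must resolve.
    near = np.array([[-1.0, -0.5, 5.0]])
    far = np.array([[-2.0, -1.0, 10.0]])
    print(project(near), project(far))  # identical pixel coordinates
    ```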
    While the work may be important for autonomous vehicles, it also has applications for manufacturing and robotics.
    In the context of autonomous vehicles, most existing systems rely on lidar — which uses lasers to measure distance — to navigate 3D space. However, lidar technology is expensive, and that expense limits redundancy: it would be too costly, for example, to put dozens of lidar sensors on a mass-produced driverless car.
    “But if an autonomous vehicle could use visual inputs to navigate through space, you could build in redundancy,” Wu says. “Because cameras are significantly less expensive than lidar, it would be economically feasible to include additional cameras — building redundancy into the system and making it both safer and more robust.”

  • Quantum computing: Vibrating atoms make robust qubits, physicists find

    MIT physicists have discovered a new quantum bit, or “qubit,” in the form of vibrating pairs of atoms known as fermions. They found that when pairs of fermions are chilled and trapped in an optical lattice, the particles can exist simultaneously in two states — a weird quantum phenomenon known as superposition. In this case, the atoms held a superposition of two vibrational states, in which the pair wobbled against each other while also swinging in sync.
    The team was able to maintain this state of superposition among hundreds of vibrating pairs of fermions. In so doing, they achieved a new “quantum register,” or system of qubits, that appears to be robust over relatively long periods of time. The discovery, published today in the journal Nature, demonstrates that such wobbly qubits could be a promising foundation for future quantum computers.
    A qubit represents a basic unit of quantum computing. Where a classical bit in today’s computers carries out a series of logical operations starting from one of two states, 0 or 1, a qubit can exist in a superposition of both states. While in this delicate in-between state, a qubit should be able to simultaneously communicate with many other qubits and process multiple streams of information at a time, to quickly solve problems that would take classical computers years to process.
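    In standard notation (a textbook description, not specific to this paper), a qubit state is a superposition

    $$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

    where a measurement yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$. In the MIT system, the two vibrational states of a fermion pair play the roles of $|0\rangle$ and $|1\rangle$.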
    There are many types of qubits, some of which are engineered and others that exist naturally. Most qubits are notoriously fickle, either unable to maintain their superposition or unwilling to communicate with other qubits.
    By comparison, the MIT team’s new qubit appears to be extremely robust, able to maintain a superposition between two vibrational states, even in the midst of environmental noise, for up to 10 seconds. The team believes the new vibrating qubits could be made to briefly interact, and potentially carry out tens of thousands of operations in the blink of an eye.
    “We estimate it should take only a millisecond for these qubits to interact, so we can hope for 10,000 operations during that coherence time, which could be competitive with other platforms,” says Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT. “So, there is concrete hope toward making these qubits compute.”
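    Those figures are mutually consistent: a 10-second coherence time divided by roughly a millisecond per operation gives $10\,\mathrm{s} / 10^{-3}\,\mathrm{s} = 10^{4}$ operations, the number Zwierlein quotes.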
    Zwierlein is a co-author on the paper, along with lead author Thomas Hartke, Botond Oreg, and Ningyuan Jia, who are all members of MIT’s Research Laboratory of Electronics.

  • Kirigami robotic grippers are delicate enough to lift egg yolks

    Engineering researchers from North Carolina State University have demonstrated a new type of flexible robotic gripper that can lift delicate egg yolks without breaking them and is precise enough to lift a human hair. The work has applications for both soft robotics and biomedical technologies.
    The work draws on the art of kirigami, which involves both cutting and folding two-dimensional (2D) sheets of material to form three-dimensional (3D) shapes. Specifically, the researchers have developed a new technique that involves using kirigami to convert 2D sheets into curved 3D structures by cutting parallel slits across much of the material. The final shape of the 3D structure is determined in large part by the outer boundary of the material. For example, a 2D material that has a circular boundary would form a spherical 3D shape.
    “We have defined and demonstrated a model that allows users to work backwards,” says Yaoye Hong, first author of a paper on the work and a Ph.D. student at NC State. “If users know what sort of curved, 3D structure they need, they can use our approach to determine the boundary shape and pattern of slits they need to use in the 2D material. And additional control of the final structure is made possible by controlling the direction in which the material is pushed or pulled.”
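    As a rough geometric illustration of the cut pattern described above — parallel slits clipped to a circular boundary — one might compute the slit endpoints as follows. This is only an illustrative sketch, not the authors’ inverse-design model:

    ```python
    import numpy as np

    def parallel_slits_in_circle(radius=1.0, n_slits=9, margin=0.1):
        """Endpoints of horizontal slits cut across a circular 2D sheet.

        Each slit spans the chord of the circle at its height, shortened by
        `margin` at both ends so the sheet stays in one connected piece.
        Returns a list of ((x0, y), (x1, y)) segments.
        """
        slits = []
        for y in np.linspace(-0.8 * radius, 0.8 * radius, n_slits):
            half_chord = np.sqrt(radius**2 - y**2)  # circular boundary at height y
            x0, x1 = -half_chord + margin, half_chord - margin
            if x1 > x0:                             # discard degenerate slits
                slits.append(((x0, y), (x1, y)))
        return slits

    for (x0, y), (x1, _) in parallel_slits_in_circle():
        print(f"slit at y={y:+.2f}: x from {x0:+.2f} to {x1:+.2f}")
    ```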
    “Our technique is quite a bit simpler than previous techniques for converting 2D materials into curved 3D structures, and it allows designers to create a wide variety of customized structures from 2D materials,” says Jie Yin, corresponding author of the paper and an associate professor of mechanical and aerospace engineering at NC State.
    The researchers demonstrated the utility of their technique by creating grippers capable of grabbing and lifting objects ranging from egg yolks to a human hair.
    “We’ve shown that our technique can be used to create tools capable of grasping and moving even extremely fragile objects,” Yin says.
    “Conventional grippers grasp an object firmly — they grab things by putting pressure on them,” Yin says. “That can pose problems when attempting to grip fragile objects, such as egg yolks. But our grippers essentially surround an object and then lift it — similar to the way we cup our hands around an object. This allows us to ‘grip’ and move even delicate objects, without sacrificing precision.”
    Beyond grippers, the researchers note that there are a host of other potential applications, such as using the technique to design biomedical technologies that conform to the shape of a joint — like the human knee.
    “Think of smart bandages or monitoring devices capable of bending and moving with your knee or elbow,” Yin says.
    “This is proof-of-concept work that shows our technique works,” Yin says. “We’re now in the process of integrating this technique into soft robotics technologies to address industrial challenges. We are also exploring how this technique could be used to create devices that could be used to apply warmth to the human knee, which would have therapeutic applications.
    “We’re open to working with industry partners to explore additional applications and to find ways to move this approach from the lab into practical use.”
    Video of the technology can be found at https://youtu.be/1oEXhKBoYc8.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman.

  • A virtual reality 'Shopping Task' could help test for cognitive decline in adults

    New research from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) at King’s College London suggests that a virtual reality test in which participants “go to the shops” could offer a potentially promising way of effectively assessing functional cognition, the thinking and processing skills needed to accomplish complex everyday activities.
    The research, published in the Journal of Medical Internet Research, measures cognition using “VStore,” a novel virtual reality shopping task that asks participants to take part in tests designed to mirror the real world. Researchers hope that it will be able to test for age-related cognitive decline in the future.
    The trial recruited 142 healthy individuals aged 20-79 years. Each participant was asked to “go to the shops,” first verbally recalling a list of 12 items, before being assessed for the amount of time it took to collect the items, as well as select the corresponding items on a virtual self-checkout machine, pay, and order coffee.
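    For concreteness, the outcome measures described above might be recorded per participant roughly as follows. The field names are hypothetical; the article does not give the study’s actual variable names.

    ```python
    from dataclasses import dataclass

    @dataclass
    class VStoreResult:
        """Hypothetical per-participant record of the VStore task measures."""
        items_recalled: int          # of the 12-item list, recalled verbally
        collection_time_s: float     # time to collect items in the virtual shop
        checkout_time_s: float       # time to select items at the self-checkout
        payment_time_s: float        # time to pay
        coffee_order_time_s: float   # time to order coffee

    # Example record for one participant (values are made up)
    print(VStoreResult(11, 182.0, 64.5, 12.3, 20.1))
    ```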
    Cognition tests, such as those used to measure the deficits present in several neuropsychiatric disorders including Alzheimer’s disease, schizophrenia, and depression, are traditionally time-consuming and onerous. VStore — the technology the researchers used in this study — is designed to overcome these limitations and provide a more accurate, engaging, and cost-effective way to assess a person’s cognitive health.
    The immersive environment (a virtual shop) mirrored the complexity of everyday life and meant that participants were better able to engage brain structures associated with spatial navigation, such as the hippocampus and entorhinal cortex, both of which can be affected in the early stages of Alzheimer’s disease.
    Researchers were able to establish that VStore effectively engages a range of key neuropsychological functions simultaneously, suggesting that functional tasks embedded in virtual reality may engage a greater range of cognitive domains than standard assessments.
    Prof Sukhi Shergill, the study’s lead author from King’s IoPPN and Kent and Medway Medical School (KMMS), said, “Virtual Reality appears to offer us significant advantages over more traditional pen-and-paper methods. The simple act of going to a shop to collect and pay for a list of items is something that we are all familiar with, but also actively engages multiple parts of the brain. Our study suggests that VStore may be suitable for evaluating functional cognition in the future. However, more work needs to be done before we can confirm this.”
    Lilla Porffy, the study’s first author from King’s IoPPN said, “These are promising findings adding to a growing body of evidence showing that virtual reality can be used to measure cognition and related everyday functioning effectively and accurately. The next steps will be to confirm these results and expand research into conditions characterised by cognitive complaints and functional difficulties such as psychosis and Alzheimer’s Disease.”
    This study was possible thanks to funding from the Medical Research Council and the National Institute for Health Research Maudsley Biomedical Research Centre. VStore was designed by Vitae VR.
    Story Source:
    Materials provided by King’s College London.

  • Physicist solves century-old problem of radiation reaction

    A Lancaster physicist has proposed a radical solution to the question of how a charged particle, such as an electron, responds to its own electromagnetic field.
    This question has challenged physicists for over 100 years, but mathematical physicist Dr Jonathan Gratus has suggested an alternative approach, published in the Journal of Physics A, with controversial implications.
    It is well established that if a point charge accelerates it produces electromagnetic radiation. This radiation has both energy and momentum, which must come from somewhere. It is usually assumed that they come from the energy and momentum of the charged particle, damping the motion.
    The history of attempts to calculate this radiation reaction (also known as radiation damping) dates back to Lorentz in 1892. Major contributions were then made by many well-known physicists including Planck, Abraham, von Laue, Born, Schott, Pauli, Dirac and Landau. Active research continues to this day, with many articles published every year.
    The challenge is that, according to Maxwell’s equations, the electric field at the exact point where a point particle sits is infinite. Hence the force on that point particle should also be infinite.
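    Concretely, the Coulomb field of a point charge,

    $$E(r) = \frac{q}{4\pi\epsilon_0 r^2},$$

    diverges as $r \to 0$, i.e. at the particle’s own location.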
    Various methods have been used to renormalise away this infinity. This leads to the well-established Lorentz-Abraham-Dirac equation.
    Unfortunately, this equation has well-known pathological solutions. For example, a particle obeying it may accelerate forever with no external force, or accelerate before any force is applied. There is also a quantum version of radiation damping; ironically, this is one of the few phenomena where the quantum effect occurs at lower energies than the classical one.
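    In the non-relativistic limit, the textbook Abraham-Lorentz form of this damped equation of motion is

    $$m\dot{\mathbf{v}} = \mathbf{F}_{\text{ext}} + m\tau\,\ddot{\mathbf{v}}, \qquad \tau = \frac{q^2}{6\pi\epsilon_0 m c^3} \approx 6 \times 10^{-24}\,\mathrm{s} \text{ for an electron},$$

    and with $\mathbf{F}_{\text{ext}} = 0$ it admits the runaway solution $\dot{\mathbf{v}}(t) = \dot{\mathbf{v}}(0)\,e^{t/\tau}$: acceleration that grows forever with no external force, the first pathology described above.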
    Physicists are actively searching for this effect. Doing so requires colliding very high-energy electrons with powerful laser beams, a challenge because the biggest particle accelerators are not situated near the most powerful lasers. However, firing lasers into plasmas produces high-energy electrons, which can then interact with the laser beam, so only a powerful laser is required. Current results show that quantum radiation reaction does exist.
    The alternative approach is to consider many charged particles, where each particle responds to the fields of all the other charged particles, but not its own. This approach was hitherto dismissed, since it was assumed that it would not conserve energy and momentum.
    However, Dr Gratus shows that this assumption is false, with the energy and momentum of one particle’s radiation coming from the external fields used to accelerate it.
    He said: “The controversial implication of this result is that there need not be classical radiation reaction at all. We may therefore consider the discovery of quantum radiation reaction as similar to the discovery of Pluto, which was found following predictions based on discrepancies in the motion of Neptune. Corrected calculations showed there were no discrepancies. Similarly, radiation reaction was predicted, found and then shown not to be needed.”
    Story Source:
    Materials provided by Lancaster University.

  • Engineers build a molecular framework to bridge experimental and computer sciences for peptide-based materials engineering

    Researchers in the Stephenson School of Biomedical Engineering, Gallogly College of Engineering, at the University of Oklahoma have developed a framework published in Science Advances that solves the challenge of bridging experimental and computer sciences to better predict peptide structures. Peptide-based materials have been used in energy, security and health fields for the past two decades.
    Handan Acar, Ph.D., the Peggy and Charles Stephenson Assistant Professor of Biomedical Engineering at OU, teamed up with Andrew White, Ph.D., an associate professor of chemical engineering at the University of Rochester, to introduce a new strategy to study fundamentals of molecular engineering. Seren Hamsici, a doctoral student in Acar’s lab, is the first author of the study.
    Proteins are responsible for the structure, function and regulation of the body’s organs and tissues. They are formed from amino acids and come together through intermolecular interactions that are essential to how proteins perform different roles in the body. When these protein interactions behave abnormally, medical issues result, such as when proteins clump together to form plaques in the brain that lead to Alzheimer’s disease.
    “In the peptide-engineering field, the general approach is to take those natural proteins and make incremental changes to identify the properties of the end aggregated products, and then find an application for which the identified properties would be useful,” Acar said. “However, there are more than 500 natural and unnatural amino acids. Especially when you consider the size of the peptides, this approach is just not practical.”
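    The combinatorics make this concrete: with an alphabet of roughly 500 amino acids, a peptide just $n$ residues long has $500^{n}$ possible sequences, so even for a short peptide of length 10,

    $$500^{10} \approx 9.8 \times 10^{26},$$

    far more candidates than incremental experimental screening could ever cover.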
    Machine learning has great potential to counter this challenge, but Acar says the complex way peptides assemble and disassemble has prevented artificial intelligence methods from being effective so far.
    “Clearly, computational methods, such as machine learning, are necessary,” she said. “Yet, the peptide aggregation is very complex. It is currently not possible to identify the effects of individual amino acids with computational methods.”
    To counter those challenges, the research team came up with a new approach. They developed a framework that would help bridge materials science and engineering research with computational science to lay the groundwork for artificial intelligence and machine learning advancements.