More stories

  • Cutting surgical robots down to size

    Teleoperated surgical robots are becoming commonplace in operating rooms, but many are massive (sometimes taking up an entire room) and difficult to manipulate. A new collaboration between Harvard’s Wyss Institute and Sony Corporation has created the mini-RCM, a surgical robot the size of a tennis ball that weighs about as much as a penny and that performed significantly better than manually operated tools in delicate mock-surgical procedures.
    Minimally invasive laparoscopic surgery, in which a surgeon uses tools and a tiny camera inserted into small incisions to perform operations, has made surgical procedures safer for both patients and doctors over the last half-century. Recently, surgical robots have started to appear in operating rooms to further assist surgeons by allowing them to manipulate multiple tools at once with greater precision, flexibility, and control than is possible with traditional techniques. However, these robotic systems are extremely large, often taking up an entire room, and their tools can be much larger than the delicate tissues and structures on which they operate.
    A collaboration between Wyss Associate Faculty member Robert Wood, Ph.D. and Robotics Engineer Hiroyuki Suzuki of Sony Corporation has brought surgical robotics down to the microscale by creating a new, origami-inspired miniature remote center of motion manipulator (the “mini-RCM”). The robot is the size of a tennis ball, weighs about as much as a penny, and successfully performed a difficult mock surgical task, as described in a recent issue of Nature Machine Intelligence.
    “The Wood lab’s unique technical capabilities for making micro-robots have led to a number of impressive inventions over the last few years, and I was convinced that it also had the potential to make a breakthrough in the field of medical manipulators as well,” said Suzuki, who began working with Wood on the mini-RCM in 2018 as part of a Harvard-Sony collaboration. “This project has been a great success.”
    A mini robot for micro tasks
    To create their miniature surgical robot, Suzuki and Wood turned to the Pop-Up MEMS manufacturing technique developed in Wood’s lab, in which materials are deposited on top of each other in layers that are bonded together, then laser-cut in a specific pattern that allows the desired three-dimensional shape to “pop up,” as in a children’s pop-up picture book. This technique greatly simplifies the mass-production of small, complex structures that would otherwise have to be painstakingly constructed by hand.

    The team created a parallelogram shape to serve as the main structure of the robot, then fabricated three linear actuators (mini-LAs) to control the robot’s movement: one parallel to the bottom of the parallelogram that raises and lowers it, one perpendicular to the parallelogram that rotates it, and one at the tip of the parallelogram that extends and retracts the tool in use. The result was a robot that is much smaller and lighter than other microsurgical devices previously developed in academia.
    The mini-LAs are themselves marvels in miniature, built around a piezoelectric ceramic material that changes shape when an electrical field is applied. The shape change pushes the mini-LA’s “runner unit” along its “rail unit” like a train on train tracks, and that linear motion is harnessed to move the robot. Because piezoelectric materials inherently deform as they change shape, the team also integrated LED-based optical sensors into the mini-LA to detect and correct any deviations from the desired movement, such as those caused by hand tremors.
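    As a rough illustration of how such sensor-based correction might work, here is a minimal sketch of a proportional feedback loop; the function names, gain, and tolerance are illustrative assumptions, not the authors’ controller:

    ```python
    # Minimal sketch of an optical-sensor feedback loop for a mini-LA.
    # Function names, gain and tolerance are illustrative assumptions,
    # not the controller described in the paper.

    def hold_position(target_um, read_sensor, drive_actuator,
                      kp=0.6, tol_um=0.05, max_steps=1000):
        """Proportional correction: read the LED-based optical sensor and
        command the piezo runner toward the target until within tolerance."""
        for _ in range(max_steps):
            error = target_um - read_sensor()   # deviation, e.g. from tremor
            if abs(error) < tol_um:
                return True                     # on target
            drive_actuator(kp * error)          # corrective displacement
        return False                            # failed to converge
    ```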
    Steadier than a surgeon’s hands
    To mimic the conditions of a teleoperated surgery, the team connected the mini-RCM to a Phantom Omni device, which manipulated the mini-RCM in response to the movements of a user’s hand controlling a pen-like tool. Their first test evaluated a human’s ability to trace a tiny square smaller than the tip of a ballpoint pen while looking through a microscope, either by hand or with the mini-RCM. Using the mini-RCM dramatically improved accuracy, reducing error by 68% compared to manual operation, an especially important gain given the precision required to repair small and delicate structures in the human body.
    Given the mini-RCM’s success on the tracing test, the researchers then created a mock version of a surgical procedure called retinal vein cannulation, in which a surgeon must carefully insert a needle through the eye to inject therapeutics into the tiny veins at the back of the eyeball. They fabricated a silicone tube the same size as the retinal vein (about twice the thickness of a human hair), and successfully punctured it with a needle attached to the end of the mini-RCM without causing local damage or disruption.
    In addition to its efficacy in performing delicate surgical maneuvers, the mini-RCM’s small size provides another important benefit: it is easy to set up and install and, in the case of a complication or electrical outage, the robot can be easily removed from a patient’s body by hand.
    “The Pop-Up MEMS method is proving to be a valuable approach in a number of areas that require small yet sophisticated machines, and it was very satisfying to know that it has the potential to improve the safety and efficiency of surgeries to make them even less invasive for patients,” said Wood, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).
    The researchers aim to increase the force of the robot’s actuators to cover the maximum forces experienced during an operation, and improve its positioning precision. They are also investigating using a laser with a shorter pulse during the machining process, to improve the mini-LAs’ sensing resolution.
    “This unique collaboration between the Wood lab and Sony illustrates the benefits that can arise from combining the real-world focus of industry with the innovative spirit of academia, and we look forward to seeing the impact this work will have on surgical robotics in the near future,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

  • Machines rival expert analysis of stored red blood cell quality

    Each year, nearly 120 million units* of donated blood flow from donor veins into storage bags at collection centres around the world. The fluid is packed, processed and reserved for later use. But once outside the body, stored red blood cells (RBCs) undergo continuous deterioration. By day 42 in most countries, the products are no longer usable.
    For years, labs have used expert microscopic examinations to assess the quality of stored blood. How viable is a unit by day 24? How about day 37? Depending on what technicians’ eyes perceive, answers may vary. This manual process is laborious, complex and subjective.
    Now, after three years of research, a study published in the Proceedings of the National Academy of Sciences unveils two new strategies to automate the process and achieve objective RBC quality scoring — with results that match and even surpass expert assessment.
    The methodologies showcase the potential of combining artificial intelligence with state-of-the-art imaging to solve a longstanding biomedical problem. If standardized, the approach could ensure more consistent, accurate assessments, with increased efficiency and better patient outcomes.
    Trained machines match expert human assessment
    The interdisciplinary collaboration spanned five countries, twelve institutions and nineteen authors, including universities, research institutes, and blood collection centres in Canada, the USA, Switzerland, Germany and the UK. The research was led by computational biologist Anne Carpenter of the Broad Institute of MIT and Harvard, physicist Michael Kolios of Ryerson University’s Department of Physics, and Jason Acker of Canadian Blood Services.

    They first investigated whether a neural network could be taught to “see” in images of RBCs the same six categories of cell degradation that human experts recognize. To generate the vast quantity of images required, imaging flow cytometry played a crucial role, as co-author Joseph Sebastian, then a Ryerson undergraduate working under Kolios, explains.
    “With this technique, RBCs are suspended and flowed through the cytometer, an instrument that takes thousands of images of individual blood cells per second. We can then examine each RBC without handling or inadvertently damaging them, which sometimes happens during microscopic examinations.”
    The researchers used 40,900 cell images to train the neural networks on classifying RBCs into the six categories — in a collection that is now the world’s largest, freely available database of RBCs individually annotated with the various categories of deterioration.
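    As a rough illustration of what such a fully-supervised classifier might look like (a sketch only; the architecture, input size, and framework are our assumptions, not the network described in the paper):

    ```python
    # Illustrative sketch of a six-class image classifier in the spirit of the
    # fully-supervised model above. Architecture and sizes are assumptions,
    # not the authors' network.
    import torch
    import torch.nn as nn

    class RBCClassifier(nn.Module):
        def __init__(self, n_classes=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # assumes 64x64 single-channel cell images -> 32 x 16 x 16 features
            self.head = nn.Sequential(nn.Flatten(),
                                      nn.Linear(32 * 16 * 16, n_classes))

        def forward(self, x):
            return self.head(self.features(x))

    # Training follows the usual supervised recipe: cross-entropy loss over
    # the six expert-annotated degradation categories.
    model = RBCClassifier()
    loss_fn = nn.CrossEntropyLoss()
    ```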
    When tested, the machine learning algorithm achieved 77% agreement with human experts. Although a 23% error rate might sound high, perfectly matching an expert’s judgment in this test is impossible: even human experts agree only 83% of the time. Thus, this fully-supervised machine learning model could effectively replace tedious visual examination by humans with little loss of accuracy.
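    The agreement statistic itself is straightforward to compute; a minimal sketch, not the study’s evaluation code:

    ```python
    # Percent agreement between machine and expert labels, the statistic
    # quoted above; a minimal sketch, not the study's evaluation code.
    def percent_agreement(machine_labels, expert_labels):
        assert len(machine_labels) == len(expert_labels)
        matches = sum(m == e for m, e in zip(machine_labels, expert_labels))
        return 100.0 * matches / len(machine_labels)

    # e.g. percent_agreement(preds, labels) -> 77.0 in the reported test
    ```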
    Even so, the team wondered: could a different strategy push the upper limits of accuracy further?

    Machines surpass human vision, detect cellular subtleties
    In the study’s second part, the researchers avoided human input altogether and devised an alternative, “weakly-supervised” deep learning model in which neural networks learned about RBC degradation on their own.
    Instead of being taught the six visual categories used by experts, the machines learned solely by analyzing over one million images of RBCs, unclassified and ordered only by blood storage duration. Eventually, the machines correctly discerned features in single RBCs that correspond to the descent from healthy to unhealthy cells.
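    One way to picture this weak supervision: train a network whose only target is the storage day, then read its output as a degradation score. The sketch below illustrates that framing; the architecture and the regression loss are our assumptions, not the authors’ model:

    ```python
    # Sketch of weak supervision by storage time: the network's only label is
    # the storage day of each cell image, and its output is later read as a
    # degradation score. Architecture and loss are illustrative assumptions.
    import torch
    import torch.nn as nn

    class StorageAgeRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(16, 1)  # predicted storage day (weak label)

        def forward(self, x):
            return self.head(self.backbone(x))

    loss_fn = nn.MSELoss()  # no expert categories appear anywhere in the loss
    ```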
    “Allowing the computer to teach itself the progression of stored red blood cells as they degrade is a really exciting development,” says Carpenter, “particularly because it can capture more subtle changes in cells that humans don’t recognize.”
    When validated against other relevant measures, such as a biochemical assay, the weakly-supervised machines predicted RBC quality better than the current six-category assessment method used by experts.
    Deep learning strategies: Blood quality and beyond
    Further training is still needed before the model is ready for clinical testing, but the outlook is promising. The fully-supervised machine learning model could soon automate and streamline the current manual method, minimizing sample handling, discrepancies and procedural errors in blood quality assessments.
    The second, alternative weakly-supervised framework may further eliminate human subjectivity from the process. Objective, accurate blood quality predictions would allow doctors to better personalize blood products to patients. Beyond stored blood, the time-based deep learning strategy may be transferable to other applications involving chronological progression, such as the spread of cancer.
    “People used to ask what the alternative is to the manual process,” says Kolios. “Now, we’ve developed an approach that integrates cutting-edge developments from several disciplines, including computational biology, transfusion medicine, and medical physics. It’s a testament to how technology and science are now interconnecting to solve today’s biomedical problems.”
    *Data reported by the World Health Organization

  • Storing information in antiferromagnetic materials

    Researchers at Mainz University were able to show that information can be stored in antiferromagnetic materials and to measure the efficiency of the writing operation.
    We all store more and more information, while our devices are supposed to get smaller and smaller. However, as miniaturization continues, conventional silicon-based electronics is rapidly reaching fundamental physical limits, such as the minimum bit size or the number of electrons required to store information. Spintronics, and antiferromagnetic materials in particular, offers an alternative: information is stored not only in an electron’s charge but also in its spin, which carries magnetic information. In this way, twice as much information can be stored in the same space. So far, however, it has been controversial whether it is even possible to store information electrically in antiferromagnetic materials.
    Physicists unveil the potential of antiferromagnetic materials
    Researchers at Johannes Gutenberg University Mainz (JGU), in collaboration with Tohoku University in Sendai, Japan, have now been able to prove that it works: “We were not only able to show that information storage in antiferromagnetic materials is fundamentally possible, but also to measure how efficiently information can be written electrically in insulating antiferromagnetic materials,” said Dr. Lorenzo Baldrati, Marie Skłodowska-Curie Fellow in Professor Mathias Kläui’s group at JGU. For their measurements, the researchers used the antiferromagnetic insulator cobalt oxide (CoO), a model material that paves the way for applications. The result: currents are much more efficient than magnetic fields at manipulating antiferromagnetic materials. This discovery opens the way toward applications ranging from smart cards that cannot be erased by external magnetic fields to ultrafast computers, thanks to the superior properties of antiferromagnets over ferromagnets. The research paper was recently published in Physical Review Letters. In further steps, the researchers at JGU want to investigate how quickly information can be written and how small the written memory regions can be.
    Active German-Japanese exchange
    “Our longstanding collaboration with the leading university in the field of spintronics, Tohoku University, has generated another exciting piece of work,” emphasized Professor Mathias Kläui. “With the support of the German Academic Exchange Service (DAAD), the Graduate School of Excellence Materials Science in Mainz, and the German Research Foundation, we initiated a lively exchange between Mainz and Sendai, working with theory groups at the forefront of this topic. We have opportunities for first joint degrees between our universities, which is noticed by students. This is a next step in the formation of an international team of excellence in the burgeoning field of antiferromagnetic spintronics.”

    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  • Contagion model predicts flooding in urban areas

    Inspired by the same modeling and mathematical laws used to predict the spread of pandemics, researchers at Texas A&M University have created a model to accurately forecast the spread and recession of floodwaters in urban road networks, a simple yet powerful mathematical approach to a complex problem.
    “We were inspired by the fact that the spread of epidemics and pandemics in communities has been studied by people in health sciences and epidemiology and other fields, and they have identified some principles and rules that govern the spread process in complex social networks,” said Dr. Ali Mostafavi, associate professor in the Zachry Department of Civil and Environmental Engineering. “So we ask ourselves, are these spreading processes the same for the spread of flooding in cities? We tested that, and surprisingly, we found that the answer is yes.”
    The findings of this study were recently published in the journal Scientific Reports.
    The contagion model, Susceptible-Exposed-Infected-Recovered (SEIR), is used to mathematically model the spread of infectious diseases. In relation to flooding, Mostafavi and his team integrated the SEIR model with the network spread process in which the probability of flooding of a road segment depends on the degree to which the nearby road segments are flooded.
    In the context of flooding, susceptible is a road that can be flooded because it is in a flood plain; exposed is a road that has flooding due to rainwater or overflow from a nearby channel; infected is a road that is flooded and cannot be used; and recovered is a road where the floodwater has receded.
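    As an illustration of how one such network SEIR step might look in code (a minimal sketch; the state names follow the article, while the transition rates and the neighbor-fraction rule are our assumptions, not the paper’s fitted model):

    ```python
    # Minimal sketch of a network SEIR step for road flooding: a susceptible
    # road's chance of becoming exposed grows with the fraction of flooded
    # neighbors. Rates beta, sigma, gamma are illustrative, not fitted values.
    import random

    S, E, I, R = "S", "E", "I", "R"

    def seir_step(state, neighbors, beta=0.3, sigma=0.5, gamma=0.1):
        """state: {road: S/E/I/R}; neighbors: {road: [adjacent roads]}."""
        new_state = dict(state)
        for road, s in state.items():
            if s == S and neighbors[road]:
                flooded = sum(state[n] == I for n in neighbors[road])
                if random.random() < beta * flooded / len(neighbors[road]):
                    new_state[road] = E      # exposed: water arriving
            elif s == E and random.random() < sigma:
                new_state[road] = I          # infected: flooded, impassable
            elif s == I and random.random() < gamma:
                new_state[road] = R          # recovered: water receded
        return new_state
    ```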
    The research team verified the model against high-resolution historical data of road flooding in Harris County during Hurricane Harvey in 2017. The results show that the model can monitor and predict the evolution of flooded roads over time.

    “The power of this approach is it offers a simple and powerful mathematical approach and provides great potential to support emergency managers, public officials, residents, first responders and other decision makers for flood forecast in road networks,” Mostafavi said.
    The proposed model achieves good precision and recall in predicting the spatial spread of flooded roads.
    “If you look at the flood monitoring system of Harris County, it can show you if a channel is overflowing now, but they’re not able to predict anything about the next four hours or next eight hours. Also, the existing flood monitoring systems provide limited information about the propagation of flooding in road networks and the impacts on urban mobility. But our models, and this specific model for the road networks, is robust at predicting the future spread of flooding,” he said. “In addition to flood prediction in urban networks, the findings of this study provide very important insights about the universality of the network spread processes across various social, natural, physical and engineered systems; this is significant for better modeling and managing cities, as complex systems.”
    The only limitation to this flood prediction model is that it cannot identify where the initial flooding will begin, but Mostafavi said there are other mechanisms in place such as sensors on flood gauges that can address this.
    “As soon as flooding is reported in these areas, we can use our model, which is very simple compared to hydraulic and hydrologic models, to predict the flood propagation in future hours. The forecast of road inundations and mobility disruptions is critical to inform residents to avoid high-risk roadways and to enable emergency managers and responders to optimize relief and rescue in impacted areas based on predicted information about road access and mobility. This forecast could be the difference between life and death during crisis response,” he said.
    Civil engineering doctoral student and graduate research assistant Chao Fan led the analysis and modeling of the Hurricane Harvey data, along with Xiangqi (Alex) Jiang, a graduate student in computer science, who works in Mostafavi’s UrbanResilience.AI Lab.
    “By doing this research, I realize the power of mathematical models in addressing engineering problems and real-world challenges. This research expands my research capabilities and will have a long-term impact on my career,” Fan said. “In addition, I am also very excited that my research can contribute to reducing the negative impacts of natural disasters on infrastructure services.”

    Story Source:
    Materials provided by Texas A&M University. Original written by Alyson Chapman. Note: Content may be edited for style and length.

  • Beam me up: Researchers use 'behavioral teleporting' to study social interactions

    Teleporting is a science fiction trope often associated with Star Trek. But a different kind of teleporting is being explored at the NYU Tandon School of Engineering, one that could let researchers investigate the very basis of social behavior, study interactions between invasive and native species to preserve natural ecosystems, explore predator/prey relationships without posing a risk to the welfare of the animals, and even fine-tune human/robot interfaces.
    The team, led by Maurizio Porfiri, Institute Professor at NYU Tandon, devised a novel approach to getting physically separated fish to interact with each other, leading to insights about what kinds of cues influence social behavior.
    The innovative system, called “behavioral teleporting” — the transfer of the complete inventory of behaviors and actions (ethogram) of a live zebrafish onto a remotely located robotic replica — allowed the investigators to independently manipulate multiple factors underpinning social interactions in real-time. The research, “Behavioral teleporting of individual ethograms onto inanimate robots: experiments on social interactions in live zebrafish,” appears in the Cell Press journal iScience.
    The team, including Mert Karakaya, a Ph.D. candidate in the Department of Mechanical and Aerospace Engineering at NYU Tandon, and Simone Macrì of the Centre for Behavioral Sciences and Mental Health, Istituto Superiore di Sanità, Rome, devised a setup consisting of two separate tanks, each containing one fish and one robotic replica. Within each tank, the live fish of the pair swam with the zebrafish replica matching the morphology and locomotory pattern of the live fish located in the other tank.
    An automated tracking system scored each of the live subjects’ locomotory patterns, which were, in turn, used to control the robotic replica swimming in the other tank via an external manipulator. Therefore, the system allowed the transfer of the complete ethogram of each fish across tanks within a fraction of a second, establishing a complex robotics-mediated interaction between two remotely-located live animals. By independently controlling the morphology of these robots, the team explored the link between appearance and movements in social behavior.
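    A minimal sketch of what this closed-loop transfer might look like in code is below; the track_fish and move_replica callables are hypothetical stand-ins for the tracking system and the external manipulator, not the authors’ software:

    ```python
    # Sketch of the cross-tank "copy/paste" loop: each frame, the tracked pose
    # of the live fish in one tank drives the replica in the other tank.
    # track_fish_* and move_replica_* are hypothetical stand-ins for the
    # tracking system and the external manipulator.
    def behavioral_teleport(track_fish_a, move_replica_b,
                            track_fish_b, move_replica_a, n_frames):
        for _ in range(n_frames):
            pose_a = track_fish_a()   # position, heading, speed of fish A
            pose_b = track_fish_b()
            move_replica_b(pose_a)    # replica in tank B replays fish A
            move_replica_a(pose_b)    # replica in tank A replays fish B
    ```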
    The investigators found that the replica teleported the fish motion in almost all trials (85% of the total experimental time), with a 95% accuracy at a maximum time lag of less than two-tenths of a second. The high accuracy in the replication of fish trajectory was confirmed by equivalent analysis on speed, turn rate, and acceleration.

    Porfiri explained that the behavioral teleporting system avoids the limits of typical modeling using robots.
    “Since existing approaches involve the use of a mathematical representation of social behavior for controlling the movements of the replica, they often lead to unnatural behavioral responses of live animals,” he said. “But because behavioral teleporting ‘copy/pastes’ the behavior of a live fish onto robotic proxies, it confers a high degree of precision with respect to such factors as position, speed, turn rate, and acceleration.”
    Porfiri’s previous research proving robots are viable as behavior models for zebrafish showed that schools of zebrafish could be made to follow the lead of their robotic counterparts.
    “In humans, social behavior unfolds in actions, habits, and practices that ultimately define our individual life and our society,” added Macrì. “These depend on complex processes, mediated by individual traits — baldness, height, voice pitch, and outfit, for example — and behavioral feedback, vectors that are often difficult to isolate. This new approach demonstrates that we can isolate influences on the quality of social interaction and determine which visual features really matter.”
    The research included experiments to understand the asymmetric relationship between large and small fish and identify leader/follower roles, in which a large fish swam with a small replica that mirrored the behavior of the small fish positioned in the other tank and vice-versa.

    Karakaya said the team was surprised to find that the smaller — not larger — fish “led” the interactions.
    “There are no strongly conclusive results on why that could be, but one reason might be due to the ‘curious’ nature of the smaller individuals to explore a novel space,” he said. “In known environments, large fish tend to lead; however, in new environments larger and older animals can be cautious in their approach, whereas the smaller and younger ones could be ‘bolder.'”
    The method also led to the discovery that interaction between fish was not determined by locomotor patterns alone, but also by appearance.
    “It is interesting to see that, as is the case with our own species, there is a relationship between appearance and social interaction,” he added.
    Karakaya added that this could serve as an important tool for human interactions in the near future, whereby, through the closed-loop teleporting, people could use robots as proxies of themselves.
    “One example would be the colonies on Mars, where experts from Earth could use humanoid robots as an extension of themselves to interact with the environment and people there. This would provide easier and more accurate medical examination, improve human contact, and reduce isolation. Detailed studies on the behavioral and psychological effects of these proxies must be completed to better understand how these techniques can be implemented into daily life.”
    This work was supported by the National Science Foundation, the National Institute on Drug Abuse, and the Office of Behavioral and Social Sciences Research.

  • Robo-teammate can detect, share 3D changes in real-time

    Something is different, and you can’t quite put your finger on it. But your robot can.
    Even small changes in your surroundings could indicate danger. Imagine a robot could detect those changes, and a warning could immediately alert you through a display in your eyeglasses. That is what U.S. Army scientists are developing with sensors, robots, real-time change detection and augmented reality wearables.
    Army researchers demonstrated in a real-world environment the first human-robot team in which the robot detects physical changes in 3D and shares that information with a human in real time through augmented reality; the human can then evaluate that information and decide on follow-on action.
    “This could let robots inform their Soldier teammates of changes in the environment that might be overlooked by or not perceptible to the Soldier, giving them increased situational awareness and offset from potential adversaries,” said Dr. Christopher Reardon, a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This could detect anything from camouflaged enemy soldiers to IEDs.”
    Part of the lab’s effort in contextual understanding through the Artificial Intelligence for Mobility and Maneuver Essential Research Program, this research explores how to provide contextual awareness to autonomous robotic ground platforms in maneuver and mobility scenarios. Researchers also participate with international coalition partners in the Technical Cooperation Program’s Contested Urban Environment Strategic Challenge, or TTCP CUESC, events to test and evaluate human-robot teaming technologies.
    Most academic research in the use of mixed reality interfaces for human-robot teaming does not enter real-world environments, but rather uses external instrumentation in a lab to manage the calculations necessary to share information between a human and robot. Likewise, most engineering efforts to provide humans with mixed-reality interfaces do not examine teaming with autonomous mobile robots, Reardon said.
    Reardon and his colleagues from the Army and the University of California, San Diego, published their research, “Enabling Situational Awareness via Augmented Reality of Autonomous Robot-Based Environmental Change Detection,” at the 12th International Conference on Virtual, Augmented, and Mixed Reality, part of the International Conference on Human-Computer Interaction.
    The research paired a small autonomous mobile ground robot, equipped with laser ranging sensors, known as LIDAR, to build a representation of the environment, with a human teammate wearing augmented reality glasses. As the robot patrolled the environment, it compared its current and previous readings to detect changes in the environment. Those changes were then instantly displayed in the human’s eyewear to determine whether the human could interpret the changes in the environment.
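    A minimal sketch of this kind of patrol-to-patrol comparison follows, assuming the LIDAR scans have already been fused into voxel occupancy grids; the grid representation and threshold are our assumptions, not the Army lab’s implementation:

    ```python
    # Sketch of patrol-to-patrol change detection on voxel occupancy grids
    # built from LIDAR scans; the representation and threshold are assumptions.
    import numpy as np

    def detect_changes(prev_grid, curr_grid, threshold=0.5):
        """Return (x, y, z) indices of voxels whose occupancy changed between
        patrols, to be highlighted in the teammate's augmented reality view."""
        changed = np.abs(curr_grid - prev_grid) > threshold
        return np.argwhere(changed)
    ```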
    In studying communication between the robot and human team, the researchers tested LIDAR sensors of different resolutions on the robot to collect measurements of the environment and detect changes. When those changes were shared with the human through augmented reality, the researchers found that human teammates could interpret changes that even the lower-resolution LIDARs detected. This indicates that, depending on the size of the changes expected, lighter, smaller and less expensive sensors could perform just as well, and run faster in the process.
    This capability has the potential to be incorporated into future Soldier mixed-reality interfaces such as the Army’s Integrated Visual Augmentation System goggles, or IVAS.
    “Incorporating mixed reality into Soldiers’ eye protection is inevitable,” Reardon said. “This research aims to fill gaps by incorporating useful information from robot teammates into the Soldier-worn visual augmentation ecosystem, while simultaneously making the robots better teammates to the Soldier.”
    Future studies will continue to explore how to strengthen the teaming between humans and autonomous agents by allowing the human to interact with the detected changes, which will provide more information to the robot about the context of the change, for example, changes made by adversaries versus natural environmental changes or false positives, Reardon said. This will improve the autonomous context understanding and reasoning capabilities of the robotic platform, such as by enabling the robot to learn and predict what types of changes constitute a threat. In turn, providing this understanding to autonomy will help researchers learn how to improve the teaming of Soldiers with autonomous platforms.

  • The mathematical magic of bending grids

    How can you turn something flat into something three-dimensional? In architecture and design this question often plays an important role. A team of mathematicians from TU Wien (Vienna) has now presented a technique that solves this problem in an amazingly simple way: You choose any curved surface and from its shape you can calculate a flat grid of straight bars that can be folded out to the desired curved structure with a single movement. The result is a stable form that can even carry loads due to its mechanical tension.
    The step into the third dimension
    Suppose you screw ordinary straight bars together at right angles to form a grid, so that a completely regular pattern of small squares is created. Such a grid can be distorted: all angles of the grid change simultaneously, parallel bars remain parallel, and the squares become parallelograms. But this does not change the fact that all bars are in the same plane. The structure is still flat.
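    To make the planar case concrete with a small worked formula (our illustration, not the paper’s notation): for a regular grid with bar spacing $s$, the joint in column $i$ and row $j$ of the distorted grid sits at

    \[ p_{ij}(\theta) = \bigl( i\,s + j\,s\cos\theta,\; j\,s\sin\theta \bigr), \]

    so every cell is the same parallelogram with angle $\theta$ and all joints stay in one plane for every $\theta$; only when the bars meet at varying angles does opening the grid force it into the third dimension.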
    The crucial question now is: What happens if the bars are not parallel at the beginning, but are joined together at different angles? “Such a grid can no longer be distorted within the plane,” explains Przemyslaw Musialski. “When you open it up, the bars have to bend. They move out of the plane into the third dimension and form a curved shape.”
    At the Center for Geometry and Computational Design (GCD) within the Institute for Discrete Mathematics and Geometry at TU Wien, Musialski and his team developed a method that can be used to calculate what the flat, two-dimensional grid must look like in order to produce exactly the desired three-dimensional shape when it is unfolded. “Our method is based on findings in differential geometry; it is relatively simple and does not require computationally intensive simulations,” says Stefan Pillwein, first author of the current publication, which was presented at the SIGGRAPH conference and published in the journal ACM Transactions on Graphics.
    Experiments with the laser scanner
    The team then put the mathematical method into practice: the calculated grids were made of wood, screwed together and unfolded. The resulting 3D shapes were measured with a laser scanner, which confirmed that they corresponded closely to the calculated forms.
    The team even produced a mini pavilion roof measuring 3.1 x 2.1 x 0.9 metres. “We wanted to know whether this technology would also work on a large scale — and it worked out perfectly,” says Stefan Pillwein.
    “Transforming a simple 2D grid into a 3D form with a single opening movement not only looks amazing, it has many technical advantages,” says Przemyslaw Musialski. “Such grids are simple and inexpensive to manufacture, they are easy to transport and set up. Our method makes it possible to create even sophisticated shapes, not just simple domes.”
    The structures also have very good static properties: “The curved elements are under tension and have a natural structural stability — in architecture this is called active bending,” explains Musialski. Very large distances can be spanned with very thin rods. This is ideal for architectural applications.

    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

  • Predicting computational power of early quantum computers

    Quantum physicists at the University of Sussex have created an algorithm that speeds up calculations in the early quantum computers currently being developed. It offers a new way to route the ions, or charged atoms, around the quantum computer to boost the efficiency of its calculations.
    The Sussex team have shown how calculations in such a quantum computer can be done most efficiently, by using their new ‘routing algorithm’. Their paper “Efficient Qubit Routing for a Globally Connected Trapped Ion Quantum Computer” is published in the journal Advanced Quantum Technologies.
    The team working on this project was led by Professor Winfried Hensinger and included Mark Webber, Dr Steven Herbert and Dr Sebastian Weidt. The scientists created a new algorithm that regulates traffic within the quantum computer much as traffic is managed in a busy city. In the trapped ion design, qubits can be physically transported over long distances, so they can easily interact with other qubits. The new algorithm means that data can flow through the quantum computer without any ‘traffic jams’, which in turn gives rise to a more powerful quantum computer.
    Quantum computers are expected to be able to solve problems that are too complex for classical computers. Quantum computers use quantum bits (qubits) to process information in a new and powerful way. The particular quantum computer architecture the team analysed first is a ‘trapped ion’ quantum computer, consisting of silicon microchips with individual charged atoms, or ions, levitating above the surface of the chip. These ions are used to store data, where each ion holds one quantum bit of information. Executing calculations on such a quantum computer involves moving around ions, similar to playing a game of Pacman, and the faster and more efficiently the data (the ions) can be moved around, the more powerful the quantum computer will be.
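    To picture what routing means here, the sketch below moves a single ion across the trap graph by breadth-first search while avoiding occupied sites; this is a generic illustration of qubit routing, not the Sussex team’s algorithm:

    ```python
    # Generic sketch: route one ion from start to goal across the trap graph,
    # avoiding sites occupied by other ions, via breadth-first search.
    # This illustrates the routing problem, not the paper's algorithm.
    from collections import deque

    def route_ion(graph, start, goal, occupied):
        """graph: {site: [adjacent sites]}; returns a path of sites or None."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in seen and nxt not in occupied:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # goal unreachable without crossing occupied sites
    ```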
    In the global race to build a large scale quantum computer there are two leading methods, ‘superconducting’ devices which groups such as IBM and Google focus on, and ‘trapped ion’ devices which are used by the University of Sussex’s Ion Quantum Technology group, and the newly emerged company Universal Quantum, among others.
    Superconducting quantum computers have stationary qubits which are typically only able to interact with qubits that are immediately next to each other. Calculations involving distant qubits are done by communicating through a chain of adjacent qubits, a process similar to the telephone game (also referred to as ‘Chinese Whispers’), where information is whispered from one person to another along a line of people. In the same way as in the telephone game, the information tends to get more corrupted the longer the chain is. Indeed, the researchers found that this process will limit the computational power of superconducting quantum computers.
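    As a back-of-envelope illustration of why long chains are costly (our numbers, not the study’s): if each hop along the chain preserves the information with fidelity $f$, the fidelity surviving after $d$ hops is roughly

    \[ F_{\text{chain}} \approx f^{\,d}, \]

    so even $f = 0.99$ per hop leaves only about $0.99^{20} \approx 0.82$ after 20 hops.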
    In contrast, by deploying their new routing algorithm for their trapped ion architecture, the Sussex scientists have discovered that their quantum computing approach can achieve an impressive level of computational power. ‘Quantum Volume’ is a new benchmark which is being used to compare the computational power of near term quantum computers. They were able to use Quantum Volume to compare their architecture against a model for superconducting qubits, where they assumed similar levels of errors for both approaches. They found that the trapped-ion approach performed consistently better than the superconducting qubit approach, because their routing algorithm essentially allows qubits to directly interact with many more qubits, which in turn gives rise to a higher expected computational power.
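    For reference, Quantum Volume has a standard definition (introduced by IBM; the formula below is that convention, not restated in the article):

    \[ \mathrm{QV} = 2^{k}, \]

    where $k$ is the largest integer such that random “model circuits” of width $k$ and depth $k$ run with a heavy-output probability above 2/3.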
    Mark Webber, a doctoral researcher in the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “We can now predict the computational power of the quantum computers we are constructing. Our study indicates a fundamental advantage for trapped ion devices, and the new routing algorithm will allow us to maximize the performance of early quantum computers.”
    Professor Hensinger, director of the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “Indeed, this work is yet another stepping stone towards building practical quantum computers that can solve real world problems.”
    Professor Winfried Hensinger and Dr Sebastian Weidt have recently launched their spin-out company Universal Quantum, which aims to build the world’s first large scale quantum computer. It has attracted backing from some of the world’s most powerful tech investors. The team was the first to publish a blueprint for how to build a large scale trapped ion quantum computer, in 2017.

    Story Source:
    Materials provided by University of Sussex. Original written by Anna Ford. Note: Content may be edited for style and length.