More stories

    Printed robots with bones, ligaments, and tendons

    3D printing is advancing rapidly, and the range of materials that can be used has expanded considerably. While the technology was previously limited to fast-curing plastics, it has now been made suitable for slow-curing plastics as well. These have decisive advantages as they have enhanced elastic properties and are more durable and robust.
    The use of such polymers is made possible by a new technology developed by researchers at ETH Zurich and a US start-up. As a result, researchers can now 3D print complex, more durable robots from a variety of high-quality materials in one go. This new technology also makes it easy to combine soft, elastic, and rigid materials. The researchers can also use it to create delicate structures and parts with cavities as desired.
    Materials that return to their original state
    Using the new technology, researchers at ETH Zurich have succeeded for the first time in printing a robotic hand with bones, ligaments and tendons made of different polymers in one go. “We wouldn’t have been able to make this hand with the fast-curing polyacrylates we’ve been using in 3D printing so far,” explains Thomas Buchner, a doctoral student in the group of ETH Zurich robotics professor Robert Katzschmann and first author of the study. “We’re now using slow-curing thiolene polymers. These have very good elastic properties and return to their original state much faster after bending than polyacrylates.” This makes thiolene polymers ideal for producing the elastic ligaments of the robotic hand.
    In addition, the stiffness of thiolenes can be fine-tuned very well to meet the requirements of soft robots. “Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” Katzschmann explains.
    Scanning instead of scraping
    3D printers typically produce objects layer by layer: nozzles deposit a given material in viscous form at each point; a UV lamp then cures each layer immediately. Previous methods involved a device that scraped off surface irregularities after each curing step. This works only with fast-curing polyacrylates. Slow-curing polymers such as thiolenes and epoxies would gum up the scraper.
    To accommodate the use of slow-curing polymers, the researchers developed 3D printing further by adding a 3D laser scanner that immediately checks each printed layer for any surface irregularities. “A feedback mechanism compensates for these irregularities when printing the next layer by calculating any necessary adjustments to the amount of material to be printed in real time and with pinpoint accuracy,” explains Wojciech Matusik, a professor at the Massachusetts Institute of Technology (MIT) in the US and co-author of the study. This means that instead of smoothing out uneven layers, the new technology simply takes the unevenness into account when printing the next layer.
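    The compensation step amounts to a simple closed loop: scan the layer, compare it with the target geometry, and deposit more material where the surface is low and less where it is high. The function below is an illustrative sketch of that idea, not the published controller; the array shapes, the gain parameter, and the clipping limits are all assumptions made for the example.

```python
import numpy as np

def plan_next_layer(target_height, measured_height, layer_thickness, gain=1.0):
    """Compute a per-point deposition map for the next layer.

    Instead of scraping off excess material, the amount deposited at each
    point is adjusted so that the accumulated error shrinks layer by layer.

    target_height   : (H, W) array, height the part should have after this layer
    measured_height : (H, W) array, height reported by the 3D laser scanner
    layer_thickness : nominal thickness of one printed layer
    gain            : fraction of the error corrected in a single layer
    """
    error = target_height - measured_height       # positive where material is missing
    deposit = layer_thickness + gain * error      # deposit more where low, less where high
    # Nozzles can only add material, and only so much per pass
    return np.clip(deposit, 0.0, 2.0 * layer_thickness)
```

    A low spot thus receives extra resin on the next pass while a high spot receives less, which is exactly the "taking the unevenness into account" behaviour described above.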
    Inkbit, an MIT spin-off, was responsible for developing the new printing technology. The ETH Zurich researchers developed several robotic applications and helped optimise the printing technology for use with slow-curing polymers. The researchers from Switzerland and the US have now jointly published the technology and their sample applications in the journal Nature.
    At ETH Zurich, Katzschmann’s group will use the technology to explore further possibilities and to design even more sophisticated structures and develop additional applications. Inkbit is planning to use the new technology to offer a 3D printing service to its customers and to sell the new printers.

    This 3D printer can watch itself fabricate objects

    With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans.
    These multimaterial 3D printing systems utilize thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used.
    Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer utilizes computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.
    Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.
    In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.
    The researchers used this printer to create complex, robotic devices that combine soft and rigid materials. For example, they made a completely 3D-printed robotic gripper shaped like a human hand and controlled by a set of reinforced, yet flexible, tendons.
    “Our key insight here was to develop a machine vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next,” says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

    He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich; co-corresponding author Robert Katzschmann, PhD ’18, an assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; and others at ETH Zurich and Inkbit. The research will appear in Nature.
    Contact free
    This paper builds on a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.
    With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices.
    They developed a technique, known as vision-controlled jetting, which utilizes four high-frame-rate cameras and two lasers that rapidly and continuously scan the print surface. The cameras capture images as thousands of nozzles deposit tiny droplets of resin.
    The computer vision system converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

    The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
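    Because each nozzle deposits discrete droplets, a continuous correction map has to be quantized into a whole number of droplets per nozzle, with the leftover error carried into the next layer. The snippet below illustrates that quantization step under invented parameters (droplet height, per-pass limits); it is a plausible sketch, not the printer's actual firmware logic.

```python
import numpy as np

def droplets_per_nozzle(height_error_um, droplet_height_um=10.0, max_droplets=3):
    """Quantize a continuous height-error map into integer droplet counts.

    height_error_um : (H, W) array of how far each point is below target, in microns
    Returns the droplet count per nozzle and the residual error that the
    feedback loop carries forward to the next layer.
    """
    ideal = height_error_um / droplet_height_um
    counts = np.clip(np.round(ideal), 0, max_droplets).astype(int)
    residual_um = height_error_um - counts * droplet_height_um  # carried forward
    return counts, residual_um
```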
    “Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting,” says Katzschmann.
    The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.
    Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn’t need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually, and would be smeared by a scraper.
    Superior materials
    The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don’t break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don’t degrade as quickly when exposed to sunlight.
    “These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment,” says Katzschmann.
    The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.
    “We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system’s ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure,” says Buchner.
    The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties.
    “This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn’t be used in 3D printing before,” Matusik says.
    The researchers are now looking at using the system to print with hydrogels, which are used in tissue-engineering applications, as well as silicon materials, epoxies, and special types of durable polymers.
    They also want to explore new application areas, such as printing customizable medical devices, semiconductor polishing pads, and even more complex robots.
    This research was funded, in part, by Credit Suisse, the Swiss National Science Foundation, the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation (NSF).

    New deep learning AI tool helps ecologists monitor rare birds through their songs

    Researchers have developed a new deep learning AI tool that generates life-like birdsongs to train bird identification tools, helping ecologists to monitor rare species in the wild. The findings are presented in the British Ecological Society journal, Methods in Ecology and Evolution.
    Identifying common bird species through their song has never been easier, with numerous phone apps and software available to both ecologists and the public. But what if the identification software has never heard a particular bird before, or only has a small sample of recordings to reference? This is a problem facing ecologists and conservationists monitoring some of the world’s rarest birds.
    To overcome this problem, researchers at the University of Moncton, Canada, have developed ECOGEN, a first-of-its-kind deep learning tool that can generate lifelike bird sounds to enhance the samples of underrepresented species. These can then be used to train audio identification tools used in ecological monitoring, which often have disproportionately more information on common species.
    The researchers found that adding artificial birdsong samples generated by ECOGEN to a birdsong identifier improved the bird song classification accuracy by 12% on average.
    Dr Nicolas Lecomte, one of the lead researchers, said: “Due to significant global changes in animal populations, there is an urgent need for automated tools, such as acoustic monitoring, to track shifts in biodiversity. However, the AI models used to identify species in acoustic monitoring lack comprehensive reference libraries.
    “With ECOGEN, you can address this gap by creating new instances of bird sounds to support AI models. Essentially, for species with limited wild recordings, such as those that are rare, elusive, or sensitive, you can expand your sound library without further disrupting the animals or conducting additional fieldwork.”
    The researchers say that creating synthetic bird songs in this way can contribute to the conservation of endangered bird species and also provide valuable insight into their vocalisations, behaviours and habitat preferences.

    The ECOGEN tool has other potential applications. For instance, it could be used to help conserve extremely rare species, like the critically endangered regent honeyeaters, where young individuals are unable to learn their species’ songs because there aren’t enough adult birds to learn from.
    The tool could benefit other types of animal as well. Dr Lecomte added: “While ECOGEN was developed for birds, we’re confident that it could be applied to mammals, fish (yes they can produce sounds!), insects and amphibians.”
    Alongside its versatility, a key advantage of the ECOGEN tool is its accessibility: it is open source and can be used on even basic computers.
    ECOGEN works by converting real recordings of bird songs into spectrograms (visual representations of sounds) and then generating new AI images from these to increase the dataset for rare species with few recordings. These spectrograms are then converted back into audio to train bird sound identifiers. In this study the researchers used a dataset of 23,784 wild bird recordings from around the world, covering 264 species.
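    The audio-to-spectrogram-and-back pipeline lends itself to a compact sketch. Below, the spectrogram step is ordinary signal processing (a windowed short-time Fourier transform), while `generator` is a stand-in callback for the trained generative model, which is not reproduced here; the FFT sizes are arbitrary example values.

```python
import numpy as np

def spectrogram(audio, n_fft=256, hop=128):
    """Short-time Fourier transform magnitude: the 'visual representation
    of sound' that the generative model is trained on."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq, time)

def augment_rare_species(recordings, generator, n_synthetic):
    """Expand a rare species' training set with generated spectrograms.

    `generator` maps the list of real spectrograms to one new, life-like
    spectrogram; here it is an assumed interface, not the published model.
    """
    real = [spectrogram(a) for a in recordings]
    synthetic = [generator(real) for _ in range(n_synthetic)]
    return real + synthetic
```

    A classifier trained on the returned list sees both the scarce real examples and the synthetic ones, which is the augmentation that yielded the reported accuracy gain.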

    New water treatment method can generate green energy

    Researchers from ICIQ in Spain have designed micromotors that move around on their own to purify wastewater. The process creates ammonia, which can serve as a green energy source. Now, an AI method developed at the University of Gothenburg will be used to tune the motors to achieve the best possible results.
    Micromotors have emerged as a promising tool for environmental remediation, largely due to their ability to autonomously navigate and perform specific tasks on a microscale. The micromotor consists of a tube made of silicon and manganese dioxide in which chemical reactions cause the release of bubbles from one end. These bubbles act as a motor that sets the tube in motion.
    Researchers from the Institute of Chemical Research of Catalonia (ICIQ) have built a micromotor covered with the chemical compound laccase, which accelerates the conversion of urea found in polluted water into ammonia when it comes into contact with the motor.
    Green energy source
    “This is an interesting discovery. Today, water treatment plants have trouble breaking down all the urea, which results in eutrophication when the water is released. This is a serious problem in urban areas in particular,” says Rebeca Ferrer, a PhD student in Dr. Katherine Villa’s group at ICIQ.
    Converting urea into ammonia offers other advantages as well. If you can extract the ammonia from the water, you also have a source of green energy as ammonia can be converted into hydrogen.
    There is a great deal of development work to be done, with the bubbles produced by the micromotors posing a problem for researchers.

    “We need to optimise the design so that the tubes can purify the water as efficiently as possible. To do this, we need to see how they move and how long they continue working, but this is difficult to see under a microscope because the bubbles obscure the view,” Ferrer explains.
    Much development work remains
    However, thanks to an AI method developed by researchers at the University of Gothenburg, it is possible to estimate the movements of the micromotors under a microscope. Machine learning enables several motors in the liquid to be monitored simultaneously.
    “If we cannot monitor the micromotor, we cannot develop it. Our AI works well in a laboratory environment, which is where the development work is currently under way,” says Harshith Bachimanchi, a PhD student at the Department of Physics, University of Gothenburg.
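    Monitoring several motors at once reduces, at its core, to linking per-frame detections into trajectories. The published approach uses machine learning; the sketch below shows the classical nearest-neighbour linking step instead, as a simplified stand-in, with all parameter names invented for the example.

```python
import numpy as np

def link_tracks(detections_per_frame, max_step):
    """Link per-frame detections into trajectories by nearest-neighbour
    matching, so several micromotors can be followed simultaneously.

    detections_per_frame : list of (N_i, 2) arrays of (x, y) positions
    max_step             : largest plausible displacement between frames
    """
    tracks = [[tuple(p)] for p in detections_per_frame[0]]
    for detections in detections_per_frame[1:]:
        used = set()  # each detection may extend at most one track
        for track in tracks:
            last = np.asarray(track[-1])
            dists = np.linalg.norm(detections - last, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_step and j not in used:
                track.append(tuple(detections[j]))
                used.add(j)
    return tracks
```

    The learned detector's advantage over this baseline is precisely the situation described above: it can still localise motors when bubbles obscure or distort their appearance.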
    The researchers have trouble saying how long it will be before urban water treatment plants can also become energy producers. Much development work remains, including on the AI method, which needs to be modified to work in large-scale trials.
    “Our goal is to tune the motors to perfection,” Bachimanchi concludes.

    When we feel things that are not there

    Virtual reality (VR) is not only a technology for games and entertainment, but also has potential in science and medicine. Researchers at Ruhr University Bochum, Germany, have now gained new insights into human perception with the help of VR. They used virtual reality scenarios in which subjects touched their own bodies with a virtual object. To the researchers’ surprise, this led to a tingling sensation at the spot where the avatarized body was touched. This effect occurred even though there was no real physical contact between the virtual object and the body. The scientists led by Dr. Artur Pilacinski and Professor Christian Klaes from the Department of Neurotechnology describe this phenomenon as a phantom touch illusion. They published their results in the journal Scientific Reports of the Nature Publishing Group in September 2023.
    “People in virtual reality sometimes have the feeling that they are touching things, although they are actually only encountering virtual objects,” says first author Artur Pilacinski from the Knappschaftskrankenhaus Bochum Langendreer, University Clinic of Ruhr University Bochum, explaining the origin of the research question. “We show that the phantom touch illusion is described by most subjects as a tingling or prickling, electrifying sensation or as if the wind was passing through their hand.”
    Body sensation arises from complex combination of different sensory perceptions
    The neuroscientists wanted to understand what is behind this phenomenon and find out which processes in the brain and body play a role in it. They observed that the phantom touch illusion also occurred when the subjects touched parts of their bodies that were not visible in virtual reality. Second author Marita Metzler adds: “This suggests that human perception and body sensation are not only based on vision, but on a complex combination of many sensory perceptions and the internal representation of our body.”
    This study involved 36 volunteers wearing VR glasses. First, they got used to the VR environment by moving around and touching virtual objects. Then they were given the task of touching their hand in the virtual environment with a virtual stick.
    Comparison between virtual and suggested touch sensations
    Participants were asked if they felt anything. If not, they were allowed to continue touching and were asked again later. If they felt sensations, they were asked to describe them and rate their intensity at different locations on the hand. This process was repeated for both hands. A majority of participants consistently described the sensation as “tingling.”

    In a control experiment, the researchers investigated whether similar sensations could be perceived without visual contact with virtual objects, purely as a result of the demands of the experimental situation. Here, a small laser pointer was used instead of virtual objects to touch the hand. This control experiment did not produce a phantom touch, suggesting that the illusion is specific to virtual touch.
    The discovery of the phantom touch illusion opens up new possibilities for further research into human perception and could also be applied in the fields of virtual reality and medicine. Christian Klaes, member of the Research Department of Neuroscience at Ruhr University, says: “It could even help to deepen the understanding of neurological diseases and disorders that affect the perception of one’s own body.”
    Further collaboration with the University of Sussex
    The Bochum team plans to continue their research on the phantom touch illusion and the underlying processes. For this reason, a collaboration with the University of Sussex has been started. “It is important to first distinguish between the actual sensations of phantom touch and other cognitive processes that may be involved in reporting such embodied sensations, such as suggestion, or experimental situation demands,” says Artur Pilacinski. “We also want to further explore and understand the neural basis of the phantom touch illusion in collaboration with other partners.”
    The research of Artur Pilacinski and Christian Klaes took place within the Research Department of Neuroscience (RDN). The RDN further develops and consolidates a long-established, outstanding research strength of Ruhr University Bochum in the field of systems neuroscience research.

    Individual back training machine developed

    Back pain is extremely widespread. According to figures in the most recent 2023 Health Report, issued by the German health insurer DAK, around 18 percent of cases in which employees submit sick notes involve musculoskeletal ailments, above all back complaints. After topping the table of individual diagnoses in 2022, back pain still ranks high, just behind COVID-19 and respiratory ailments. It is pleasing to note that the latest report shows a slight decline in the percentage of back-related conditions in total reported absences, from 6.5% to 5.3%.
    However: “Even young people are reporting back pain in increasing numbers. This trend didn’t just start with the Covid-19 lockdowns,” says Prof. Rainer Burgkart of TUM Klinikum rechts der Isar. In the Burden Disease Incidence Study, conducted in 2020 by the Robert Koch Institute (RKI), with data from over 5,000 patients in Germany, it turned out that almost two thirds of the respondents (61.3%) had experienced back pain in the previous year. Lower back pain affected 55% of women and 48.6% of men, while one in three women (32.6%) and one in five men (22%) suffered from upper back pain. Some years ago the Institute for Health Economics and Management of the Ludwig-Maximilians-Universität (LMU) estimated the economic impact of these ailments at around 50 billion euros. What can be done? “Physiotherapy and targeted muscle and coordination training are highly effective and are often prescribed in case of frequently diagnosed, non-specific back pain,” says Dr. Burgkart, an orthopedic specialist. “However, on completing targeted treatment, most patients slip back into their old patterns of behavior and their back muscles become weaker again.” An invention by TUM and Klinikum rechts der Isar — the GyroTrainer — is designed to promote long-term, tailor-made back exercises in the future.
    GyroTrainer: Algorithm decides on intensity of training
    Prof. Burgkart from Klinikum rechts der Isar, in cooperation with the Munich Institute of Robotic and Machine Intelligence (MIRMI) at TUM, the fitness equipment manufacturer Erhard Peuker GmbH, and the hardware and software specialist B&W Embedded Solutions GmbH, developed the GyroTrainer — a back muscle training device that can be adapted to the abilities of individual users. The work was carried out in a three-year research project. The GyroTrainer is based on a round platform 50 cm in diameter. It can tilt to the front, back and sideways, and can also rotate. It resembles a gyroscope, which is designed to remain balanced in a wide range of configurations and positions.
    Balance board as the starting point
    A similar principle is used in the GyroTrainer. Users step onto the round platform and try to keep their balance. Sensors and electric motors located below the platform register the user’s movements and can tilt and rotate the disk. The device works like a balance board, with the difference that the stiffness can be varied. The challenge is for users to keep their balance. “Setting up the device correctly is not a simple matter of adjusting it for the individual user,” says researcher Elisabeth Jensen from MIRMI. “First we have to find the right stiffness for that person.” If the user can comfortably keep their balance at a given stiffness level for a certain period of time, a learning algorithm decides on the right initial setting for the platform so that it is neither too easy nor too difficult for the person.
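    Finding the stiffness at which a user can just barely keep their balance is a threshold-search problem. The sketch below solves it by bisection over repeated balancing trials; this is a deliberately simplified stand-in for the GyroTrainer's learning algorithm, which is not published in this article, and the stiffness range is an invented normalisation.

```python
def calibrate_stiffness(keeps_balance, s_min=0.0, s_max=1.0, n_trials=8):
    """Bisect toward the lowest stiffness at which the user still balances.

    keeps_balance : callback(stiffness) -> bool, the outcome of one
                    balancing trial at that platform stiffness
    Lower stiffness means a wobblier, harder platform, so a successful
    trial lets us search lower, and a failed one pushes us back up.
    """
    for _ in range(n_trials):
        s = 0.5 * (s_min + s_max)
        if keeps_balance(s):
            s_max = s   # trial was manageable -> try a softer platform
        else:
            s_min = s   # trial too hard -> stiffen it again
    return 0.5 * (s_min + s_max)
```

    The returned value is the "neither too easy nor too difficult" initial setting from which training can start.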
    Gaming concept: strengthening the back by playing a game
    Then the actual training can begin. “Our cooperation partners have developed a computer game where the control comes from the user’s movements,” explains TUM researcher Jensen. It is modelled on the Space Invaders game. The player’s spaceship automatically fires at the invaders at regular intervals while trying to evade incoming shots. “This takes skill and concentration,” explains Jensen. The less rigid the platform setting, the harder it is to maintain stability and steer the spaceship. “It is also possible to add disruption factors,” explains the orthopedics specialist Burgkart. The platform rotates suddenly to the left or right, which makes it even harder for the user to stay balanced. “At the start, the platform feels quite firm under the user’s feet, but gradually becomes more unstable. And finally, for users in very good condition, it starts giving extra pushes,” explains Burgkart. Using electromyography (EMG) sensors, the team confirmed that the system effectively activates the abdominal and back muscles that are important for spinal stability, and that the activity becomes even more challenging with the rotational movement. The less rigid the system becomes and the more frequently the sudden rotations occur, the greater the demands on the muscles. “Balancing movements are among the most effective methods,” says Burgkart. He believes that the new training device should be used mainly for preventive purposes, both for primary patients, who have “elevated risk,” and secondary patients, who have suffered from back pain in the past.
    Next steps: from the concept to the product
    After nearly three years of research, it is now clear: the GyroTrainer functions as intended and fulfils its medical purpose. “There are still a few steps to take before it can be used as a product,” says Prof. Burgkart. The most important requirement for the future: the researchers want the device — which for safety reasons still has to be operated by TUM researchers — to be suitable for use without a physiotherapist or trainer. They also want it to be capable of adjusting dynamically to the ability of the individual user. The GyroTrainer already determines the individual stiffness via approximations and can make adjustments at any time using the measured data. In the future, the artificial intelligence function of the device will work as an independent, secure logical system to set the initial rigidity and select the difficulty level of the corresponding game options. It will also be able to make adjustments based on how the user is feeling on the day, fatigue levels and personal training progress. A final important requirement for the new back trainer: it has to fit into any living room. Prof. Burgkart’s vision: “The machine has to be mobile so that people can train on a regular basis without having to go to a physiotherapist.”

    New twist on AI makes the most of sparse sensor data

    An innovative approach to artificial intelligence (AI) enables reconstructing a broad field of data, such as overall ocean temperature, from a small number of field-deployable sensors using low-powered “edge” computing, with broad applications across industry, science and medicine.
    “We developed a neural network that allows us to represent a large system in a very compact way,” said Javier Santos, a Los Alamos National Laboratory researcher who applies computational science to geophysical problems. “That compactness means it requires fewer computing resources compared to state-of-the-art convolutional neural network architectures, making it well-suited to field deployment on drones, sensor arrays and other edge-computing applications that put computation closer to its end use.”
    Novel AI approach boosts computing efficiency
    Santos is first author of a paper published by a team of Los Alamos researchers in Nature Machine Intelligence on the novel AI technique, which they dubbed Senseiver. The work, which builds on an AI model called Perceiver IO developed by Google, applies the techniques of natural-language models such as ChatGPT to the problem of reconstructing information about a broad area — such as the ocean — from relatively few measurements.
    The team realized the model would have broad application because of its efficiency. “Using fewer parameters and less memory requires fewer central processing unit cycles on the computer, so it runs faster on smaller computers,” said Dan O’Malley, a coauthor of the paper and Los Alamos researcher who applies machine learning to geoscience problems.
    In a first in the published literature, Santos and his Los Alamos colleagues validated the model by demonstrating its effectiveness on real-world sets of sparse data — meaning information taken from sensors that cover only a tiny portion of the field of interest — and on complex data sets of three-dimensional fluids.
    In a demonstration of the real-world utility of the Senseiver, the team applied the model to a National Oceanic and Atmospheric Administration sea-surface-temperature dataset. The model was able to integrate a multitude of measurements taken over decades from satellites and sensors on ships. From these sparse point measurements, the model forecast temperatures across the entire body of the ocean, which provides information useful to global climate models.
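    The core operation, reconstructing a field value at an arbitrary query location from a handful of sensor readings, can be illustrated with a fixed-weight attention readout: query points attend over sensor positions, weighting nearby sensors more heavily. This is a toy analogue of the Senseiver's learned attention, not its architecture; the distance-based logits and the `scale` parameter are assumptions for the example.

```python
import numpy as np

def attention_readout(query_xy, sensor_xy, sensor_values, scale=10.0):
    """Reconstruct field values at query points from sparse sensor readings.

    query_xy      : (Q, 2) array of locations to reconstruct
    sensor_xy     : (S, 2) array of sensor locations
    sensor_values : (S,) array of sensor measurements
    """
    # (Q, S) matrix of negative squared distances serves as attention logits
    d2 = ((query_xy[:, None, :] - sensor_xy[None, :, :]) ** 2).sum(-1)
    logits = -scale * d2
    # Numerically stable softmax over the sensor axis
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ sensor_values  # (Q,) reconstructed values
```

    In the trained model the attention weights are learned rather than fixed by distance, which is what lets it capture structure (currents, fronts) that pure spatial interpolation misses.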

    Bringing AI to drones and sensor networks
    The Senseiver is well-suited to a variety of projects and research areas of interest to Los Alamos.
    “Los Alamos has a wide range of remote sensing capabilities, but it’s not easy to use AI because models are too big and don’t fit on devices in the field, which leads us to edge computing,” said Hari Viswanathan, Los Alamos National Laboratory Fellow, environmental scientist and coauthor of the paper about the Senseiver. “Our work brings the benefits of AI to drones, networks of field-based sensors and other applications currently beyond the reach of cutting-edge AI technology.”
    The AI model will be particularly useful in the Lab’s work identifying and characterizing orphaned wells. The Lab leads the Department of Energy-funded Consortium Advancing Technology for Assessment of Lost Oil & Gas Wells (CATALOG), a federal program tasked with locating and characterizing undocumented orphaned wells and measuring their methane emissions. Viswanathan is the lead scientist of CATALOG.
    The approach offers improved capabilities for large, practical applications such as self-driving cars, remote modeling of assets in oil and gas, medical monitoring of patients, cloud gaming, content delivery and contaminant tracing.

    Keep it secret: Cloud data storage security approach taps quantum physics

    Distributed cloud storage is a hot topic for security researchers around the globe pursuing secure data storage, and a team in China is now merging quantum physics with mature cryptography and storage techniques to achieve a cost-effective cloud storage solution.
    Shamir’s secret sharing is a well-known key distribution algorithm. It involves distributing private information to a group so that “the secret” can be revealed only when a majority pools their knowledge. It’s common to combine quantum key distribution (QKD) and Shamir’s secret sharing algorithm for secure storage at the utmost security level. But such solutions tend to bring substantial cost baggage, including significant cloud storage space requirements.
    In AIP Advances, the team presents its method, which uses quantum random numbers as encryption keys, disperses the keys via Shamir’s secret sharing algorithm, applies erasure coding within ciphertext, and securely transmits the data through QKD-protected networks to distributed clouds.
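    Shamir's scheme itself fits in a few lines: the secret (here, an encryption key) becomes the constant term of a random polynomial over a prime field, each share is one evaluation of that polynomial, and any threshold-sized subset recovers the constant term by Lagrange interpolation at zero. The sketch below is the generic textbook construction, not the team's implementation; the field size is chosen only for illustration.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is done in GF(P)

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares; any `threshold` of them reconstruct it,
    while fewer reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem (P is prime)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

    Dispersing the QKD-derived key this way is what lets the stored ciphertext survive the loss of individual cloud nodes without any single node ever holding the whole key.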
    Their method not only provides quantum security to the entire system but also offers fault tolerance and efficient storage — and this may help speed the adoption of quantum technologies.
    “In essence, our solution is quantum-secure and serves as a practical application of the fusion between quantum and cryptography technologies,” said corresponding author Yong Zhao, vice president of QuantumCTek Co. Ltd., a quantum information technology company. “QKD-generated keys secure both user data uploads to servers and data transmissions to dispersed cloud storage nodes.”
    The team explored whether quantum security services could expand beyond secure data transmission to offer a richer spectrum of quantum security applications such as data storage and processing.
    They came up with a more secure and cost-effective fault-tolerant cloud storage solution. “It not only achieves quantum security but also saves storage space when compared to traditional mirroring methods or ones based on Shamir’s secret sharing, which is commonly used for distributed management of sensitive data,” said Zhao.
    When the team ran the solution through experimental tests spanning encryption/decryption, key preservation, and data storage, it proved to be effective.
    The solution is currently feasible from both technological and engineering perspectives: It meets the requirement for relevant quantum and cryptographic standards to ensure a secure storage solution capable of withstanding the challenges posed by quantum computing.
    “In the future, we plan to drive the commercial implementation of this technology to offer practical services,” said Zhao. “We’ll explore various usage models in multiuser scenarios, and we’re also considering integrating more quantum technologies, such as quantum secret sharing, into cloud storage.”