More stories

  •

    Eco-friendly micro-supercapacitors using fallen leaves?

    A KAIST research team has developed a graphene-inorganic-hybrid micro-supercapacitor made from fallen leaves using femtosecond direct laser writing lithography. Advances in wearable electronic devices go hand in hand with innovations in flexible energy storage. Among the various energy storage devices, micro-supercapacitors have drawn a great deal of interest for their high electrical power density, long lifetimes, and short charging times.
    However, the growing consumption of electronic equipment, together with the short replacement cycles that follow each advance in mobile devices, has led to a sharp rise in waste batteries. The safety and environmental issues involved in collecting, recycling, and processing these waste batteries pose a number of challenges.
    Forests cover about 30 percent of the Earth’s land surface, producing a huge amount of fallen leaves. This naturally occurring biomass is abundant, biodegradable, and reusable, which makes it an attractive, eco-friendly material. If left unused, however, fallen leaves can contribute to fires and water pollution.
    To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a one-step technique for creating porous 3D graphene micro-electrodes with high electrical conductivity. By irradiating the surface of the leaves with femtosecond laser pulses, the electrodes can be formed under atmospheric conditions without any additional materials or post-treatment. Taking this strategy further, the team also proposed a method for producing flexible micro-supercapacitors.
    They showed that this technique could quickly and easily produce porous graphene-inorganic-hybrid electrodes at a low price, and validated their performance by using the graphene micro-supercapacitors to power an LED and an electronic watch that could function as a thermometer, hygrometer, and timer. These results open up the possibility of the mass production of flexible and green graphene-based electronic devices.
    Professor Young-Jin Kim said, “Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle.”
    This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  •

    Diamond quantum sensor detects 'magnetic flow' excited by heat

    In recent times, sustainable development has been the overarching guiding principle of research concerning environmental issues, energy crises, and information and communication technology. In this regard, spintronic devices have emerged as promising candidates for surpassing conventional technology, which has run into the problem of excess waste heat generation in miniaturized devices. The electron “spin” responsible for the electric and magnetic properties of a material is being used to develop next-generation, energy-efficient, miniature spintronic devices. At the heart of this new technology are “magnons,” quanta of spin-excitation waves, and their detection is key to further progress in the field. Recently, within spintronics, devices based on the interaction between spin and heat flow have emerged as potential candidates for new thermoelectric devices (devices that convert heat to electricity).
    Meanwhile, nitrogen-vacancy (N-V) centers in diamond, point defects consisting of a nitrogen atom paired with an adjacent lattice vacancy, have emerged as a key to high-resolution quantum sensing. Interestingly, it has recently been demonstrated that N-V centers can detect coherent magnons. However, detecting thermally excited magnons with N-V centers is difficult, since thermal magnons have much higher energy than the spin states of N-V centers, which limits their interaction.
    Now, in a collaborative study published in Physical Review Applied, Associate Professor Toshu An from the Japan Advanced Institute of Science and Technology (JAIST) and Dwi Prananto, a PhD graduate of JAIST, along with researchers from Kyoto University and the National Institute for Materials Science in Japan, have successfully detected these energetic magnons in yttrium iron garnet (YIG), a magnetic insulator, using a quantum sensor based on diamond N-V centers.
    To achieve this feat, the team used the interaction between coherent, low-energy magnons and N-V centers as an indirect way to detect the thermally excited magnons. As it turns out, the current produced by thermal magnons modifies the low-energy magnons by exerting a torque on them, which can be picked up by the N-V centers. Therefore, the method provides a way to detect thermal magnons by observing the changes in the coherent magnons.
    To demonstrate this, the researchers prepared a YIG sample with two gold antennas placed at the ends of its surface and positioned a small diamond sensor at the center of the sample, close to the surface. They then excited low-energy spin waves corresponding to coherent magnons using microwaves, and generated thermal magnons by applying a temperature gradient across the sample. Sure enough, the diamond sensor picked up the changes to the coherent magnons caused by the induced thermal magnon current.
    The ability to detect thermal magnons with N-V centers is particularly advantageous, as Dr. An explains: “Our study provides a detection tool for thermal magnon currents that can be placed locally and over a broad range of distances from spin waves. This is not possible with conventional techniques, which require a relatively large electrode and specific configurations with proximal distance to the spin waves.”
    These findings could not only open up new possibilities in quantum sensing but also pave the way for its integration with spin caloritronics. “Our work could lay the foundation for spintronic devices controlled by heat sources,” says Dr. An.
    Story Source:
    Materials provided by Japan Advanced Institute of Science and Technology. Note: Content may be edited for style and length.

  •

    Bristol team chase down advantage in quantum race

    Quantum researchers at the University of Bristol have dramatically reduced the time to simulate an optical quantum computer, with a speedup of around one billion over previous approaches.
    Quantum computers promise exponential speedups for certain problems, with potential applications in areas from drug discovery to new materials for batteries. But quantum computing is still in its early stages, so these are long-term goals. Nevertheless, there are exciting intermediate milestones on the journey to building a useful device. One currently receiving a lot of attention is “quantum advantage,” where a quantum computer performs a task beyond the capabilities of even the world’s most powerful supercomputers.
    Experimental work from the University of Science and Technology of China (USTC) was the first to claim quantum advantage using photons (particles of light) in a protocol called “Gaussian Boson Sampling” (GBS). Their paper claimed that the experiment, performed in 200 seconds, would take 600 million years to simulate on the world’s largest supercomputer.
    Taking up the challenge, a team at the University of Bristol’s Quantum Engineering Technology Labs (QET Labs), in collaboration with researchers at Imperial College London and Hewlett Packard Enterprise, has reduced this simulation time to just a few months, a speedup factor of around one billion.
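    The headline figures are easy to sanity-check. A minimal back-of-envelope sketch (the 200-second and 600-million-year figures come from the article; “a few months” is assumed here to mean roughly six months, a value the article does not state precisely):

```python
# Sanity-check of the claimed speedup factors.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

experiment_s = 200.0   # USTC's GBS experiment runtime (from the article)
old_sim_years = 600e6  # claimed classical simulation time (from the article)
new_sim_years = 0.5    # Bristol's improved estimate; ~6 months is an assumption

# How much faster the experiment was than the original simulation estimate.
quantum_advantage_factor = old_sim_years * SECONDS_PER_YEAR / experiment_s

# How much the Bristol team sped up the classical simulation.
classical_speedup = old_sim_years / new_sim_years

print(f"original quantum advantage factor: {quantum_advantage_factor:.1e}")
print(f"classical simulation speedup: {classical_speedup:.1e}")  # ~1.2e9
```

    Taking “a few months” as anywhere between three and nine months keeps the speedup within a factor of two of one billion, consistent with the claim.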
    Their paper “The boundary for quantum advantage in Gaussian boson sampling,” published today in the journal Science Advances, comes at a time when other experimental approaches claiming quantum advantage, such as from the quantum computing team at Google, are also leading to improved classical algorithms for simulating these experiments.
    Joint first author Jake Bulmer, PhD student in QET Labs, said: “There is an exciting race going on where, on one side, researchers are trying to build increasingly complex quantum computing systems which they claim cannot be simulated by conventional computers. At the same time, researchers like us are improving simulation methods so we can simulate these supposedly impossible-to-simulate machines!”
    “As researchers develop larger scale experiments, they will look to make claims of quantum advantage relative to classical simulations. Our results will provide an essential point of comparison by which to establish the computational power of future GBS experiments,” said joint first author, Bryn Bell, Marie Curie Research Fellow at Imperial College London, now Senior Quantum Engineer at Oxford Quantum Circuits.
    The team’s methods do not exploit any errors in the experiment and so one next step for the research is to combine their new methods with techniques that exploit the imperfections of the real-world experiment. This would further speed up simulation time and build a greater understanding of which areas require improvements.
    “These quantum advantage experiments represent a tremendous achievement of physics and engineering. As a researcher, it is exciting to contribute to the understanding of where the computational complexity of these experiments arises. We were surprised by the magnitude of the improvements we achieved — it is not often that you can claim to find a one-billion-fold improvement!” said Jake Bulmer.
    Anthony Laing, co-Director of QET Labs and an author on the work, said: “As we develop more sophisticated quantum computing technologies, this type of work is vital. It helps us understand the bar we must get over before we can begin to solve problems in clean energy and healthcare that affect us all. The work is a great example of teamwork and collaboration among researchers in the UK Quantum Computing and Simulation Hub and Hewlett Packard Enterprise.”
    Story Source:
    Materials provided by University of Bristol. Note: Content may be edited for style and length.

  •

    Robot performs first laparoscopic surgery without human help

    A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human — a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.
    “Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and produced significantly better results than humans performing the same procedure,” said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering.
    The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.
    Working with collaborators at the Children’s National Hospital in Washington, D.C. and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig’s intestines accurately, but required a large incision to access the intestine and more guidance from humans.
    The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the art imaging systems that provide more accurate visualizations of the surgical field.
    Soft-tissue surgery is especially hard for robots because of its unpredictability, requiring them to adapt quickly to unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.
    “What makes the STAR special is that it is the first robotic system to plan, adapt, and execute a surgical plan in soft tissue with minimal human intervention,” Krieger said.
    A structured-light-based three-dimensional endoscope and a machine-learning-based tracking algorithm developed by Kang and his students guide the STAR. “We believe an advanced three-dimensional machine vision system is essential in making intelligent surgical robots smarter and safer,” Kang said.
    As the medical field moves towards more laparoscopic approaches for surgeries, it will be important to have an automated robotic system designed for such procedures to assist, Krieger said.
    “Robotic anastomosis is one way to ensure that surgical tasks that require high precision and repeatability can be performed with more accuracy and precision in every patient independent of surgeon skill,” Krieger said. “We hypothesize that this will result in a democratized surgical approach to patient care with more predictable and consistent patient outcomes.”
    The team from Johns Hopkins also included Hamed Saeidi, Justin D. Opfermann, Michael Kam, Shuwen Wei, and Simon Leonard. Michael H. Hsieh, director of Transitional Urology at Children’s National Hospital, also contributed to the research.
    The work was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award numbers 1R01EB020610 and R21EB024707.
    Story Source:
    Materials provided by Johns Hopkins University. Original written by Catherine Graham. Note: Content may be edited for style and length.

  •

    Technique improves AI ability to understand 3D space using 2D images

    Researchers have developed a new technique, called MonoCon, that improves the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects, and how those objects relate to each other in space, using two-dimensional (2D) images. For example, the work would help the AI used in autonomous vehicles navigate in relation to other vehicles using the 2D images it receives from an onboard camera.
    “We live in a 3D world, but when you take a picture, it records that world in a 2D image,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University.
    “AI programs receive visual input from cameras. So if we want AI to interact with the world, we need to ensure that it is able to interpret what 2D images can tell it about 3D space. In this research, we are focused on one part of that challenge: how we can get AI to accurately recognize 3D objects — such as people or cars — in 2D images, and place those objects in space.”
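    The core difficulty Wu describes, placing objects in 3D space from a single 2D image, arises because perspective projection discards depth. A toy pinhole-camera sketch of that ambiguity (illustrative only; the focal length and image-center values are made-up parameters, and MonoCon itself is a learned detector, not this formula):

```python
# Toy pinhole-camera projection: a small, near point and a large, far point
# can land on the same pixel, so depth cannot be read off a single 2D image.
def project(point_3d, focal=1000.0, cx=640.0, cy=360.0):
    """Project a 3D point (X, Y, Z) in camera coordinates to pixel (u, v)."""
    X, Y, Z = point_3d
    return (focal * X / Z + cx, focal * Y / Z + cy)

near = project((2.0, 1.0, 10.0))  # object 10 m away
far = project((4.0, 2.0, 20.0))   # object twice as far and twice as large
print(near, far)  # both (840.0, 460.0): identical pixels, different depths
```

    Monocular 3D detectors such as MonoCon must learn to resolve this ambiguity from visual cues like apparent size and scene context rather than from geometry alone.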
    While the work may be important for autonomous vehicles, it also has applications for manufacturing and robotics.
    In the context of autonomous vehicles, most existing systems rely on lidar, which uses lasers to measure distance, to navigate 3D space. However, lidar technology is expensive, and that cost discourages redundancy: it would be too expensive, for example, to put dozens of lidar sensors on a mass-produced driverless car.
    “But if an autonomous vehicle could use visual inputs to navigate through space, you could build in redundancy,” Wu says. “Because cameras are significantly less expensive than lidar, it would be economically feasible to include additional cameras — building redundancy into the system and making it both safer and more robust.”

  •

    Quantum computing: Vibrating atoms make robust qubits, physicists find

    MIT physicists have discovered a new quantum bit, or “qubit,” in the form of vibrating pairs of atoms known as fermions. They found that when pairs of fermions are chilled and trapped in an optical lattice, the particles can exist simultaneously in two states — a weird quantum phenomenon known as superposition. In this case, the atoms held a superposition of two vibrational states, in which the two atoms of each pair wobbled against each other while also swinging in sync.
    The team was able to maintain this state of superposition among hundreds of vibrating pairs of fermions. In so doing, they achieved a new “quantum register,” or system of qubits, that appears to be robust over relatively long periods of time. The discovery, published today in the journal Nature, demonstrates that such wobbly qubits could be a promising foundation for future quantum computers.
    A qubit represents the basic unit of quantum computing. Whereas a classical bit in today’s computers carries out a series of logical operations starting from one of two states, 0 or 1, a qubit can exist in a superposition of both states. While in this delicate in-between state, a qubit should be able to communicate simultaneously with many other qubits and process multiple streams of information at a time, quickly solving problems that would take classical computers years to process.
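    The superposition described above can be written down concretely. A minimal sketch in standard textbook notation (not the authors’ code): a qubit is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring each state.

```python
import math

# A single qubit as amplitudes (a, b) for states |0> and |1>,
# normalized so that |a|^2 + |b|^2 = 1.
def equal_superposition():
    """Return amplitudes for the state (|0> + |1>) / sqrt(2)."""
    a = 1 / math.sqrt(2)
    b = 1 / math.sqrt(2)
    return a, b

def measure_probs(a, b):
    """Born rule: probability of reading 0 or 1 when the qubit is measured."""
    return abs(a) ** 2, abs(b) ** 2

a, b = equal_superposition()
p0, p1 = measure_probs(a, b)
print(p0, p1)  # both ~0.5: the qubit is in both states until measured
```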
    There are many types of qubits, some of which are engineered and others that exist naturally. Most qubits are notoriously fickle, either unable to maintain their superposition or unwilling to communicate with other qubits.
    By comparison, the MIT team’s new qubit appears to be extremely robust, able to maintain a superposition between two vibrational states, even in the midst of environmental noise, for up to 10 seconds. The team believes the new vibrating qubits could be made to briefly interact, and potentially carry out tens of thousands of operations in the blink of an eye.
    “We estimate it should take only a millisecond for these qubits to interact, so we can hope for 10,000 operations during that coherence time, which could be competitive with other platforms,” says Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT. “So, there is concrete hope toward making these qubits compute.”
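    Zwierlein’s estimate follows directly from the two timescales quoted in the article:

```python
# ~10 s coherence time divided by a ~1 ms interaction time gives
# the "10,000 operations" figure Zwierlein cites.
coherence_s = 10.0   # how long the superposition survives
gate_time_s = 1e-3   # estimated time for two qubits to interact
operations = coherence_s / gate_time_s
print(round(operations))  # 10000
```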
    Zwierlein is a co-author on the paper, along with lead author Thomas Hartke, Botond Oreg, and Ningyuan Jia, all of whom are members of MIT’s Research Laboratory of Electronics.

  •

    Kirigami robotic grippers are delicate enough to lift egg yolks

    Engineering researchers from North Carolina State University have demonstrated a new type of flexible robotic gripper that can lift delicate egg yolks without breaking them, and that is precise enough to lift a human hair. The work has applications for both soft robotics and biomedical technologies.
    The work draws on the art of kirigami, which involves both cutting and folding two-dimensional (2D) sheets of material to form three-dimensional (3D) shapes. Specifically, the researchers have developed a new technique that involves using kirigami to convert 2D sheets into curved 3D structures by cutting parallel slits across much of the material. The final shape of the 3D structure is determined in large part by the outer boundary of the material. For example, a 2D material that has a circular boundary would form a spherical 3D shape.
    “We have defined and demonstrated a model that allows users to work backwards,” says Yaoye Hong, first author of a paper on the work and a Ph.D. student at NC State. “If users know what sort of curved, 3D structure they need, they can use our approach to determine the boundary shape and pattern of slits they need to use in the 2D material. And additional control of the final structure is made possible by controlling the direction in which the material is pushed or pulled.”
    “Our technique is quite a bit simpler than previous techniques for converting 2D materials into curved 3D structures, and it allows designers to create a wide variety of customized structures from 2D materials,” says Jie Yin, corresponding author of the paper and an associate professor of mechanical and aerospace engineering at NC State.
    The researchers demonstrated the utility of their technique by creating grippers capable of grabbing and lifting objects ranging from egg yolks to a human hair.
    “We’ve shown that our technique can be used to create tools capable of grasping and moving even extremely fragile objects,” Yin says.
    “Conventional grippers grasp an object firmly — they grab things by putting pressure on them,” Yin says. “That can pose problems when attempting to grip fragile objects, such as egg yolks. But our grippers essentially surround an object and then lift it — similar to the way we cup our hands around an object. This allows us to ‘grip’ and move even delicate objects, without sacrificing precision.”
    However, the researchers note that there are a host of other potential applications, such as using the technique to design biomedical technologies that conform to the shape of a joint — like the human knee.
    “Think of smart bandages or monitoring devices capable of bending and moving with your knee or elbow,” Yin says.
    “This is proof-of-concept work that shows our technique works,” Yin says. “We’re now in the process of integrating this technique into soft robotics technologies to address industrial challenges. We are also exploring how this technique could be used to create devices that could be used to apply warmth to the human knee, which would have therapeutic applications.
    “We’re open to working with industry partners to explore additional applications and to find ways to move this approach from the lab into practical use.”
    Video of the technology can be found at https://youtu.be/1oEXhKBoYc8.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  •

    A virtual reality 'Shopping Task' could help test for cognitive decline in adults

    New research from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) at King’s College London suggests that a virtual reality test in which participants “go to the shops” could offer a potentially promising way of effectively assessing functional cognition, the thinking and processing skills needed to accomplish complex everyday activities.
    The research, published in the Journal of Medical Internet Research, uses a novel virtual reality shopping task called “VStore” to measure cognition, which asks participants to take part in tests designed to mirror the real world. Researchers hope that it will be able to test for age-related cognitive decline in the future.
    The trial recruited 142 healthy individuals aged 20-79 years. Each participant was asked to “go to the shops,” first verbally recalling a list of 12 items, before being assessed for the amount of time it took to collect the items, as well as select the corresponding items on a virtual self-checkout machine, pay, and order coffee.
    Cognition tests, such as those used to measure the deficits present in several neuropsychiatric disorders including Alzheimer’s disease, schizophrenia, and depression, are traditionally time-consuming and onerous. VStore, the technology the researchers used in this study, is designed to overcome these limitations and provide a more accurate, engaging, and cost-effective way to assess a person’s cognitive health.
    The immersive environment (a virtual shop) mirrored the complexity of everyday life and meant that participants were better able to engage brain structures associated with spatial navigation, such as the hippocampus and entorhinal cortex, both of which can be affected in the early stages of Alzheimer’s disease.
    Researchers were able to establish that VStore effectively engages a range of key neuropsychological functions simultaneously, suggesting that functional tasks embedded in virtual reality may engage a greater range of cognitive domains than standard assessments.
    Prof Sukhi Shergill, the study’s lead author from King’s IoPPN and Kent and Medway Medical School (KMMS) said, “Virtual Reality appears to offer us significant advantages over more traditional pen-and-paper methods. The simple act of going to a shop to collect and pay for a list of items is something that we are all familiar with, but also actively engages multiple parts of the brain. Our study suggests that VStore may be suitable for evaluating functional cognition in the future. However, more work needs to be done before we can confirm this.”
    Lilla Porffy, the study’s first author from King’s IoPPN said, “These are promising findings adding to a growing body of evidence showing that virtual reality can be used to measure cognition and related everyday functioning effectively and accurately. The next steps will be to confirm these results and expand research into conditions characterised by cognitive complaints and functional difficulties such as psychosis and Alzheimer’s Disease.”
    This study was possible thanks to funding from the Medical Research Council and the National Institute for Health Research Maudsley Biomedical Research Centre. VStore was designed by Vitae VR.
    Story Source:
    Materials provided by King’s College London. Note: Content may be edited for style and length.