More stories

  • A mathematical secret of lizard camouflage

    The shape-shifting clouds formed by flocks of starlings, the organization of neural networks or the structure of an anthill: nature is full of complex systems whose behaviors can be modeled using mathematical tools. The same is true for the labyrinthine patterns formed by the green or black scales of the ocellated lizard. A multidisciplinary team from the University of Geneva (UNIGE) explains, thanks to a very simple mathematical equation, the complexity of the system that generates these patterns. This discovery contributes to a better understanding of the evolution of skin color patterns: the process allows for many different arrangements of green and black scales but always leads to an optimal pattern for the animal’s survival. These results are published in the journal Physical Review Letters.
    A complex system is composed of several elements (sometimes only two) whose local interactions lead to global properties that are difficult to predict. The behavior of a complex system is not simply the sum of its elements taken separately, because the interactions between them generate unexpected behavior of the whole. The groups of Michel Milinkovitch, Professor at the Department of Genetics and Evolution, and Stanislav Smirnov, Professor at the Section of Mathematics of the Faculty of Science of the UNIGE, have been interested in the complexity of the distribution of colored scales on the skin of ocellated lizards.
    Labyrinths of scales
    The individual scales of the ocellated lizard (Timon lepidus) change color (from green to black, and vice versa) over the course of the animal’s life, gradually forming a complex labyrinthine pattern as it reaches adulthood. The UNIGE researchers have previously shown that the labyrinths emerge on the skin surface because the network of scales constitutes a so-called ‘cellular automaton’. “This is a computing system invented in 1948 by the mathematician John von Neumann in which each element changes its state according to the states of the neighboring elements,” explains Stanislav Smirnov.
    In the case of the ocellated lizard, the scales change state — green or black — depending on the colors of their neighbors according to a precise mathematical rule. Milinkovitch had demonstrated that this cellular automaton mechanism emerges from the superposition of, on one hand, the geometry of the skin (thick within scales and much thinner between scales) and, on the other hand, the interactions among the pigmentary cells of the skin.
    The road to simplicity
    Szabolcs Zakany, a theoretical physicist in Michel Milinkovitch’s laboratory, teamed up with the two professors to determine whether this change in scale color could obey an even simpler mathematical law. The researchers thus turned to the Lenz-Ising model, developed in the 1920s to describe the behavior of magnetic particles that possess spontaneous magnetization. The particles can be in two different states (+1 or -1) and interact only with their nearest neighbors.
    “The elegance of the Lenz-Ising model is that it describes these dynamics using a single equation with only two parameters: the energy of the aligned or misaligned neighbors, and the energy of an external magnetic field that tends to push all particles toward the +1 or -1 state,” explains Szabolcs Zakany.
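    In its textbook form (the notation here is ours; the study’s exact parameterization may differ), that single equation assigns an energy to every configuration of states s_i = ±1 on the lattice:

    ```latex
    E(\{s_i\}) \;=\; -\,J \sum_{\langle i,j \rangle} s_i s_j \;-\; H \sum_i s_i
    ```

    Here J is the energy associated with aligned or misaligned neighboring states and H plays the role of the external field that pushes every site toward +1 or -1, the two parameters quoted above; for the lizard, +1 and -1 stand for green and black scales.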
    Maximum disorder for better survival
    The three UNIGE scientists determined that this model can accurately describe the phenomenon of scale color change in the ocellated lizard. More precisely, they adapted the Lenz-Ising model, usually defined on a square lattice, to the hexagonal lattice of skin scales. At a given average energy, the Lenz-Ising model favors equally all of the configurations of magnetic particles that correspond to that energy. In the case of the ocellated lizard, the color-change process favors all distributions of green and black scales that result in a labyrinthine pattern (and not in lines, spots, circles, or single-colored areas).
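    As an illustration only (the lattice size, coupling, field, and temperature below are toy values, not parameters fitted to lizard skin), the following sketch samples configurations of a hexagonal-lattice model of this kind with a standard Metropolis update: each scale is flipped according to the energy change computed from its six neighbors and the field term in the equation above, which also makes the ‘cellular automaton’ picture concrete.

    ```python
    import math
    import random

    # Toy Metropolis sampler for a Lenz-Ising model on a hexagonal lattice.
    # Illustrative only: lattice size, J, H_FIELD and T are arbitrary choices,
    # not values fitted to ocellated lizard skin.
    W, H_ROWS = 60, 60              # offset-row hexagonal grid
    J, H_FIELD, T = 1.0, 0.0, 2.0   # coupling, external field, temperature

    def neighbors(x, y):
        """Six hexagonal neighbors in an offset-row layout, with wrap-around."""
        s = 0 if y % 2 == 0 else 1
        deltas = [(-1, 0), (1, 0), (s - 1, -1), (s, -1), (s - 1, 1), (s, 1)]
        return [((x + dx) % W, (y + dy) % H_ROWS) for dx, dy in deltas]

    # Random initial configuration: +1 = green scale, -1 = black scale.
    spin = {(x, y): random.choice([+1, -1])
            for x in range(W) for y in range(H_ROWS)}

    def sweep():
        """One Metropolis sweep: propose flipping W * H_ROWS randomly chosen scales."""
        for _ in range(W * H_ROWS):
            x, y = random.randrange(W), random.randrange(H_ROWS)
            s = spin[(x, y)]
            # Energy change on flipping this scale, from E = -J*sum(s_i*s_j) - H*sum(s_i)
            dE = 2 * s * (J * sum(spin[n] for n in neighbors(x, y)) + H_FIELD)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                spin[(x, y)] = -s

    for _ in range(200):
        sweep()
    ```

    Different choices of coupling, field, and temperature yield anything from uniform patches to noisy mixtures; the parameter values that reproduce the lizard’s labyrinths are those reported in the study itself.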
    “These labyrinthine patterns, which provide ocellated lizards with optimal camouflage, have been selected in the course of evolution. They are generated by a complex system that can nevertheless be reduced to a single equation, in which what matters is not the precise location of the green and black scales, but the general appearance of the final pattern,” enthuses Michel Milinkovitch. Each animal will have a different precise arrangement of its green and black scales, but all of these alternative patterns have a similar appearance (i.e., a very similar ‘energy’ in the Lenz-Ising model), giving these different animals equivalent chances of survival.
    Story Source:
    Materials provided by Université de Genève. Note: Content may be edited for style and length.

  • Tiny materials lead to a big advance in quantum computing

    Like the transistors in a classical computer, superconducting qubits are the building blocks of a quantum computer. But while engineers have been able to shrink transistors to nanometer scales, superconducting qubits are still measured in millimeters. This is one reason a practical quantum computing device couldn’t be miniaturized to the size of a smartphone, for instance.
    MIT researchers have now used ultrathin materials to build superconducting qubits that are at least one-hundredth the size of conventional designs and suffer from less interference between neighboring qubits. This advance could improve the performance of quantum computers and enable the development of smaller quantum devices.
    The researchers have demonstrated that hexagonal boron nitride, a material consisting of only a few monolayers of atoms, can be stacked to form the insulator in the capacitors on a superconducting qubit. This defect-free material enables capacitors that are much smaller than those typically used in a qubit, which shrinks its footprint without significantly sacrificing performance.
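    To see why a thin, defect-free dielectric shrinks the footprint, a back-of-the-envelope parallel-plate estimate helps; every number below is an illustrative assumption, not a figure from the MIT paper.

    ```python
    # Rough parallel-plate estimate of the electrode area needed to reach a
    # transmon-style shunt capacitance. All values are illustrative assumptions,
    # not parameters of the MIT device.
    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    eps_r_hbn = 3.5       # assumed relative permittivity of hBN
    thickness = 20e-9     # assumed dielectric thickness, m (tens of nanometers)
    target_C = 80e-15     # assumed shunt capacitance, F (tens of femtofarads)

    # C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
    area = target_C * thickness / (EPS0 * eps_r_hbn)
    side = area ** 0.5
    print(f"plate area ~ {area * 1e12:.0f} um^2 (~{side * 1e6:.1f} um on a side)")
    ```

    With these assumed numbers the plates come out only a few micrometers across, whereas planar capacitors reaching comparable capacitance typically span hundreds of micrometers, which is the kind of footprint reduction described above.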
    In addition, the researchers show that the structure of these smaller capacitors should greatly reduce cross-talk, which occurs when one qubit unintentionally affects surrounding qubits.
    “Right now, we can have maybe 50 or 100 qubits in a device, but for practical use in the future, we will need thousands or millions of qubits in a device. So, it will be very important to miniaturize the size of each individual qubit and at the same time avoid the unwanted cross-talk between these hundreds of thousands of qubits. This is one of the very few materials we found that can be used in this kind of construction,” says co-lead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.
    Wang’s co-lead author is Megan Yamoah ’20, a former student in the Engineering Quantum Systems group who is currently studying at Oxford University on a Rhodes Scholarship. Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, is a corresponding author, and the senior author is William D. Oliver, a professor of electrical engineering and computer science and of physics, an MIT Lincoln Laboratory Fellow, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics. The research is published today in Nature Materials.
    Qubit quandaries

  • Where did that sound come from?

    The human brain is finely tuned not only to recognize particular sounds, but also to determine which direction they came from. By comparing differences in sounds that reach the right and left ear, the brain can estimate the location of a barking dog, wailing fire engine, or approaching car.
    MIT neuroscientists have now developed a computer model that can also perform that complex task. The model, which consists of several convolutional neural networks, not only performs the task as well as humans do, but also struggles in the same ways that humans do.
    “We now have a model that can actually localize sounds in the real world,” says Josh McDermott, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “And when we treated the model like a human experimental participant and simulated this large set of experiments that people had tested humans on in the past, what we found over and over again is that the model recapitulates the results that you see in humans.”
    Findings from the new study also suggest that humans’ ability to perceive location is adapted to the specific challenges of our environment, says McDermott, who is also a member of MIT’s Center for Brains, Minds, and Machines.
    McDermott is the senior author of the paper, which appears today in Nature Human Behaviour. The paper’s lead author is MIT graduate student Andrew Francl.
    Modeling localization
    When we hear a sound such as a train whistle, the sound waves reach our right and left ears at slightly different times and intensities, depending on what direction the sound is coming from. Parts of the midbrain are specialized to compare these slight differences to help estimate what direction the sound came from, a task also known as localization.
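    For a sense of the cue sizes involved, the classic spherical-head (Woodworth) approximation relates the interaural time difference to the sound’s azimuth; this is a textbook approximation with assumed values, not the MIT model itself.

    ```python
    import math

    # Woodworth spherical-head approximation of the interaural time difference (ITD):
    # the extra path to the far ear is roughly a * (theta + sin(theta)).
    # Textbook approximation with assumed constants, not the model in the paper.
    HEAD_RADIUS = 0.0875     # assumed head radius, m
    SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

    def itd_seconds(azimuth_deg):
        """ITD for a distant source at the given azimuth (0 = straight ahead, 90 = to one side)."""
        theta = math.radians(azimuth_deg)
        return HEAD_RADIUS * (theta + math.sin(theta)) / SPEED_OF_SOUND

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:2d} deg -> ITD ~ {itd_seconds(az) * 1e6:.0f} microseconds")
    ```

    Interaural level differences complement these microsecond-scale timing cues, particularly at high frequencies where the head shadows the far ear.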

  • Eco-friendly micro-supercapacitors using fallen leaves?

    A KAIST research team has developed a graphene-inorganic-hybrid micro-supercapacitor made from fallen leaves using femtosecond direct laser writing lithography. The advancement of wearable electronic devices goes hand in hand with innovations in flexible energy storage devices. Of the various energy storage devices, micro-supercapacitors have drawn a great deal of interest for their high electrical power density, long lifetimes, and short charging times.
    However, waste battery generation has been rising with the growing consumption of electronic equipment and with the short replacement cycles that follow each advance in mobile devices. The safety and environmental issues involved in collecting, recycling, and processing such waste batteries pose a number of challenges.
    Forests cover about 30 percent of the Earth’s land area, producing a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is both biodegradable and reusable, which makes it an attractive, eco-friendly material. However, if the leaves are left neglected instead of being used efficiently, they can contribute to fires or water pollution.
    To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a one-step process that creates porous 3D graphene micro-electrodes with high electrical conductivity by irradiating femtosecond laser pulses onto the surface of the leaves, with no additional materials or post-treatment and under ambient atmospheric conditions. Taking this strategy further, the team also suggested a method for producing flexible micro-supercapacitors.
    They showed that this technique can quickly and easily produce porous graphene-inorganic-hybrid electrodes at low cost, and validated their performance by using the graphene micro-supercapacitors to power an LED and an electronic watch that could function as a thermometer, hygrometer, and timer. These results open up the possibility of mass-producing flexible and green graphene-based electronic devices.
    Professor Young-Jin Kim said, “Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle.”
    This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture, Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  • Diamond quantum sensor detects 'magnetic flow' excited by heat

    In recent times, sustainable development has been the overarching guiding principle of research concerning environmental issues, energy crises, and information and communication technology. In this regard, spintronic devices have emerged as promising candidates for surpassing conventional technology, which has run into the problem of excess waste heat generation in miniaturized devices. The electron “spin,” which is responsible for the electric and magnetic properties of a material, is being used to develop next-generation, energy-efficient, miniature spintronic devices. At the heart of this new technology are “magnons,” quanta of spin excitation waves, and their detection is key to further progress in this field. Recently, within the field of spintronics, devices based on the interaction between spin and heat flow have emerged as potential candidates for new thermoelectric devices (devices that convert heat into electricity).
    Meanwhile, the nitrogen-vacancy (N-V) center in diamond, a point defect consisting of a nitrogen atom paired with an adjacent lattice vacancy, has emerged as a key element of high-resolution quantum sensors. Interestingly, it has recently been demonstrated that N-V centers can detect coherent magnons. However, detecting thermally excited magnons with N-V centers is difficult, since thermal magnons have much higher energies than the spin states of N-V centers, which limits their interaction.
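    For orientation, the spin-state energies involved sit in the gigahertz range: in the standard picture behind N-V magnetometry (the study’s specific implementation may differ), the ground-state spin resonances shift linearly with the magnetic field component along the N-V axis.

    ```python
    # Standard N-V ground-state spin resonances used in diamond magnetometry:
    # f(+/-) = D +/- gamma_e * B_parallel, with zero-field splitting D ~ 2.87 GHz
    # and electron gyromagnetic ratio gamma_e ~ 28 GHz/T.
    D_GHZ = 2.87            # zero-field splitting, GHz
    GAMMA_GHZ_PER_T = 28.0  # gyromagnetic ratio, GHz/T

    def nv_resonances_ghz(b_parallel_mT):
        """Resonance frequencies (GHz) for a field component b_parallel (mT) along the N-V axis."""
        shift = GAMMA_GHZ_PER_T * b_parallel_mT * 1e-3
        return D_GHZ - shift, D_GHZ + shift

    for b in (0.0, 1.0, 5.0):
        lo, hi = nv_resonances_ghz(b)
        print(f"B = {b:.1f} mT -> resonances at {lo:.3f} GHz and {hi:.3f} GHz")
    ```

    Stray fields from spin waves shift or broaden these resonances, which is the general route by which a diamond sensor can read out magnon dynamics; thermal magnons at far higher frequencies cannot drive these transitions directly, hence the indirect scheme described below.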
    Now in a collaborative study published in Physical Review Applied, Associate Professor Toshu An from Japan Advanced Institute of Science and Technology (JAIST) and Dwi Prananto, a PhD graduate from JAIST, along with researchers from Kyoto University, Japan, and the National Institute for Materials Science, Japan, have successfully detected these energetic magnons in yttrium iron garnet (YIG), a magnetic insulator, by using a quantum sensor based on diamond with N-V centers.
    To achieve this feat, the team used the interaction between coherent, low-energy magnons and N-V centers as an indirect way to detect the thermally excited magnons. As it turns out, the current produced by thermal magnons exerts a torque on the low-energy magnons and modifies them, a change that can be picked up by the N-V centers. The method therefore provides a way to detect thermal magnons by observing the changes in the coherent magnons.
    To demonstrate this, the researchers prepared a YIG sample with two gold antennas placed at the ends of the sample’s surface and placed a small diamond sensor at the center of the sample, close to the surface. They then excited low-energy spin waves, corresponding to the coherent magnons, in the sample using microwaves and generated thermal magnons by producing a temperature gradient across the sample. Sure enough, the diamond sensor picked up the changes to the coherent magnons caused by the induced thermal magnon current.
    The ability to detect thermal magnons with N-V centers is particularly advantageous, as Dr. An explains: “Our study provides a detection tool for thermal magnon currents that can be placed locally and over a broad range of distances from spin waves. This is not possible with conventional techniques, which require a relatively large electrode and specific configurations with proximal distance to the spin waves.”
    These findings could not only open up new possibilities in quantum sensing but also pave the way for its integration with spin caloritronics. “Our work could lay the foundation for spintronic devices controlled by heat sources,” says Dr. An.
    Story Source:
    Materials provided by Japan Advanced Institute of Science and Technology. Note: Content may be edited for style and length.

  • Bristol team chase down advantage in quantum race

    Quantum researchers at the University of Bristol have dramatically reduced the time to simulate an optical quantum computer, with a speedup of around one billion over previous approaches.
    Quantum computers promise exponential speedups for certain problems, with potential applications in areas from drug discovery to new materials for batteries. But quantum computing is still in its early stages, so these are long-term goals. Nevertheless, there are exciting intermediate milestones on the journey to building a useful device. One currently receiving a lot of attention is “quantum advantage,” where a quantum computer performs a task beyond the capabilities of even the world’s most powerful supercomputers.
    Experimental work from the University of Science and Technology of China (USTC) was the first to claim quantum advantage using photons (particles of light), in a protocol called “Gaussian Boson Sampling” (GBS). Their paper claimed that the experiment, performed in 200 seconds, would take 600 million years to simulate on the world’s largest supercomputer.
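    Some context on why this is so hard to simulate (standard GBS theory, not the Bristol team’s algorithm): in the textbook, photon-number-resolved version of GBS, each output probability is proportional to the squared hafnian of a submatrix built from the Gaussian state (related matrix functions govern other detector types), and the hafnian is a sum over all perfect matchings of the photon indices. A brute-force sketch makes the combinatorial blow-up explicit.

    ```python
    def hafnian(A):
        """Brute-force hafnian of a symmetric 2n x 2n matrix A (nested lists):
        a sum over all perfect matchings of the indices, of which there are
        (2n - 1)!! -- the combinatorial growth behind the cost of exact
        Gaussian Boson Sampling simulation."""
        def rec(rest):
            if not rest:
                return 1.0
            i, rest = rest[0], rest[1:]
            # Pair index i with every remaining index j, then match the rest.
            return sum(A[i][rest[k]] * rec(rest[:k] + rest[k + 1:])
                       for k in range(len(rest)))
        return rec(list(range(len(A))))

    # For a 4x4 symmetric matrix, haf(A) = a01*a23 + a02*a13 + a03*a12.
    A = [[0, 1, 2, 3],
         [1, 0, 4, 5],
         [2, 4, 0, 6],
         [3, 5, 6, 0]]
    print(hafnian(A))   # 1*6 + 2*5 + 3*4 = 28
    ```

    The Bristol team’s billion-fold speedup comes from far more sophisticated methods than this brute force; the sketch only shows where the exponential cost originates.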
    Taking up the challenge, a team at the University of Bristol’s Quantum Engineering Technology Labs (QET Labs), in collaboration with researchers at Imperial College London and Hewlett Packard Enterprise, have reduced this simulation time to just a few months, a speedup factor of around one billion.
    Their paper “The boundary for quantum advantage in Gaussian boson sampling,” published today in the journal Science Advances, comes at a time when other experimental approaches claiming quantum advantage, such as from the quantum computing team at Google, are also leading to improved classical algorithms for simulating these experiments.
    Joint first author Jake Bulmer, PhD student in QET Labs, said: “There is an exciting race going on where, on one side, researchers are trying to build increasingly complex quantum computing systems which they claim cannot be simulated by conventional computers. At the same time, researchers like us are improving simulation methods so we can simulate these supposedly impossible-to-simulate machines!”
    “As researchers develop larger scale experiments, they will look to make claims of quantum advantage relative to classical simulations. Our results will provide an essential point of comparison by which to establish the computational power of future GBS experiments,” said joint first author, Bryn Bell, Marie Curie Research Fellow at Imperial College London, now Senior Quantum Engineer at Oxford Quantum Circuits.
    The team’s methods do not exploit any errors in the experiment, so one next step for the research is to combine their new methods with techniques that exploit the imperfections of the real-world experiment. This would further reduce the simulation time and build a greater understanding of which areas require improvement.
    “These quantum advantage experiments represent a tremendous achievement of physics and engineering. As a researcher, it is exciting to contribute to the understanding of where the computational complexity of these experiments arises. We were surprised by the magnitude of the improvements we achieved — it is not often that you can claim to find a one-billion-fold improvement!” said Jake Bulmer.
    Anthony Laing, co-Director of QET Labs and an author on the work, said: “As we develop more sophisticated quantum computing technologies, this type of work is vital. It helps us understand the bar we must get over before we can begin to solve problems in clean energy and healthcare that affect us all. The work is a great example of teamwork and collaboration among researchers in the UK Quantum Computing and Simulation Hub and Hewlett Packard Enterprise.”
    Story Source:
    Materials provided by University of Bristol. Note: Content may be edited for style and length.

  • Robot performs first laparoscopic surgery without human help

    A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human — a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.
    “Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure,” said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering.
    The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.
    Working with collaborators at the Children’s National Hospital in Washington, D.C., and Jin Kang, a Johns Hopkins professor of electrical and computer engineering, Krieger helped create the robot, a vision-guided system designed specifically to suture soft tissue. Their current iteration advances a 2016 model that repaired a pig’s intestines accurately but required a large incision to access the intestine and more guidance from humans.
    The team equipped the STAR with new features for enhanced autonomy and improved surgical precision, including specialized suturing tools and state-of-the-art imaging systems that provide more accurate visualizations of the surgical field.
    Soft-tissue surgery is especially hard for robots because of its unpredictability, requiring them to adapt quickly to unexpected obstacles, Krieger said. The STAR has a novel control system that can adjust the surgical plan in real time, just as a human surgeon would.
    “What makes the STAR special is that it is the first robotic system to plan, adapt, and execute a surgical plan in soft tissue with minimal human intervention,” Krieger said.
    A structured-light-based three-dimensional endoscope and a machine learning-based tracking algorithm developed by Kang and his students guide the STAR. “We believe an advanced three-dimensional machine vision system is essential in making intelligent surgical robots smarter and safer,” Kang said.
    As the medical field moves towards more laparoscopic approaches for surgeries, it will be important to have an automated robotic system designed for such procedures to assist, Krieger said.
    “Robotic anastomosis is one way to ensure that surgical tasks that require high precision and repeatability can be performed with more accuracy and precision in every patient independent of surgeon skill,” Krieger said. “We hypothesize that this will result in a democratized surgical approach to patient care with more predictable and consistent patient outcomes.”
    The team from Johns Hopkins also included Hamed Saeidi, Justin D. Opfermann, Michael Kam, Shuwen Wei, and Simon Leonard. Michael H. Hsieh, director of Transitional Urology at Children’s National Hospital, also contributed to the research.
    The work was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award numbers 1R01EB020610 and R21EB024707.
    Story Source:
    Materials provided by Johns Hopkins University. Original written by Catherine Graham. Note: Content may be edited for style and length.

  • Technique improves AI ability to understand 3D space using 2D images

    Researchers have developed a new technique, called MonoCon, that improves the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects, and how those objects relate to each other in space, using two-dimensional (2D) images. For example, the work would help the AI used in autonomous vehicles navigate in relation to other vehicles using the 2D images it receives from an onboard camera.
    “We live in a 3D world, but when you take a picture, it records that world in a 2D image,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University.
    “AI programs receive visual input from cameras. So if we want AI to interact with the world, we need to ensure that it is able to interpret what 2D images can tell it about 3D space. In this research, we are focused on one part of that challenge: how we can get AI to accurately recognize 3D objects — such as people or cars — in 2D images, and place those objects in space.”
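    The geometric core of the problem is visible in the standard pinhole camera model: projecting a 3D point into an image discards its depth, so a 2D detection alone does not fix where an object sits in space, and that is the gap monocular methods such as MonoCon must fill with learned cues. The camera intrinsics below are made-up illustrative values, not parameters from the MonoCon work.

    ```python
    # Standard pinhole projection of a 3D point (camera coordinates, meters) to pixels.
    # The intrinsics fx, fy, cx, cy are made-up illustrative values.
    fx, fy = 720.0, 720.0   # focal lengths in pixels (assumed)
    cx, cy = 640.0, 360.0   # principal point in pixels (assumed)

    def project(X, Y, Z):
        """Project a point in front of the camera (Z > 0) onto the image plane."""
        return fx * X / Z + cx, fy * Y / Z + cy

    # Two points at very different depths can land on the same pixel:
    print(project(1.0, 0.5, 10.0))   # (712.0, 396.0)
    print(project(2.0, 1.0, 20.0))   # (712.0, 396.0) -- same pixel, twice as far away
    ```

    Recovering that missing depth, and with it the full 3D bounding box, is what a monocular 3D detector has to infer from appearance, context, and known object sizes.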
    While the work may be important for autonomous vehicles, it also has applications for manufacturing and robotics.
    In the context of autonomous vehicles, most existing systems rely on lidar — which uses lasers to measure distance — to navigate 3D space. However, lidar technology is expensive, and because of that, autonomous systems don’t include much redundancy. For example, it would be too expensive to put dozens of lidar sensors on a mass-produced driverless car.
    “But if an autonomous vehicle could use visual inputs to navigate through space, you could build in redundancy,” Wu says. “Because cameras are significantly less expensive than lidar, it would be economically feasible to include additional cameras — building redundancy into the system and making it both safer and more robust.”