More stories

  • After AIs mastered Go and Super Mario, scientists have taught them how to 'play' experiments

    Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) used the same approach to train an AI agent — an autonomous computational program that observes and acts — to conduct research experiments at superhuman levels. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.
    As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations — called beamlines — is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day — despite the facility’s 24/7 operations.
    “Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”
    “This is why we taught an AI agent to conduct scientific experiments as if they were video games. This allows a robot to run the experiment, while we — humans — are not there. It enables round-the-clock, fully remote, hands-off experimentation with roughly twice the efficiency that humans can achieve,” added Phillip Maffettone, research associate at NSLS-II and first author on the study.
    According to the researchers, they didn’t even have to give the AI agent the rules of the ‘game’ to run the experiment. Instead, the team used a method called “reinforcement learning” to train the agent to run a successful scientific experiment, and then tested it on simulated research data from the Pair Distribution Function beamline at NSLS-II.
    Beamline Experiments: A Boss Level Challenge
    Reinforcement learning is one strategy for training an AI agent to master a task. The agent perceives an environment — a world — and can influence it by performing actions. Depending on how the agent interacts with that world, it receives a reward or a penalty, reflecting whether a particular interaction was a good choice or a poor one. Crucially, the agent retains a memory of its interactions, so it can learn from that experience the next time it tries. In this way, the AI agent figures out how to master a task by collecting the most rewards.
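    The same reward-and-memory loop can be written down in a few lines of code. The toy Q-learning sketch below is purely illustrative (an invented one-dimensional world with made-up reward values, not the Brookhaven team's agent or beamline environment), but it shows the core mechanic: act, receive a reward, and update a stored value table so the next attempt is better informed.

```python
# A minimal, self-contained Q-learning sketch (illustration only; not the
# NSLS-II team's code). The "world" is a toy 1-D track of six positions and
# the agent is rewarded only for reaching the rightmost one. All constants
# are arbitrary choices for the example.
import random

N_STATES = 6              # positions 0..5; position 5 is the goal
ACTIONS = (-1, +1)        # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def best_action(state):
    """Greedy action with random tie-breaking."""
    top = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == top])

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, reward > 0

for episode in range(300):
    state = 0
    for _ in range(200):                        # cap the episode length
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else best_action(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the stored value toward the reward plus the
        # discounted best value achievable from the next state.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

# After training, the greedy action at positions 0..4 points toward the goal (+1).
print({s: best_action(s) for s in range(N_STATES - 1)})
```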

  • Soft robotic dragonfly signals environmental disruptions

    Engineers at Duke University have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.
    The soft robot is described online March 25 in the journal Advanced Intelligent Systems.
    Soft robots are a growing trend in the industry due to their versatility. Soft parts can handle delicate objects such as biological tissues that metal or ceramic components would damage. Soft bodies can help robots float or squeeze into tight spaces where rigid frames would get stuck.
    The expanding field was on the mind of Shyni Varghese, professor of biomedical engineering, mechanical engineering and materials science, and orthopaedic surgery at Duke, when inspiration struck.
    “I got an email from Shyni from the airport saying she had an idea for a soft robot that uses a self-healing hydrogel that her group has invented in the past to react and move autonomously,” said Vardhman Kumar, a PhD student in Varghese’s laboratory and first author of the paper. “But that was the extent of the email, and I didn’t hear from her again for days. So the idea sort of sat in limbo for a little while until I had enough free time to pursue it, and Shyni said to go for it.”
    In 2012, Varghese and her laboratory created a self-healing hydrogel that reacts to changes in pH in a matter of seconds. Whether it be a crack in the hydrogel or two adjoining pieces “painted” with it, a change in acidity causes the hydrogel to form new bonds, which are completely reversible when the pH returns to its original levels.

  • The very first structures in the Universe

    The very first moments of the Universe can be reconstructed mathematically even though they cannot be observed directly. Physicists from the Universities of Göttingen and Auckland (New Zealand) have greatly improved the ability of complex computer simulations to describe this early epoch. They discovered that a complex network of structures can form in the first trillionth of a second after the Big Bang. The behaviour of these objects mimics the distribution of galaxies in today’s Universe. In contrast to today, however, these primordial structures are microscopically small. Typical clumps have masses of only a few grams and fit into volumes much smaller than present-day elementary particles. The results of the study have been published in the journal Physical Review D.
    The researchers were able to observe the development of regions of higher density that are held together by their own gravity. “The physical space represented by our simulation would fit into a single proton a million times over,” says Professor Jens Niemeyer, head of the Astrophysical Cosmology Group at the University of Göttingen. “It is probably the largest simulation of the smallest area of the Universe that has been carried out so far.” These simulations make it possible to calculate more precise predictions for the properties of these vestiges from the very beginnings of the Universe.
    Although the computer-simulated structures would be very short-lived and eventually “vaporise” into standard elementary particles, traces of this extreme early phase may be detectable in future experiments. “The formation of such structures, as well as their movements and interactions, must have generated a background noise of gravitational waves,” says Benedikt Eggemeier, a PhD student in Niemeyer’s group and first author of the study. “With the help of our simulations, we can calculate the strength of this gravitational wave signal, which might be measurable in the future.”
    It is also conceivable that tiny black holes could form if these structures undergo runaway collapse. If this happens, they could have observable consequences today, or form part of the mysterious dark matter in the Universe. “On the other hand,” says Professor Richard Easther of the University of Auckland, “if the simulations predict black holes form, and we don’t see them, then we will have found a new way to test models of the infant Universe.”
    Story Source:
    Materials provided by University of Göttingen.

  • How tiny machines become capable of learning

    Microswimmers are artificial, self-propelled, microscopic particles capable of directional motion in a solution. The Molecular Nanophotonics Group at Leipzig University has developed special particles that are smaller than one-thirtieth of the diameter of a hair. They can change their direction of motion by heating tiny gold particles on their surface and converting this energy into motion. “However, these miniaturised machines cannot take in and learn information like their living counterparts. To achieve this, we control the microswimmers externally so that they learn to navigate in a virtual environment through what is known as reinforcement learning,” said Frank Cichos, who heads the group.
    With the help of virtual rewards, the microswimmers find their way through the liquid while repeatedly being thrown off their path, mainly by Brownian motion. “Our results show that the best swimmer is not the one that is fastest, but rather that there is an optimal speed,” said Viktor Holubec, who worked on the project as a fellow of the Alexander von Humboldt Foundation and has now returned to the university in Prague.
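    The finding that faster is not always better can be illustrated with a crude toy model: a particle that steers toward a target but only updates its heading in discrete steps while Brownian noise jostles it. The sketch below is an invented illustration with made-up parameters, not the Leipzig group's simulation or their reinforcement-learning scheme; it merely shows why an intermediate speed can reach a small target sooner than a very fast one, which keeps overshooting.

```python
# Toy model (illustration only, with invented parameters; not the published
# simulation): an active particle steers toward a target at the origin, but its
# heading is fixed over each time step and Brownian noise perturbs its position.
# Very slow particles crawl; very fast ones overshoot the small target.
import numpy as np

rng = np.random.default_rng(0)

def mean_arrival_steps(speed, n_runs=100, dt=1.0, noise=0.5,
                       capture_radius=1.0, start=(10.0, 0.0), max_steps=2000):
    """Average number of steps needed to come within capture_radius of the origin."""
    results = []
    for _ in range(n_runs):
        pos = np.array(start, dtype=float)
        for n in range(max_steps):
            if np.hypot(*pos) < capture_radius:
                break                                  # target reached
            heading = -pos / np.hypot(*pos)            # steer straight at the target
            kick = noise * np.sqrt(dt) * rng.normal(size=2)   # Brownian jostling
            pos += speed * dt * heading + kick
        results.append(n)
    return float(np.mean(results))

for v in (0.25, 1.0, 3.0, 6.0, 25.0):
    print(f"speed {v:5.2f} -> mean steps to target {mean_arrival_steps(v):7.1f}")
# The mean arrival time is not monotonic in speed: the fastest setting performs
# far worse than intermediate ones because it repeatedly overshoots the target.
```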
    According to the scientists, linking artificial intelligence with active systems like these microswimmers is a first small step towards new intelligent microscopic materials that can autonomously perform tasks while also adapting to their new environment. At the same time, they hope that the combination of artificial microswimmers and machine learning methods will provide new insights into the emergence of collective behaviour in biological systems. “Our goal is to develop artificial, smart building blocks that can perceive their environmental influences and actively react to them,” said the physicist. Once this method is fully developed and has been applied to other material systems, including biological ones, it could be used, for example, in the development of smart drugs or microscopic robot swarms.
    Story Source:
    Materials provided by Universität Leipzig. Original written by Susann Husters.

  • Optical fiber could boost power of superconducting quantum computers

    The secret to building superconducting quantum computers with massive processing power may be an ordinary telecommunications technology — optical fiber.
    Physicists at the National Institute of Standards and Technology (NIST) have measured and controlled a superconducting quantum bit (qubit) using light-conducting fiber instead of metal electrical wires, paving the way to packing a million qubits into a quantum computer rather than just a few thousand. The demonstration is described in the March 25 issue of Nature.
    Superconducting circuits are a leading technology for making quantum computers because they are reliable and easily mass produced. But these circuits must operate at cryogenic temperatures, and schemes for wiring them to room-temperature electronics are complex and prone to overheating the qubits. A universal quantum computer, capable of solving any type of problem, is expected to need about 1 million qubits. Conventional cryostats — supercold dilution refrigerators — with metal wiring can only support thousands at the most.
    Optical fiber, the backbone of telecommunications networks, has a glass or plastic core that can carry a high volume of light signals without conducting heat. But superconducting quantum computers use microwave pulses to store and process information. So the light needs to be converted precisely to microwaves.
    To solve this problem, NIST researchers combined the fiber with a few other standard components that convert, convey and measure light at the level of single particles, or photons, which could then be easily converted into microwaves. The system worked as well as metal wiring and maintained the qubit’s fragile quantum states.
    “I think this advance will have high impact because it combines two totally different technologies, photonics and superconducting qubits, to solve a very important problem,” NIST physicist John Teufel said. “Optical fiber can also carry far more data in a much smaller volume than conventional cable.”
    Normally, researchers generate microwave pulses at room temperature and then deliver them through coaxial metal cables to cryogenically maintained superconducting qubits. The new NIST setup used an optical fiber instead of metal to guide light signals to cryogenic photodetectors that converted signals back to microwaves and delivered them to the qubit. For experimental comparison purposes, microwaves could be routed to the qubit through either the photonic link or a regular coaxial line.

  • Semiconductor qubits scale in two dimensions

    The heart of any computer, its central processing unit, is built using semiconductor technology, which is capable of putting billions of transistors onto a single chip. Now, researchers from the group of Menno Veldhorst at QuTech, a collaboration between TU Delft and TNO, have shown that this technology can be used to build a two-dimensional array of qubits to function as a quantum processor. Their work, a crucial milestone for scalable quantum technology, was published today in Nature.
    Quantum computers have the potential to solve problems that are impossible to address with classical computers. Whereas current quantum devices hold tens of qubits — the basic building block of quantum technology — a future universal quantum computer capable of running any quantum algorithm will likely consist of millions to billions of qubits. Quantum dot qubits hold the promise to be a scalable approach as they can be defined using standard semiconductor manufacturing techniques. Veldhorst: ‘By putting four such qubits in a two-by-two grid, demonstrating universal control over all qubits, and operating a quantum circuit that entangles all qubits, we have made an important step forward in realizing a scalable approach for quantum computation.’
    An entire quantum processor
    Electrons trapped in quantum dots, semiconductor structures of only a few tens of nanometres in size, have been studied for more than two decades as a platform for quantum information. Despite all promises, scaling beyond two-qubit logic has remained elusive. To break this barrier, the groups of Menno Veldhorst and Giordano Scappucci decided to take an entirely different approach and started to work with holes (i.e. missing electrons) in germanium. Using this approach, the same electrodes needed to define the qubits could also be used to control and entangle them. ‘No large additional structures have to be added next to each qubit such that our qubits are almost identical to the transistors in a computer chip,’ says Nico Hendrickx, graduate student in the group of Menno Veldhorst and first author of the article. ‘Furthermore, we have obtained excellent control and can couple qubits at will, allowing us to program one, two, three, and four-qubit gates, promising highly compact quantum circuits.’
    2D is key
    After successfully creating the first germanium quantum dot qubit in 2019, the number of qubits on their chips has doubled every year. ‘Four qubits by no means makes a universal quantum computer, of course,’ Veldhorst says. ‘But by putting the qubits in a two-by-two grid we now know how to control and couple qubits along different directions.’ Any realistic architecture for integrating large numbers of qubits requires them to be interconnected along two dimensions.
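    To make the idea of entangling a two-by-two array with only nearest-neighbour operations concrete, here is a generic, idealized sketch: four perfect qubits simulated with plain linear algebra, prepared in a maximally entangled (GHZ) state using single-qubit and two-qubit gates applied only along the edges of a 2x2 grid. This is a textbook circuit written for illustration; it is not the gate set, qubit labeling, or pulse sequence used on the germanium device.

```python
# Idealized 4-qubit statevector simulation (illustration only, not the
# hardware-level operations of the germanium processor).
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |0>
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto |1>

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def single(gate, qubit, n=4):
    """The gate acting on one qubit, identity on the rest."""
    return kron_all([gate if q == qubit else I for q in range(n)])

def cnot(control, target, n=4):
    """CNOT between two qubits, built from projectors."""
    term0 = kron_all([P0 if q == control else I for q in range(n)])
    term1 = kron_all([P1 if q == control else (X if q == target else I) for q in range(n)])
    return term0 + term1

# Label the 2x2 grid: qubits 0-1 on the top row, 2-3 on the bottom row.
# Only nearest-neighbour edges are used: (0,1), (0,2) and (1,3).
state = np.zeros(16)
state[0] = 1.0                                 # start in |0000>
state = single(H, 0) @ state                   # superposition on qubit 0
for control, target in [(0, 1), (0, 2), (1, 3)]:
    state = cnot(control, target) @ state      # spread entanglement across the grid

probs = np.abs(state) ** 2
print({format(i, "04b"): round(p, 3) for i, p in enumerate(probs) if p > 1e-9})
# Expected output: {'0000': 0.5, '1111': 0.5}, i.e. a 4-qubit GHZ state built
# with nothing but nearest-neighbour two-qubit gates on the 2x2 layout.
```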
    Germanium as a highly versatile platform
    Demonstrating four-qubit logic in germanium defines the state of the art for the field of quantum dots and marks an important step toward dense, and extended, two-dimensional semiconductor qubit grids. Besides its compatibility with advanced semiconductor manufacturing, germanium is also a highly versatile material. It has exciting physical properties such as spin-orbit coupling and it can make contact with materials like superconductors. Germanium is therefore considered an excellent platform in several quantum technologies. Veldhorst: ‘Now that we know how to manufacture germanium and operate an array of qubits, the germanium quantum information route can truly begin.’
    Story Source:
    Materials provided by Delft University of Technology.

  • Wafer-thin nanopaper changes from firm to soft at the touch of a button

    Materials science often looks to nature as a model, seeking out special properties of living organisms that could potentially be transferred to materials. A research team led by chemist Professor Andreas Walther of Johannes Gutenberg University Mainz (JGU) has succeeded in endowing materials with a bioinspired property: Wafer-thin stiff nanopaper instantly becomes soft and elastic at the push of a button. “We have equipped the material with a mechanism so that the strength and stiffness can be modulated via an electrical switch,” explained Walther. As soon as an electric current is applied, the nanopaper becomes soft; when the current flow stops, it regains its strength. From an application perspective, this switchability could be interesting for damping materials, for example. The work, which also involved scientists from the University of Freiburg and the Cluster of Excellence on “Living, Adaptive, and Energy-autonomous Materials Systems” (livMatS) funded by the German Research Foundation (DFG), was published in Nature Communications.
    Inspiration from the seafloor: Mechanical switch serves a protective function
    The nature-based inspiration in this case comes from sea cucumbers. These marine creatures have a special defense mechanism: When they are attacked by predators in their habitat on the seafloor, sea cucumbers can adapt and strengthen their tissue so that their soft exterior immediately stiffens. “This is an adaptive mechanical behavior that is fundamentally difficult to replicate,” said Professor Andreas Walther. With their work now published, his team has succeeded in mimicking the basic principle in a modified form using an attractive material and an equally attractive switching mechanism.
    The scientists used cellulose nanofibrils extracted and processed from the cell wall of trees. Nanofibrils are even finer than the microfibers in standard paper and result in a completely transparent, almost glass-like paper. The material is stiff and strong, which makes it appealing for lightweight construction; its characteristics are even comparable to those of aluminum alloys. In their work, the research team applied electricity to these cellulose nanofibril-based nanopapers. Specially designed molecular changes then render the material flexible. The process is reversible and can be controlled by an on/off switch.
    “This is extraordinary. All the materials around us are not very changeable; they do not easily switch from stiff to elastic and vice versa. Here, with the help of electricity, we can do that in a simple and elegant way,” said Walther. The development is thus moving away from classic static materials toward materials with properties that can be adaptively adjusted. This is relevant for mechanical materials, which can thus be made more resistant to fracture, or for adaptive damping materials, which could switch from stiff to compliant when overloaded, for example.
    Targeting a material with its own energy storage for autonomous on/off switching
    At the molecular level, the process involves heating the material by applying a current and thus reversibly breaking cross-linking points. The material softens in correlation with the applied voltage, i.e., the higher the voltage, the more cross-linking points are broken and the softer the material becomes. Professor Andreas Walther’s vision for the future also starts at the point of power supply: While currently a power source is needed to start the reaction, the next goal would be to produce a material with its own energy storage system, so that the reaction is essentially triggered “internally” as soon as, for example, an overload occurs and damping becomes necessary. “Now we still have to flip the switch ourselves, but our dream would be for the material system to be able to accomplish this on its own.”
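    As a caricature of that mechanism, the toy calculation below assumes that Joule heating raises the material's temperature with the square of the applied voltage and that the fraction of broken cross-links follows a smooth thermal transition, so the estimated stiffness interpolates between a stiff and a soft limit. Every number and functional form here is an invented placeholder for illustration; the real material's voltage response is measured, not modeled this simply.

```python
# Invented toy model (all numbers and functional forms are placeholders, not
# values from the study): voltage -> Joule heating -> fraction of broken
# cross-links -> effective stiffness.
import math

E_STIFF, E_SOFT = 10.0, 0.5     # hypothetical stiffness limits (arbitrary units)
T_ROOM = 298.0                  # K
HEAT_PER_V2 = 5.0               # hypothetical heating in K per volt squared
T_SWITCH, WIDTH = 340.0, 10.0   # hypothetical transition temperature and width, K

def broken_fraction(temperature):
    """Smooth (logistic) estimate of the share of cross-links broken at a temperature."""
    return 1.0 / (1.0 + math.exp((T_SWITCH - temperature) / WIDTH))

def stiffness(voltage):
    """Toy estimate of stiffness at a given applied voltage."""
    temperature = T_ROOM + HEAT_PER_V2 * voltage ** 2
    intact = 1.0 - broken_fraction(temperature)
    return E_SOFT + (E_STIFF - E_SOFT) * intact

for volts in (0, 1, 2, 3, 4, 5):
    print(f"{volts} V -> stiffness ~ {stiffness(volts):5.2f} (arbitrary units)")
# Higher voltage -> more broken cross-links -> softer material; switching the
# current off lets the temperature relax and the stiffness recover.
```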
    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz.

  • The dark matter mystery deepens with the demise of a reported detection

    First of two parts

    In mystery stories, the chief suspect almost always gets exonerated before the end of the book. Typically because a key piece of evidence turned out to be wrong.

    In science, key evidence is supposed to be right. But sometimes it’s not. In the mystery of the invisible “dark matter” in space, evidence implicating one chief suspect has now been directly debunked. WIMPs, tiny particles widely regarded as prime dark matter candidates, have failed to appear in an experiment designed specifically to test the lone previous study claiming to detect them.

    For decades, physicists have realized that most of the universe’s matter is nothing like earthly matter, which is made mostly from protons and neutrons. Gravitational influences on visible matter (stars and galaxies) indicate that some dark stuff of unknown identity pervades the cosmos. Ordinary matter accounts for less than 20 percent of the cosmic matter abundance.

    For unrelated reasons, theorists have also long suggested that nature possesses mysterious types of tiny particles predicted by a theoretical mathematical framework known as supersymmetry, or SUSY for short. Those particles would be massive by subatomic standards but would interact only weakly with other matter, and so are known as Weakly Interacting Massive Particles, or WIMPs.

    Of the many possible species of WIMPs, one (presumably the lightest one) should have the properties necessary to explain the dark matter messing with the motion of stars and galaxies (SN: 12/27/12). Way back in the last century, searches began for WIMPs in an effort to demonstrate their existence and identify which species made up the dark matter.

    In 1998, one research team announced apparent success. An experiment called DAMA (for DArk MAtter, get it?), consisting of a particle detector buried under the Italian Alps, seemingly did detect particles with properties matching some physicists’ expectations for a dark matter signal.

    It was a tricky experiment to perform, relying on the premise that space is full of swarms of WIMPs. A detector containing chunks of sodium iodide should give off a flash of light when hit by a WIMP. But other particles from natural radioactive substances would also produce flashes of light even if WIMPs are a myth.

    So the experimenters adopted a clever suggestion proposed earlier by physicists Katherine Freese, David Spergel and Andrzej Drukier, known formally as an annual modulation test. But let’s just call it the June-December approach.

    As the Earth orbits the sun, the sun also moves, traveling around the Milky Way galaxy, carried by a spiral arm in the direction of the constellation Cygnus. If the galaxy really is full of WIMPs, the sun should be constantly plowing through them, generating a “WIMP wind.” (It’s like the wind you feel if you stick your head out of the window of a moving car.) In June, the Earth’s orbit moves it in the same direction as the sun’s motion around the galaxy — into the wind. But in December, the Earth moves in the opposite direction, away from the wind. So more WIMPs should be striking the Earth in June than in December. It’s just like the way your car windshield smashes into more raindrops when driving forward than when going in reverse.
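    The size of the expected effect can be estimated with standard round numbers (typical textbook values, not anything measured by DAMA): the sun circles the galaxy at roughly 230 kilometers per second, while Earth's orbital speed is about 30 kilometers per second, of which roughly half lies along the sun's direction of motion. The detector's speed through the WIMP wind therefore changes by only several percent between June and December, which is why the predicted signal is a small seasonal ripple rather than an on-off switch.

```python
# Rough kinematics of the "WIMP wind" using standard approximate values
# (illustration only; not DAMA or ANAIS numbers).
SUN_SPEED = 230.0     # km/s, sun's motion around the galactic center (approximate)
EARTH_SPEED = 30.0    # km/s, Earth's orbital speed around the sun (approximate)
ALIGNMENT = 0.5       # ~cos 60 deg: rough fraction of Earth's velocity along the sun's motion

june = SUN_SPEED + ALIGNMENT * EARTH_SPEED        # Earth moving into the wind
december = SUN_SPEED - ALIGNMENT * EARTH_SPEED    # Earth moving with the wind
swing = 100.0 * (june - december) / SUN_SPEED
print(f"June: {june:.0f} km/s, December: {december:.0f} km/s, "
      f"peak-to-peak change: {swing:.0f}%")
# About a 13% peak-to-peak change in speed through the halo, i.e. a modulation
# of only a few percent either side of the average.
```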

    As the sun moves through space, it should collide with dark matter particles called WIMPs, if they exist. When the Earth’s revolution carries it in the same direction as the sun, in summer, the resulting “WIMP wind” should appear stronger, with more WIMP collisions detected in June than in December. (Image: GEOATLAS/GRAPHI-OGRE, adapted by T. Dubé)

    At an astrophysics conference in Paris in December 1998, Pierluigi Belli of the DAMA team reported a clear signal (or at least a strong hint) that more particles arrived in June than December. (More precisely, the results showed an annual modulation in frequency of light flashes, peaking around June with a minimum in December.) The DAMA data indicated a WIMP weighing in at 59 billion electron volts, roughly 60 times the mass of a proton.

    But some experts had concerns about the DAMA team’s data analysis. And other searches for WIMPs, with different detectors and strategies, should have found WIMPs if DAMA was right — but didn’t. Still, DAMA persisted. An advanced version of the experiment, DAMA/LIBRA, continued to find the June-December disparity.

    Perhaps DAMA was more sensitive to WIMPs than other experiments. After all, the other searches did not duplicate DAMA’s methods. Some used substances other than sodium iodide as a detecting material, or watched for slight temperature increases as a sign of a WIMP collision rather than flashes of light.

    For that matter, WIMPs might not be what theorists originally thought. DAMA initially reported 60 proton-mass WIMPs based on the belief that the WIMPs collided with iodine atoms. But later data suggested that perhaps the WIMPs were hitting sodium atoms, implying a much lighter WIMP mass — lighter than other experiments had been optimally designed to detect. Yet another possibility: Maybe trace amounts of the metallic element thallium (much heavier atoms than either iodine or sodium) had been the WIMP targets. But a recent review of that proposal found once again that the DAMA results could not be reconciled with the absence of a signal in other experiments.

    And now DAMA’s hope for vindication has been further dashed by a new underground experiment, this one in Spain. Scientists with the ANAIS collaboration have repeated the June-December method with sodium iodide, in an effort to reproduce DAMA’s results with the same method and materials. After three years of operation, the ANAIS team reports no sign of WIMPs.

    To be fair, the no-WIMP conclusion relies on a lot of seriously sophisticated technical analysis. It’s not just a matter of counting light flashes. You have to collect rigorous data on the behavior of nine different sodium iodide modules. You have to correct for the presence of rare radioactive isotopes generated by cosmic ray collisions while the modules were still under construction. And then the statistical analysis needed to discern a winter-summer signal difference is not something you should try at home (unless you’re fully versed in things like the least-square periodogram or the Lomb-Scargle technique). Plus, ANAIS is still going, with plans to collect two more years of data before issuing a final analysis. So the judgment on DAMA’s WIMPs is not necessarily final.
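    For a flavor of what that statistical machinery does, the sketch below generates a synthetic three-year record of daily event counts with a small yearly modulation buried in counting noise, then recovers the roughly one-year period with SciPy's Lomb-Scargle periodogram. The rates, amplitude, and sampling are invented for illustration; the real DAMA and ANAIS analyses also involve background modeling, efficiency corrections, and module-by-module checks.

```python
# Synthetic annual-modulation exercise (invented numbers; not real data).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Three years of daily event counts: a constant background plus a small yearly
# modulation peaking in early June, plus Poisson counting noise.
days = np.arange(3 * 365, dtype=float)
base_rate, amplitude, period = 100.0, 3.0, 365.25     # arbitrary illustration values
peak_day = 152.0                                      # roughly June 2
true_rate = base_rate + amplitude * np.cos(2 * np.pi * (days - peak_day) / period)
counts = rng.poisson(true_rate)

# Lomb-Scargle works on angular frequencies; scan candidate periods of 100-1000 days.
periods = np.linspace(100.0, 1000.0, 2000)
ang_freqs = 2 * np.pi / periods
residuals = counts - counts.mean()                    # remove the flat background
power = lombscargle(days, residuals, ang_freqs)

best_period = periods[np.argmax(power)]
print(f"strongest periodicity near {best_period:.0f} days")   # expect a peak near one year
```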

    Nevertheless, it doesn’t look good for WIMPs, at least for the WIMPs motivated by belief in supersymmetry.   

    Sadly for SUSY fans, searches for WIMPs from space are not the only bad news. Attempts to produce WIMPs in particle accelerators have also so far failed. Dark matter might just turn out to consist of some other kind of subatomic particle.

    If so, it would be a plot twist worthy of Agatha Christie, kind of like Poirot turning out to be the killer. For symmetry has long been physicists’ most reliable friend, guiding many great successes, from Einstein’s relativity theory to the standard model of particles and forces.

    Still, failure to find SUSY particles so far does not necessarily mean they don’t exist. Supersymmetry might just not be as simple as it first seemed. And SUSY particles might simply be harder to detect than scientists originally surmised. But if supersymmetry does turn out not to be so super, scientists might need to reflect on the ways that faith in symmetry can lead them astray.