More stories

  • Applying artificial intelligence to science education

    A new review published in the Journal of Research in Science Teaching highlights the potential of machine learning, a subset of artificial intelligence, in science education. Although the authors initiated their review before the COVID-19 outbreak, the pandemic underscores the need to examine cutting-edge digital technologies as we rethink the future of teaching and learning.
    Based on a review of 47 studies, the investigators developed a framework for conceptualizing machine learning applications in science assessment. The article examines how machine learning has expanded the capacity of science assessment by tapping into complex constructs, improving assessment functionality, and automating scoring.
    Based on their investigation, the researchers identified various ways in which machine learning has transformed traditional science assessment, as well as anticipated impacts that it will likely have in the future (such as providing personalized science learning and changing the process of educational decision-making).
    “Machine learning is increasingly impacting every aspect of our lives, including education,” said lead author Xiaoming Zhai, an assistant professor in the Department of Mathematics and Science Education at the University of Georgia’s Mary Frances Early College of Education. “It is anticipated that the cutting-edge technology may be able to redefine science assessment practices and significantly change education in the future.”

    Story Source:
    Materials provided by Wiley. Note: Content may be edited for style and length.

  • Deep learning takes on synthetic biology

    DNA and RNA have been compared to “instruction manuals” containing the information needed for living “machines” to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables — a task that is daunting to even the most intrepid human brains.
    Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based “toehold” sequences and predict which ones will be most effective at sensing and responding to a desired target sequence. As reported in two papers published concurrently today in Nature Communications, the algorithms could be generalizable to other problems in synthetic biology as well, and could accelerate the development of biotechnology tools to improve science and medicine and help save lives.
    “These achievements are exciting because they mark the starting point of our ability to ask better questions about the fundamental principles of RNA folding, which we need to know in order to achieve meaningful discoveries and build useful biological technologies,” said Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute and Venture Builder at MIT’s Jameel Clinic who is a co-first author of the first of the two papers.
    Getting ahold of toehold switches
    The collaboration between data scientists from the Wyss Institute’s Predictive BioAnalytics Initiative and synthetic biologists in Wyss Core Faculty member Jim Collins’ lab at MIT was created to apply the computational power of machine learning, neural networks, and other algorithmic architectures to complex problems in biology that have so far defied resolution. As a proving ground for their approach, the two teams focused on a specific class of engineered RNA molecules: toehold switches, which are folded into a hairpin-like shape in their “off” state. When a complementary RNA strand binds to a “trigger” sequence trailing from one end of the hairpin, the toehold switch unfolds into its “on” state and exposes sequences that were previously hidden within the hairpin, allowing ribosomes to bind to and translate a downstream gene into protein molecules. This precise control over the expression of genes in response to the presence of a given molecule makes toehold switches very powerful components for sensing substances in the environment, detecting disease, and other purposes.
    However, many toehold switches do not work very well when tested experimentally, even though they have been engineered to produce a desired output in response to a given input based on known RNA folding rules. Recognizing this problem, the teams decided to use machine learning to analyze a large volume of toehold switch sequences and use insights from that analysis to more accurately predict which toeholds reliably perform their intended tasks, which would allow researchers to quickly identify high-quality toeholds for various experiments.

    The first hurdle they faced was that there was no dataset of toehold switch sequences large enough for deep learning techniques to analyze effectively. The authors took it upon themselves to generate a dataset that would be useful to train such models. “We designed and synthesized a massive library of toehold switches, nearly 100,000 in total, by systematically sampling short trigger regions along the entire genomes of 23 viruses and 906 human transcription factors,” said Alex Garruss, a Harvard graduate student working at the Wyss Institute who is a co-first author of the first paper. “The unprecedented scale of this dataset enables the use of advanced machine learning techniques for identifying and understanding useful switches for immediate downstream applications and future design.”
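    For a sense of how such a library might be enumerated in software, here is a minimal Python sketch that tiles candidate trigger windows along a genome sequence; the 30-nucleotide window, the step size, and the toy fragment are illustrative assumptions, not the papers' exact design pipeline.

        # Sketch: tile candidate trigger regions along a genome sequence.
        # Window length, step, and the example fragment are assumptions.
        COMPLEMENT = {"A": "U", "C": "G", "G": "C", "U": "A"}

        def reverse_complement(rna):
            return "".join(COMPLEMENT[b] for b in reversed(rna))

        def sample_triggers(genome_rna, window=30, step=1):
            """Yield (trigger, sensing domain) pairs from sliding windows."""
            for i in range(0, len(genome_rna) - window + 1, step):
                trigger = genome_rna[i:i + window]
                # A toehold switch binds its trigger via the reverse complement.
                yield trigger, reverse_complement(trigger)

        fragment = "AUGGCGUACGUUAGCCGAUACGGAUUCGCAUUGCGAUAGC"
        print(sum(1 for _ in sample_triggers(fragment)))  # 11 candidate triggers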
    Armed with enough data, the teams first employed tools traditionally used for analyzing synthetic RNA molecules to see if they could accurately predict the behavior of toehold switches now that vastly more examples were available. However, none of the methods they tried, including mechanistic modeling based on thermodynamics and physical features, was able to predict with sufficient accuracy which toeholds functioned better.
    A picture is worth a thousand base pairs
    The researchers then explored various machine learning techniques to see if they could create models with better predictive abilities. The authors of the first paper decided to analyze toehold switches not as sequences of bases, but rather as two-dimensional “images” of base-pair possibilities. “We know the baseline rules for how an RNA molecule’s base pairs bond with each other, but molecules are wiggly — they never have a single perfect shape, but rather a probability of different shapes they could be in,” said Nicolaas Angenent-Mari, an MIT graduate student working at the Wyss Institute and co-first author of the first paper. “Computer vision algorithms have become very good at analyzing images, so we created a picture-like representation of all the possible folding states of each toehold switch, and trained a machine learning algorithm on those pictures so it could recognize the subtle patterns indicating whether a given picture would be a good or a bad toehold.”
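    The published models were trained on representations derived from RNA folding calculations; as a simplified stand-in, the sketch below builds a binary base-pairing "image" (including G-U wobble pairs) of the kind a computer vision model could ingest. The hairpin-like example sequence is hypothetical.

        import numpy as np

        # Canonical Watson-Crick pairs plus the G-U wobble pair.
        PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                 ("G", "U"), ("U", "G")}

        def pairing_image(seq):
            """2D image whose pixel (i, j) marks whether bases i and j
            could pair; a crude stand-in for base-pair probability maps."""
            n = len(seq)
            img = np.zeros((n, n), dtype=np.float32)
            for i in range(n):
                for j in range(n):
                    if (seq[i], seq[j]) in PAIRS:
                        img[i, j] = 1.0
            return img

        img = pairing_image("GGGAUACUUCGAAAGUAUCCC")  # toy hairpin sequence
        print(img.shape)  # (21, 21), ready for a convolutional network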
    Another benefit of their visually-based approach is that the team was able to “see” which parts of a toehold switch sequence the algorithm “paid attention” to the most when determining whether a given sequence was “good” or “bad.” They named this interpretation approach Visualizing Secondary Structure Saliency Maps, or VIS4Map, and applied it to their entire toehold switch dataset. VIS4Map successfully identified physical elements of the toehold switches that influenced their performance, and allowed the researchers to conclude that toeholds with more potentially competing internal structures were “leakier” and thus of lower quality than those with fewer such structures, providing insight into RNA folding mechanisms that had not been discovered using traditional analysis techniques.

    “Being able to understand and explain why certain tools work or don’t work has been a secondary goal within the artificial intelligence community for some time, but interpretability needs to be at the forefront of our concerns when studying biology because the underlying reasons for those systems’ behaviors often cannot be intuited,” said Jim Collins, Ph.D., the senior author of the first paper. “Meaningful discoveries and disruptions are the result of deep understanding of how nature works, and this project demonstrates that machine learning, when properly designed and applied, can greatly enhance our ability to gain important insights about biological systems.” Collins is also the Termeer Professor of Medical Engineering and Science at MIT.
    Now you’re speaking my language
    While the first team analyzed toehold switch sequences as 2D images to predict their quality, the second team created two different deep learning architectures that approached the challenge using orthogonal techniques. They then went beyond predicting toehold quality and used their models to optimize and redesign poorly performing toehold switches for different purposes, which they report in the second paper.
    The first model, based on a convolutional neural network (CNN) and multi-layer perceptron (MLP), treats toehold sequences as 1D images, or lines of nucleotide bases, and identifies patterns of bases and potential interactions between those bases to predict good and bad toeholds. The team used this model to create an optimization method called STORM (Sequence-based Toehold Optimization and Redesign Model), which allows for complete redesign of a toehold sequence from the ground up. This “blank slate” tool is optimal for generating novel toehold switches to perform a specific function as part of a synthetic genetic circuit, enabling the creation of complex biological tools.
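    As a rough sketch of that kind of architecture (the sequence length, layer sizes, and other hyperparameters below are illustrative, not the paper's), a 1D convolutional network with an MLP head over one-hot-encoded toehold sequences might look like this in PyTorch:

        import torch
        import torch.nn as nn

        SEQ_LEN = 59  # toehold length assumed here for illustration

        def one_hot(seq):
            """Encode an RNA sequence as a (4, len) tensor of one-hot columns."""
            idx = {"A": 0, "C": 1, "G": 2, "U": 3}
            x = torch.zeros(4, len(seq))
            for i, base in enumerate(seq):
                x[idx[base], i] = 1.0
            return x

        # Convolutional feature extractor plus an MLP regression head that
        # outputs a single toehold quality score (e.g., an on/off ratio).
        model = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * SEQ_LEN, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

        x = one_hot("GGG" + "AUCG" * 14).unsqueeze(0)  # one 59-nt sequence
        print(model(x).shape)  # torch.Size([1, 1])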
    “The really cool part about STORM and the model underlying it is that after seeding it with input data from the first paper, we were able to fine-tune the model with only 168 samples and use the improved model to optimize toehold switches. That calls into question the prevailing assumption that you need to generate massive datasets every time you want to apply a machine learning algorithm to a new problem, and suggests that deep learning is potentially more applicable for synthetic biologists than we thought,” said co-first author Jackie Valeri, a graduate student at MIT and the Wyss Institute.
    The second model is based on natural language processing (NLP), and treats each toehold sequence as a “phrase” consisting of patterns of “words,” eventually learning how certain words are put together to make a coherent phrase. “I like to think of each toehold switch as a haiku poem: like a haiku, it’s a very specific arrangement of phrases within its parent language — in this case, RNA. We are essentially training this model to learn how to write a good haiku by feeding it lots and lots of examples,” said co-first author Pradeep Ramesh, Ph.D., a Visiting Postdoctoral Fellow at the Wyss Institute and Machine Learning Scientist at Sherlock Biosciences.
    Ramesh and his co-authors integrated this NLP-based model with the CNN-based model to create NuSpeak (Nucleic Acid Speech), an optimization approach that allowed them to redesign the last 9 nucleotides of a given toehold switch while keeping the remaining 21 nucleotides intact. This technique allows for the creation of toeholds that are designed to detect the presence of specific pathogenic RNA sequences, and could be used to develop new diagnostic tests.
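    A minimal sketch of that kind of constrained redesign, with a placeholder scorer standing in for the trained model: freeze the first 21 nucleotides and search over candidate 9-nucleotide tails, keeping the best-scoring variant.

        import random

        BASES = "ACGU"

        def redesign_tail(switch30, score, n_candidates=5000):
            """Keep the first 21 nt fixed and search the last 9 nt.
            `score` is a placeholder for a trained quality model."""
            fixed = switch30[:21]
            best, best_score = switch30, score(switch30)
            for _ in range(n_candidates):
                candidate = fixed + "".join(random.choices(BASES, k=9))
                s = score(candidate)
                if s > best_score:
                    best, best_score = candidate, s
            return best

        # Toy scorer that just rewards GC content (illustrative only).
        toy_score = lambda s: sum(b in "GC" for b in s) / len(s)
        print(redesign_tail("AUGCAUGCAUGCAUGCAUGCA" + "UUUUUUUUU", toy_score))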
    The team experimentally validated both of these platforms by optimizing toehold switches designed to sense fragments from the SARS-CoV-2 viral genome. NuSpeak improved the sensors’ performances by an average of 160%, while STORM created better versions of four “bad” SARS-CoV-2 viral RNA sensors whose performances improved by up to 28 times.
    “A real benefit of the STORM and NuSpeak platforms is that they enable you to rapidly design and optimize synthetic biology components, as we showed with the development of toehold sensors for a COVID-19 diagnostic,” said co-first author Katie Collins, an MIT undergraduate at the Wyss Institute who worked with MIT Associate Professor Timothy Lu, M.D., Ph.D., a corresponding author of the second paper.
    “The data-driven approaches enabled by machine learning open the door to really valuable synergies between computer science and synthetic biology, and we’re just beginning to scratch the surface,” said Diogo Camacho, Ph.D., a corresponding author of the second paper who is a Senior Bioinformatics Scientist and co-lead of the Predictive BioAnalytics Initiative at the Wyss Institute. “Perhaps the most important aspect of the tools we developed in these papers is that they are generalizable to other types of RNA-based sequences such as inducible promoters and naturally occurring riboswitches, and therefore can be applied to a wide range of problems and opportunities in biotechnology and medicine.”
    Additional authors of the papers include Wyss Core Faculty member and Professor of Genetics at HMS George Church, Ph.D.; and Wyss and MIT graduate students Miguel Alcantar and Bianca Lepe.
    “Artificial intelligence is a wave that is just beginning to impact science and industry, and has incredible potential for helping to solve intractable problems. The breakthroughs described in these studies demonstrate the power of melding computation with synthetic biology at the bench to develop new and more powerful bioinspired technologies, in addition to leading to new insights into fundamental mechanisms of biological control,” said Don Ingber, M.D., Ph.D., the Wyss Institute’s Founding Director. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences.
    This work was supported by the DARPA Synergistic Discovery and Design program, the Defense Threat Reduction Agency, the Paul G. Allen Frontiers Group, the Wyss Institute for Biologically Inspired Engineering, Harvard University, the Institute for Medical Engineering and Science, the Massachusetts Institute of Technology, the National Science Foundation, the National Human Genome Research Institute, the Department of Energy, the National Institutes of Health, and a CONACyT grant.

  • This 'squidbot' jets around and takes pics of coral and fish

    Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.
    The researchers detail their work in a recent issue of Bioinspiration & Biomimetics.
    “Essentially, we recreated all the key features that squids use for high-speed swimming,” said Michael T. Tolley, one of the paper’s senior authors and a professor in the Department of Mechanical and Aerospace Engineering at UC San Diego. “This is the first untethered robot that can generate jet pulses for rapid locomotion like the squid and can achieve these jet pulses by changing its body shape, which improves swimming efficiency.”
    This squid robot is made mostly from soft materials such as acrylic polymer, with a few rigid, 3D-printed and laser-cut parts. Using soft robots in underwater exploration is important to protect fish and coral, which could be damaged by rigid robots. But soft robots tend to move slowly and have difficulty maneuvering.
    The research team, which includes roboticists and experts in computer simulations as well as experimental fluid dynamics, turned to cephalopods as a good model to solve some of these issues. Squid, for example, can reach the fastest speeds of any aquatic invertebrates thanks to a jet propulsion mechanism.
    Their robot takes a volume of water into its body while storing elastic energy in its skin and flexible ribs. It then releases this energy by compressing its body and generates a jet of water to propel itself.
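    As a back-of-envelope illustration of how a pulsed water jet produces force (the nozzle size and jet speed below are assumptions, not measurements from the study), the momentum flux of the expelled water sets the instantaneous thrust:

        import math

        # Pulsed-jet thrust estimate; all numbers are illustrative
        # assumptions, not values reported in the paper.
        rho = 1000.0      # water density, kg/m^3
        d_nozzle = 0.01   # assumed nozzle diameter, m
        v_jet = 1.0       # assumed jet exit speed, m/s

        A = math.pi * (d_nozzle / 2) ** 2   # nozzle cross-section, m^2
        thrust = rho * A * v_jet ** 2       # momentum flux: rho * A * v^2
        print(f"instantaneous thrust ~ {thrust * 1000:.0f} mN")  # ~79 mN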
    At rest, the squid robot is shaped roughly like a paper lantern, and has flexible ribs, which act like springs, along its sides. The ribs are connected to a circular plate at each end of the robot. One plate is connected to a nozzle that both takes in water and ejects it when the robot’s body contracts. The other plate can carry a waterproof camera or a different type of sensor.
    Engineers first tested the robot in a water testbed in the lab of Professor Geno Pawlak, in the UC San Diego Department of Mechanical and Aerospace Engineering. Then they took it out for a swim in one of the tanks at the UC San Diego Birch Aquarium at the Scripps Institution of Oceanography.
    They demonstrated that the robot could steer by adjusting the direction of the nozzle. As with any underwater robot, waterproofing was a key concern for electrical components such as the battery and camera. They clocked the robot’s speed at about 18 to 32 centimeters per second (roughly half a mile per hour), which is faster than most other soft robots.
    “After we were able to optimize the design of the robot so that it would swim in a tank in the lab, it was especially exciting to see that the robot was able to successfully swim in a large aquarium among coral and fish, demonstrating its feasibility for real-world applications,” said Caleb Christianson, who led the study as part of his Ph.D. work in Tolley’s research group. He is now a senior medical devices engineer at San Diego-based Dexcom.
    Researchers conducted several experiments to find the optimal size and shape for the nozzle that would propel the robot. This in turn helped them increase the robot’s efficiency and its ability to maneuver and go faster. This was done mostly by simulating this kind of jet propulsion, in work led by Professor Qiang Zhu and his team in the Department of Structural Engineering at UC San Diego. The team also learned more about how energy can be stored in the elastic component of the robot’s body and skin, which is later released to generate a jet.
    Video: https://www.youtube.com/watch?v=v-UMDnSB8k0&feature=emb_logo

    Story Source:
    Materials provided by University of California – San Diego. Note: Content may be edited for style and length.

  • Underwater robots to autonomously dock mid-mission to recharge and transfer data

    Robots can be amazing tools for search-and-rescue missions and environmental studies, but eventually they must return to a base to recharge their batteries and upload their data. That can be a challenge if your robot is an autonomous underwater vehicle (AUV) exploring deep ocean waters.
    Now, a Purdue University team has created a mobile docking system for AUVs, enabling them to perform longer tasks without the need for human intervention.
    The team also has published papers on ways to adapt this docking system for AUVs that will explore extraterrestrial lakes, such as those on the moons of Jupiter and Saturn.
    “My research focuses on persistent operation of robots in challenging environments,” said Nina Mahmoudian, an associate professor of mechanical engineering. “And there’s no more challenging environment than underwater.”
    Once a marine robot submerges in water, it loses the ability to transmit and receive radio signals, including GPS data. Some may use acoustic communication, but this method can be difficult and unreliable, especially for long-range transmissions. Because of this, underwater robots currently have a limited range of operation.
    “Typically these robots perform a pre-planned itinerary underwater,” Mahmoudian said. “Then they come to the surface and send out a signal to be retrieved. Humans have to go out, retrieve the robot, get the data, recharge the battery and then send it back out. That’s very expensive, and it limits the amount of time these robots can be performing their tasks.”
    Mahmoudian’s solution is to create a mobile docking station that underwater robots could return to on their own.

    “And what if we had multiple docks, which were also mobile and autonomous?” she said. “The robots and the docks could coordinate with each other, so that they could recharge and upload their data, and then go back out to continue exploring, without the need for human intervention. We’ve developed the algorithms to optimize these trajectories, so we get the optimum use of these robots.”
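    A toy sketch of one such decision, with made-up coordinates and thresholds, illustrates the flavor of the planning problem: when the battery dips below a reserve level, the vehicle diverts to the most convenient mobile dock instead of continuing its survey.

        import math

        def nearest_dock(auv_xy, docks_xy):
            # Choose the dock that minimizes the detour (straight-line here;
            # the real planner optimizes whole trajectories).
            return min(docks_xy, key=lambda d: math.dist(auv_xy, d))

        def next_waypoint(auv_xy, goal_xy, docks_xy, battery, reserve=0.2):
            if battery < reserve:   # low battery: rendezvous to recharge
                return nearest_dock(auv_xy, docks_xy)
            return goal_xy          # otherwise continue the survey

        print(next_waypoint((0, 0), (5, 5), [(1, -1), (4, 2)], battery=0.15))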
    A paper on the mission planning system that Mahmoudian and her team developed has been published in IEEE Robotics and Automation Letters. The researchers validated the method by testing the system on a short mission in Lake Superior.
    “What’s key is that the docking station is portable,” Mahmoudian said. “It can be deployed in a stationary location, but it can also be deployed on autonomous surface vehicles or even on other autonomous underwater vehicles. And it’s designed to be platform-agnostic, so it can be utilized with any AUV. The hardware and software work hand-in-hand.”
    Mahmoudian points out that systems like this already exist in your living room. “An autonomous vacuum, like a Roomba, does its vacuum cleaning, and when it runs out of battery, it autonomously returns to its dock to get recharged,” she said. “That’s exactly what we are doing here, but the environment is much more challenging.”
    If her system can successfully function in a challenging underwater environment, then Mahmoudian sees even greater horizons for this technology.
    “This system can be used anywhere,” she said. “Robots on land, air or sea will be able to operate indefinitely. Search-and-rescue robots will be able to explore much wider areas. They will go into the Arctic and explore the effects of climate change. They will even go into space.”
    Video: https://www.youtube.com/watch?v=_kS0_-qc_r0&_ga=2.99992349.282287155.1601990769-129101217.1578788059
    A patent on this mobile underwater docking station design has been issued. The patent was filed through the Secretary of the U.S. Navy. This work is funded by the National Science Foundation (grant 19078610) and the Office of Naval Research (grant N00014-20-1-2085).

    Story Source:
    Materials provided by Purdue University. Original written by Jared Pike. Note: Content may be edited for style and length.

  • Could megatesla magnetic fields be realized on Earth?

    Magnetic fields are used in various areas of modern physics and engineering, with practical applications ranging from doorbells to maglev trains. Since Nikola Tesla’s discoveries in the 19th century, researchers have strived to realize strong magnetic fields in laboratories for fundamental studies and diverse applications, but the magnetic strength of familiar examples is relatively weak. Earth’s magnetic field is 0.3-0.5 gauss (G), and the magnetic resonance imaging (MRI) scanners used in hospitals produce about 1 tesla (T = 10⁴ G). By contrast, future magnetic fusion devices and maglev trains will require magnetic fields on the kilotesla (kT = 10⁷ G) order. To date, the highest magnetic fields experimentally observed are on the kT order.
    Recently, scientists at Osaka University discovered a novel mechanism called a “microtube implosion,” and demonstrated the generation of megatesla (MT = 10¹⁰ G) order magnetic fields via particle simulations using a supercomputer. Astonishingly, this is three orders of magnitude higher than what has ever been achieved in a laboratory. Such high magnetic fields are expected only in celestial bodies like neutron stars and black holes.
    Irradiating a tiny plastic microtube one-tenth the thickness of a human hair with ultraintense laser pulses produces hot electrons with temperatures of tens of billions of degrees. These hot electrons, along with cold ions, expand into the microtube cavity at velocities approaching the speed of light. Pre-seeding the system with a kT-order magnetic field causes the imploding charged particles to be slightly twisted by the Lorentz force. Such a unique cylindrical flow collectively produces unprecedentedly high spin currents of about 10¹⁵ A/cm² on the target axis and, consequently, generates ultrahigh magnetic fields on the MT order.
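    A rough order-of-magnitude check is consistent with these figures: treating the azimuthal current column as nested solenoid shells gives an axial field of roughly B = μ0·J·R. The ~1 micrometer radius assumed below is illustrative, not the paper's exact geometry.

        # Axial field of an azimuthal current column, B ~ mu0 * J * R.
        mu0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A
        J = 1e15 * 1e4                   # 10^15 A/cm^2 converted to A/m^2
        R = 1e-6                         # assumed column radius, m

        B = mu0 * J * R
        print(f"B ~ {B:.1e} T")          # ~1.3e7 T, i.e. megatesla order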
    The study, conducted by Masakatsu Murakami and colleagues, confirmed that current laser technology can realize MT-order magnetic fields based on this concept. The present concept for generating MT-order magnetic fields is expected to lead to pioneering fundamental research in numerous areas, including materials science, quantum electrodynamics (QED), and astrophysics, as well as other cutting-edge practical applications.

    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  • How mobile apps grab our attention

    As part of an international collaboration, Aalto University researchers have shown that our common understanding of what attracts visual attention to screens, in fact, does not transfer to mobile applications. Despite the widespread use of mobile phones and tablets in our everyday lives, this is the first study to empirically test how users’ eyes follow commonly used mobile app elements.
    Previous work on what attracts visual attention, or visual saliency, has centered on desktop and web interfaces.
    ‘Apps appear differently on a phone than on a desktop computer or browser: they’re on a smaller screen which simply fits fewer elements and, instead of a horizontal view, mobile devices typically use a vertical layout. Until now it was unclear how these factors would affect how apps actually attract our eyes,’ explains Aalto University Professor Antti Oulasvirta.
    In the study, the research team used a large set of representative mobile interfaces and eye tracking to see how users look at screenshots of mobile apps, for both Android and Apple iOS devices.
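    Eye-tracking analyses of this kind typically aggregate fixations into an attention heatmap. The sketch below shows one standard recipe (accumulate fixation durations per pixel, then blur to approximate foveal spread); it is illustrative and not necessarily the authors' exact pipeline.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fixation_heatmap(fixations, width, height, sigma=30):
            """fixations: iterable of (x, y, duration_ms) tuples."""
            heat = np.zeros((height, width))
            for x, y, dur in fixations:
                heat[int(y), int(x)] += dur            # weight by dwell time
            heat = gaussian_filter(heat, sigma=sigma)  # foveal spread
            return heat / heat.max()                   # normalize to [0, 1]

        demo = [(120, 80, 250), (125, 90, 400), (300, 500, 180)]
        print(fixation_heatmap(demo, width=360, height=640).shape)  # (640, 360)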
    According to previous thinking, our eyes should not only jump to bigger or brighter elements, but also stay there longer. Previous studies have also concluded that when we look at certain kinds of images, our attention is drawn to the centre of screens and also spread horizontally across the screen, rather than vertically. The researchers found these principles to have little effect on mobile interfaces.
    ‘It actually came as a surprise that bright colours didn’t affect how people fixate on app details. One possible reason is that the mobile interface itself is full of glossy and colourful elements, so everything on the screen can potentially catch your attention — it’s just how they’re designed. It seems that when everything is made to stand out, nothing pops out in the end,’ says lead author and Post-doctoral Researcher Luis Leiva.
    The study also confirms that some other design principles hold true for mobile apps. Gaze, for example, drifts to the top-left corner, an indication of exploration or scanning. Text plays an important role, likely because of its role in relaying information; on first use, users thus tend to focus on a mobile app’s text elements, such as those in icons, labels and logos.
    Image elements drew visual attention more frequently than expected for the area they cover, though the average length of time users spent looking at images was similar to other app elements. Faces, too, attracted concentrated attention, though when accompanied by text, eyes wander much closer to the location of text.
    ‘Various factors influence where our visual attention goes. For photos, these factors include colour, edges, texture and motion. But when it comes to generated visual content, such as graphical user interfaces, design composition is a critical factor to consider,’ says Dr Hamed Tavakoli, who was also part of the Aalto University research team.
    The study was completed with international collaborators including IIT Goa (India), Yildiz Technical University (Turkey) and Huawei Technologies (China). The team will present the findings on 6 October 2020 at MobileHCI’20, the flagship conference on Human-Computer Interaction with mobile devices and services.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Scientists find evidence of exotic state of matter in candidate material for quantum computers

    Using a novel technique, scientists working at the Florida State University-headquartered National High Magnetic Field Laboratory have found evidence for a quantum spin liquid, a state of matter that is promising as a building block for the quantum computers of tomorrow.
    Researchers discovered the exciting behavior while studying the so-called electron spins in the compound ruthenium trichloride. Their findings, published today in the journal Nature Physics, show that electron spins interact across the material, effectively lowering the overall energy. This type of behavior — consistent with a quantum spin liquid — was detected in ruthenium trichloride at high temperatures and in high magnetic fields.
    Spin liquids, first theorized in 1973, remain something of a mystery. Despite some materials showing promising signs for this state of matter, it is extremely challenging to definitively confirm its existence. However, there is great interest in them because scientists believe they could be used for the design of smarter materials in a variety of applications, such as quantum computing.
    This study provides strong support that ruthenium trichloride is a spin liquid, said physicist Kim Modic, a former graduate student who worked at the MagLab’s pulsed field facility and is now an assistant professor at the Institute of Science and Technology Austria.
    “I think this paper provides a fresh perspective on ruthenium trichloride and demonstrates a new way to look for signatures of spin liquids,” said Modic, the paper’s lead author.
    For decades, physicists have extensively studied the charge of an electron, which carries electricity, paving the way for advances in electronics, energy and other areas. But electrons also have a property called spin, and scientists want to leverage it for technology as well, though the universal behavior of spins is not yet fully understood.

    In simple terms, electrons can be thought of as spinning on an axis, like a top, oriented in some direction. In magnetic materials, these spins align with one another, either in the same or opposite directions. Called magnetic ordering, this behavior can be induced or suppressed by temperature or magnetic field. Once the magnetic order is suppressed, more exotic states of matter could emerge, such as quantum spin liquids.
    In the search for a spin liquid, the research team homed in on ruthenium trichloride. Its honeycomb-like structure, featuring a spin at each site, is like a magnetic version of graphene — another hot topic in condensed matter physics.
    “Ruthenium is much heavier than carbon, which results in strong interactions among the spins,” said MagLab physicist Arkady Shekhter, a co-author on the paper.
    The team expected those interactions would enhance magnetic frustration in the material. That’s a kind of “three’s company” scenario in which two spins pair up, leaving the third in a magnetic limbo, which thwarts magnetic ordering. That frustration, the team hypothesized, could lead to a spin liquid state. Their data ended up confirming their suspicions.
    “It seems like, at low temperatures and under an applied magnetic field, ruthenium trichloride shows signs of the behavior that we’re looking for,” Modic said. “The spins don’t simply orient themselves depending on the alignment of neighboring spins, but rather are dynamic — like swirling water molecules — while maintaining some correlation between them.”
    The findings were enabled by a new technique that the team developed called resonant torsion magnetometry, which precisely measures the behavior of electron spins in high magnetic fields and could lead to many other new insights about magnetic materials, Modic said.

    “We don’t really have the workhorse techniques or the analytical machinery for studying the excitations of electron spins, like we do for charge systems,” Modic said. “The methods that do exist typically require large sample sizes, which may not be available. Our technique is highly sensitive and works on tiny, delicate samples. This could be a game-changer for this area of research.”
    Modic developed the technique as a postdoctoral researcher and then worked with MagLab physicists Shekhter and Ross McDonald, another co-author on the paper, to measure ruthenium trichloride in high magnetic fields.
    Their technique involved mounting ruthenium trichloride samples onto a cantilever the size of a strand of hair. They repurposed a quartz tuning fork — similar to that in a quartz crystal watch — to vibrate the cantilever in a magnetic field. Instead of using it to tell time precisely, they measured the frequency of vibration to study the interaction between the spins in ruthenium trichloride and the applied magnetic field. They performed their measurements in two powerful magnets at the National MagLab.
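    Schematically, the magnetotropic coefficient k (the second derivative of the free energy with respect to field angle) can be read off from the fractional frequency shift via Δf/f0 ≈ k/(2K), where K is the lever's effective stiffness. The numbers below are illustrative, not values from the experiment.

        # Schematic extraction of the magnetotropic coefficient k from a
        # measured resonance shift, assuming delta_f / f0 ~ k / (2 K).
        f0 = 32768.0     # quartz tuning-fork resonance, Hz
        delta_f = 0.05   # in-field frequency shift, Hz (illustrative)
        K = 1e-4         # assumed effective lever stiffness, J/rad^2

        k = 2 * K * delta_f / f0
        print(f"k ~ {k:.2e} J/rad^2")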
    “The beauty of our approach is that it’s a relatively simple setup, which allowed us to carry out our measurements in both a 35-tesla resistive magnet and a 65-tesla pulsed field magnet,” Modic said.
    The next step in the research will be to study this system in the MagLab’s world-record 100-tesla pulsed magnet.
    “That high of a magnetic field should allow us to directly observe the suppression of the spin liquid state, which will help us learn even more about this compound’s inner workings,” Shekhter said.

  • New algorithm could unleash the power of quantum computers

    A new algorithm that fast forwards simulations could bring greater utility to current and near-term quantum computers, opening the way for applications to run past the strict time limits that hamper many quantum calculations.
    “Quantum computers have a limited time to perform calculations before their useful quantum nature, which we call coherence, breaks down,” said Andrew Sornborger of the Computer, Computational, and Statistical Sciences division at Los Alamos National Laboratory, and senior author on a paper announcing the research. “With a new algorithm we have developed and tested, we will be able to fast forward quantum simulations to solve problems that were previously out of reach.”
    Computers built of quantum components, known as qubits, can potentially solve extremely difficult problems that exceed the capabilities of even the most powerful modern supercomputers. Applications include faster analysis of large data sets, drug development, and unraveling the mysteries of superconductivity, to name a few of the possibilities that could lead to major technological and scientific breakthroughs in the near future.
    Recent experiments have demonstrated the potential for quantum computers to solve problems in seconds that would take the best conventional computer millennia to complete. The challenge remains, however, to ensure a quantum computer can run meaningful simulations before quantum coherence breaks down.
    “We use machine learning to create a quantum circuit that can approximate a large number of quantum simulation operations all at once,” said Sornborger. “The result is a quantum simulator that replaces a sequence of calculations with a single, rapid operation that can complete before quantum coherence breaks down.”
    The Variational Fast Forwarding (VFF) algorithm that the Los Alamos researchers developed is a hybrid combining aspects of classical and quantum computing. Although well-established theorems exclude the potential of general fast forwarding with absolute fidelity for arbitrary quantum simulations, the researchers get around the problem by tolerating small calculation errors for intermediate times in order to provide useful, if slightly imperfect, predictions.
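    The essence of fast forwarding is that a diagonalized evolution can be iterated without growing circuit depth: if the short-time step diagonalizes as U = W D W†, then U^n = W D^n W†, and only the diagonal phases depend on n. The numpy sketch below shows this structure classically; VFF itself learns W and D as parameterized quantum circuits.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        H = (A + A.conj().T) / 2          # random Hermitian "Hamiltonian"
        dt, n = 0.1, 50

        E, W = np.linalg.eigh(H)          # H = W diag(E) W^dagger
        D = np.exp(-1j * E * dt)          # eigenphases of the short step U(dt)

        # Fast forward: one fixed-depth object evolves for arbitrary n * dt.
        U_fast = W @ np.diag(D ** n) @ W.conj().T
        U_slow = np.linalg.matrix_power(W @ np.diag(D) @ W.conj().T, n)
        print(np.allclose(U_fast, U_slow))  # True: same evolution, fixed depth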
    In principle, the approach allows scientists to quantum-mechanically simulate a system for as long as they like. Practically speaking, the errors that build up as simulation times increase limit potential calculations. Still, the algorithm allows simulations far beyond the time scales that quantum computers can achieve without the VFF algorithm.
    One quirk of the process is that it takes twice as many qubits to fast forward a calculation as would make up the quantum computer being fast forwarded. In the newly published paper, for example, the research group confirmed their approach by implementing a VFF algorithm on a two-qubit computer to fast forward the calculations that would be performed in a one-qubit quantum simulation.
    In future work, the Los Alamos researchers plan to explore the limits of the VFF algorithm by increasing the number of qubits they fast forward, and checking the extent to which they can fast forward systems. The research was published September 18, 2020 in the journal npj Quantum Information.

    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.