More stories

  • Scientists manipulate magnets at the atomic scale

    Fast and energy-efficient future data processing technologies are on the horizon after an international team of scientists successfully manipulated magnets at the atomic level.
    Physicist Dr Rostislav Mikhaylovskiy from Lancaster University said: “With stalling efficiency trends of current technology, new scientific approaches are especially valuable. Our discovery of the atomically-driven ultrafast control of magnetism opens broad avenues for fast and energy-efficient future data processing technologies essential to keep up with our data hunger.”
    Magnetic materials are heavily used in modern life with applications ranging from fridge magnets to Google and Amazon’s data centers used to store digital information.
    These materials host trillions of mutually aligned elementary magnetic moments or “spins,” whose alignment is largely governed by the arrangement of the atoms in the crystal lattice.
    The spin can be seen as an elementary “compass needle,” typically depicted as an arrow showing the direction from the north to the south pole. In magnets, all spins are aligned along the same direction by a force called the exchange interaction, one of the strongest quantum effects and the one responsible for the very existence of magnetic materials.
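    As a point of reference (a standard textbook expression, not an equation from the paper), the exchange interaction between neighboring spins is usually written as a Heisenberg coupling:

        H_{\mathrm{exch}} = -J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j

    Here J sets the strength of the coupling: a positive J favors parallel spins, as in a ferromagnet, while a negative J favors the alternating up-and-down arrangement of an antiferromagnet like the material studied here.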
    The ever-growing demand for efficient magnetic data processing calls for novel means to manipulate the magnetic state, and manipulating the exchange interaction would be the most efficient and ultimately the fastest way to control magnetism.

    To achieve this result, the researchers used the fastest and the strongest stimulus available: ultrashort laser pulse excitation. They used light to optically stimulate specific atomic vibrations of the magnet’s crystal lattice which extensively disturbed and distorted the structure of the material.
    The results of this study are published in the journal Nature Materials by the international team from Lancaster, Delft, Nijmegen, Liege and Kiev.
    PhD student Jorrit Hortensius from the Technical University of Delft said: “We optically shake the lattice of a magnet that is made up of alternating up and down small magnetic moments and therefore does not have a net magnetization, unlike the familiar fridge magnets.”
    After shaking the crystal for a very short period of time, the researchers measured how the magnetic properties evolve directly in time. Following the shaking, the magnetic system of the antiferromagnet changes, such that a net magnetization appears: for a fraction of time the material becomes similar to the everyday fridge magnets.
    This all occurs within an unprecedentedly short time of less than a few picoseconds (a picosecond is a millionth of a millionth of a second). This time is not only orders of magnitude shorter than the recording time in modern computer hard drives, but also exactly matches the fundamental limit for magnetization switching.
    Dr Rostislav Mikhaylovskiy from Lancaster University explains: “It has long been thought that the control of magnetism by atomic vibrations is restricted to acoustic excitations (sound waves) and cannot be faster than nanoseconds. We have reduced the magnetic switching time by 1000 times, which is a major milestone in itself.”
    Dr Dmytro Afanasiev from the Technical University of Delft adds: “We believe that our findings will stimulate further research into exploring and understanding the exact mechanisms governing the ultrafast lattice control of the magnetic state.”

    Story Source:
    Materials provided by Lancaster University. Note: Content may be edited for style and length.

  • Algorithm that performs as accurately as dermatologists

    A study has now been presented that strengthens the evidence for using AI solutions in skin cancer diagnostics. With an algorithm they devised themselves, scientists at the University of Gothenburg show that the technology can perform at the same level as dermatologists in assessing the severity of skin melanoma.
    The study, published in the Journal of the American Academy of Dermatology, and its results are the work of a research group at the Department of Dermatology and Venereology at Sahlgrenska Academy, University of Gothenburg.
    The study was conducted at Sahlgrenska University Hospital in Gothenburg. Its purpose was, through machine learning (ML), to train an algorithm to determine whether a skin melanoma is invasive and carries a risk of spreading (metastasizing), or whether it remains at a growth stage in which it is confined to the epidermis, with no risk of metastasis.
    The algorithm was trained and validated on 937 dermatoscopic images of melanoma, and subsequently tested on 200 cases. All the cases included were diagnosed by a dermatopathologist.
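    The article does not describe the model architecture, but the overall workflow, training and tuning on one set of labeled dermatoscopic images and then evaluating once on a held-out test set, can be sketched roughly as follows (a generic convolutional network in PyTorch; the folder layout, backbone choice, and hyperparameters are illustrative assumptions, not details from the study):

        # Minimal sketch of a binary image classifier ("invasive" vs. "in situ" melanoma),
        # trained and tuned on one set of images and later tested on a held-out set.
        # Paths, model choice, and hyperparameters are illustrative assumptions.
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, transforms, models

        transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

        # Hypothetical layout: data/train/invasive, data/train/in_situ, etc.
        train_set = datasets.ImageFolder("data/train", transform=transform)
        val_set = datasets.ImageFolder("data/val", transform=transform)

        model = models.resnet18(weights=None)          # generic CNN backbone
        model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: invasive / in situ

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(10):
            model.train()
            for images, labels in DataLoader(train_set, batch_size=32, shuffle=True):
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()

            # Validation accuracy guides model selection before the one-time test evaluation.
            model.eval()
            correct = total = 0
            with torch.no_grad():
                for images, labels in DataLoader(val_set, batch_size=32):
                    correct += (model(images).argmax(1) == labels).sum().item()
                    total += labels.numel()
            print(f"epoch {epoch}: validation accuracy {correct / total:.3f}")

    In the study itself, the corresponding split was 937 dermatoscopic images for training and validation and 200 held-out cases for the final test.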
    The majority of melanomas are found by patients rather than doctors. This suggests that, in most cases, diagnosis is relatively easy. Before surgery, however, it is often much more difficult to determine the stage the melanoma has reached.
    To make the classifications more accurate, dermatologists use dermatoscopes — instruments that combine a type of magnifying glass with bright illumination. In recent years, interest in using ML for skin tumor classifications has increased, and several publications have shown that ML algorithms can perform on par with, or even better than, experienced dermatologists.
    The current study is now giving a further boost to research in this field. When the same classification task was performed by the algorithm on the one hand and seven independent dermatologists on the other, the result was a draw.
    “None of the dermatologists significantly outperformed the ML algorithm,” states Sam Polesie, a researcher at the University of Gothenburg and specialist doctor at Sahlgrenska University Hospital, who is the corresponding author of the study.
    In a developed form, the algorithm could serve as support in the task of assessing the severity of skin melanoma before surgery. The classification affects how extensive an operation needs to be, and is therefore important for both the patient and the surgeon.
    “The results of the study are interesting, and the hope is that the algorithm can be used as clinical decision support in the future. But it needs refining further, and prospective studies that monitor patients over time are necessary, too,” Polesie concludes.

    Story Source:
    Materials provided by University of Gothenburg. Note: Content may be edited for style and length.

  • Detecting single molecules and diagnosing diseases with a smartphone

    Biomarkers play a central role in the diagnosis of disease and assessment of its course. Among the markers now in use are genes, proteins, hormones, lipids and other classes of molecules. Biomarkers can be found in the blood, in cerebrospinal fluid, urine and various types of tissues, but most of them have one thing in common: They occur in extremely low concentrations, and are therefore technically challenging to detect and quantify.
    Many detection procedures use molecular probes, such as antibodies or short nucleic-acid sequences, which are designed to bind to specific biomarkers. When a probe recognizes and binds to its target, chemical or physical reactions give rise to fluorescence signals. Such methods work well, provided they are sensitive enough to recognize the relevant biomarker in a high percentage of all patients who carry it in their blood. In addition, before such fluorescence-based tests can be used in practice, the biomarkers themselves or their signals must be amplified. The ultimate goal is to enable medical screening to be carried out directly on patients, without having to send the samples to a distant laboratory for analysis.
    Molecular antennas amplify fluorescence signals
    Philip Tinnefeld, who holds a Chair in Physical Chemistry at LMU, has developed a strategy for determining levels of biomarkers present in low concentrations. He has succeeded in coupling DNA probes to tiny particles of gold or silver. Pairs of particles (‘dimers’) act as nano-antennas that amplify the fluorescence signals. The trick works as follows: Interactions between the nanoparticles and incoming light waves intensify the local electromagnetic fields, and this in turn leads to a massive increase in the amplitude of the fluorescence. In this way, bacteria that contain antibiotic resistance genes and even viruses can be specifically detected.
    “DNA-based nano-antennas have been studied for the last few years,” says Kateryna Trofymchuk, joint first author of the study. “But the fabrication of these nanostructures presents challenges.” Philip Tinnefeld’s research group has now succeeded in configuring the components of their nano-antennas more precisely, and in positioning the DNA molecules that serve as capture probes at the site of signal amplification. Together, these modifications enable the fluorescence signal to be more effectively amplified. Furthermore, in the minuscule volume involved, which is on the order of zeptoliters (a zeptoliter is 10⁻²¹ liters), even more molecules can be captured.
    The high degree of positioning control is made possible by DNA nanotechnology, which exploits the structural properties of DNA to guide the assembly of all sorts of nanoscale objects — in extremely large numbers. “In one sample, we can simultaneously produce billions of these nano-antennas, using a procedure that basically consists of pipetting a few solutions together,” says Trofymchuk.
    Routine diagnostics on the smartphone
    “In the future,” says Viktorija Glembockyte, also joint first author of the publication, “our technology could be utilized for diagnostic tests even in areas in which access to electricity or laboratory equipment is restricted. We have shown that we can directly detect small fragments of DNA in blood serum, using a portable, smartphone-based microscope that runs on a conventional USB power pack to monitor the assay.” Newer smartphones are usually equipped with pretty good cameras. Apart from that, all that’s needed is a laser and a lens — two readily available and cheap components. The LMU researchers used this basic recipe to construct their prototypes.
    They went on to demonstrate that DNA fragments that are specific for antibiotic resistance genes in bacteria could be detected by this set-up. But the assay could be easily modified to detect a whole range of interesting target types, such as viruses. Tinnefeld is optimistic: “The past year has shown that there is always a need for new and innovative diagnostic methods, and perhaps our technology can one day contribute to the development of an inexpensive and reliable diagnostic test that can be carried out at home.”

    Story Source:
    Materials provided by Ludwig-Maximilians-Universität München. Note: Content may be edited for style and length.

  • New machine learning theory raises questions about nature of science

    A novel computer algorithm, or set of rules, that accurately predicts the orbits of planets in the solar system could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities, which are designed to harvest on Earth the fusion energy that powers the sun and stars.
    The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”
    Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a “serving algorithm,” then made accurate predictions of the orbits of other planets in the solar system without using Newton’s laws of motion and gravitation. “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data,” Qin said. “There is no law of physics in the middle.”
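    The paper’s method constructs what Qin calls a discrete field theory from the data; the same data-to-data idea can be illustrated, much more crudely, by learning a one-step update rule for an orbit directly from recorded states and then iterating it forward (the file name, network size, and prediction horizon below are assumptions made for illustration, not details from the paper):

        # Sketch: learn a one-step orbital update map directly from observed states,
        # then roll it forward to predict future positions without invoking Newton's laws.
        # The data source, network size, and time step are illustrative assumptions.
        import numpy as np
        import torch
        import torch.nn as nn

        # Suppose `orbit_observations.npy` (hypothetical) holds observed (x, y, vx, vy)
        # samples of a planet's orbit, one row per observation time.
        states = np.load("orbit_observations.npy")            # shape (T, 4)
        inputs = torch.tensor(states[:-1], dtype=torch.float32)
        targets = torch.tensor(states[1:], dtype=torch.float32)

        # Small network mapping the state at one time to the state at the next time.
        model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 4))
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        for step in range(5000):
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(model(inputs), targets)
            loss.backward()
            optimizer.step()

        # Prediction: start from the last observed state and iterate the learned map.
        with torch.no_grad():
            state = torch.tensor(states[-1:], dtype=torch.float32)
            predicted = []
            for _ in range(1000):
                state = model(state)
                predicted.append(state.numpy().ravel())

    Qin’s discrete field theories go further than a plain network like this one: they are built to respect the underlying structure of the dynamics, which is what keeps long-range predictions from drifting.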
    The program does not happen upon accurate predictions by accident. “Hong taught the program the underlying principle used by nature to determine the dynamics of any physical system,” said Joshua Burby, a physicist at the DOE’s Los Alamos National Laboratory who earned his Ph.D. at Princeton under Qin’s mentorship. “The payoff is that the network learns the laws of planetary motion after witnessing very few training examples. In other words, his code really ‘learns’ the laws of physics.”
    Machine learning is what makes computer programs like Google Translate possible. Google Translate sifts through a vast amount of information to determine how frequently one word in one language has been translated into a word in the other language. In this way, the program can make an accurate translation without actually learning either language.
    The process also appears in philosophical thought experiments like John Searle’s Chinese Room. In that scenario, a person who did not know Chinese could nevertheless “translate” a Chinese sentence into English or any other language by using a set of instructions, or rules, that would substitute for understanding. The thought experiment raises questions about what, at root, it means to understand anything at all, and whether understanding implies that something else is happening in the mind besides following rules.

    Qin was inspired in part by Oxford philosopher Nick Bostrom’s thought experiment proposing that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. “If we live in a simulation, our world has to be discrete,” Qin said. The black-box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.
    The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.
    Qin and Eric Palmerduca, a graduate student in the Princeton University Program in Plasma Physics, are now developing ways to use discrete field theories to predict the behavior of particles of plasma in fusion experiments conducted by scientists around the world. The most widely used fusion facilities are doughnut-shaped tokamaks that confine the plasma in powerful magnetic fields.
    Fusion, the power that drives the sun and stars, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe — to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
    “In a magnetic fusion device, the dynamics of plasmas are complex and multi-scale, and the effective governing laws or computational models for a particular physical process that we are interested in are not always clear,” Qin said. “In these scenarios, we can apply the machine learning technique that I developed to create a discrete field theory and then apply this discrete field theory to understand and predict new experimental observations.”
    This process opens up questions about the nature of science itself. Don’t scientists want to develop physics theories that explain the world, instead of simply amassing data? Aren’t theories fundamental to physics and necessary to explain and understand phenomena?

    “I would argue that the ultimate goal of any scientist is prediction,” Qin said. “You might not necessarily need a law. For example, if I can perfectly predict a planetary orbit, I don’t need to know Newton’s laws of gravitation and motion. You could argue that by doing so you would understand less than if you knew Newton’s laws. In a sense, that is correct. But from a practical point of view, making accurate predictions is not doing anything less.”
    Machine learning could also open up possibilities for more research. “It significantly broadens the scope of problems that you can tackle because all you need to get going is data,” Palmerduca said.
    The technique could also lead to the development of a traditional physical theory. “While in some sense this method precludes the need of such a theory, it can also be viewed as a path toward one,” Palmerduca said. “When you’re trying to deduce a theory, you’d like to have as much data at your disposal as possible. If you’re given some data, you can use machine learning to fill in gaps in that data or otherwise expand the data set.”
    Support for this research came from the DOE Office of Science (Fusion Energy Sciences).

  • Applying quantum computing to a particle process

    A team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) used a quantum computer to successfully simulate an aspect of particle collisions that is typically neglected in high-energy physics experiments, such as those that occur at CERN’s Large Hadron Collider.
    The quantum algorithm they developed accounts for the complexity of parton showers, the complicated bursts of particles produced in collisions through successive particle production and decay processes.
    Classical algorithms typically used to model parton showers, such as the popular Markov Chain Monte Carlo algorithms, overlook several quantum-based effects, the researchers note in a study published online Feb. 10 in the journal Physical Review Letters that details their quantum algorithm.
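    To give a flavor of what a classical, probabilistic shower simulation looks like, the toy sketch below repeatedly samples whether each particle emits at a given step and tallies the resulting final particle counts; it is a generic Monte Carlo illustration, not the collaboration’s algorithm, and the step count and emission probability are arbitrary assumptions:

        # Toy classical parton-shower sketch: at each step, every particle may emit an
        # additional particle with some fixed probability. Repeating many showers samples
        # the distribution of emission histories (here summarized by final particle count).
        import random
        from collections import Counter

        def shower(steps: int = 4, p_emit: float = 0.3) -> int:
            """Return the final particle count of one simulated shower."""
            n_particles = 1
            for _ in range(steps):
                emissions = sum(1 for _ in range(n_particles) if random.random() < p_emit)
                n_particles += emissions
            return n_particles

        n_runs = 100_000
        histories = Counter(shower() for _ in range(n_runs))
        for count in sorted(histories):
            print(f"{count} particles: {histories[count] / n_runs:.3f}")

    A quantum algorithm like the one described below keeps such emission histories in superposition rather than sampling them one at a time, which is how it can include quantum effects that classical sampling overlooks.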
    “We’ve essentially shown that you can put a parton shower on a quantum computer with efficient resources,” said Christian Bauer, who is Theory Group leader and serves as principal investigator for quantum computing efforts in Berkeley Lab’s Physics Division, “and we’ve shown there are certain quantum effects that are difficult to describe on a classical computer that you could describe on a quantum computer.” Bauer led the recent study.
    Their approach meshes quantum and classical computing: It uses the quantum solution only for the part of the particle collisions that cannot be addressed with classical computing, and uses classical computing to address all of the other aspects of the particle collisions.
    Researchers constructed a so-called “toy model,” a simplified theory that can be run on an actual quantum computer while still containing enough complexity that it cannot be simulated using classical methods.

    “What a quantum algorithm does is compute all possible outcomes at the same time, then picks one,” Bauer said. “As the data gets more and more precise, our theoretical predictions need to get more and more precise. And at some point these quantum effects become big enough that they actually matter,” and need to be accounted for.
    In constructing their quantum algorithm, researchers factored in the different particle processes and outcomes that can occur in a parton shower, accounting for particle state, particle emission history, whether emissions occurred, and the number of particles produced in the shower, including separate counts for bosons and for two types of fermions.
    The quantum computer “computed these histories at the same time, and summed up all of the possible histories at each intermediate stage,” Bauer noted.
    The research team used the IBM Q Johannesburg chip, a quantum computer with 20 qubits. Each qubit, or quantum bit, is capable of representing a zero, a one, or a so-called superposition state in which it represents both a zero and a one simultaneously. This superposition is what makes qubits uniquely powerful compared to standard computing bits, which can represent only a zero or a one.
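    As a minimal illustration of superposition (a generic single-qubit example written for Qiskit, not the study’s parton-shower circuit), a Hadamard gate puts one qubit into an equal superposition of zero and one, and measuring it many times returns each outcome about half the time:

        # Put one qubit into superposition with a Hadamard gate and measure it repeatedly.
        # Generic illustration of superposition; requires the qiskit and qiskit-aer packages.
        from qiskit import QuantumCircuit
        from qiskit_aer import AerSimulator

        qc = QuantumCircuit(1, 1)
        qc.h(0)            # Hadamard: |0> -> (|0> + |1>) / sqrt(2)
        qc.measure(0, 0)

        counts = AerSimulator().run(qc, shots=1000).result().get_counts()
        print(counts)      # roughly {'0': ~500, '1': ~500}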
    The researchers constructed a four-step quantum computer circuit using five qubits; the algorithm requires 48 operations. They noted that noise in the quantum computer is likely to blame for differences between its results and those of an ideal quantum simulator.

    While the team’s pioneering efforts to apply quantum computing to a simplified portion of particle collider data are promising, Bauer said that he doesn’t expect quantum computers to have a large impact on the high-energy physics field for several years — at least until the hardware improves.
    Quantum computers will need more qubits and much lower noise to have a real breakthrough, Bauer said. “A lot depends on how quickly the machines get better.” But he noted that there is a huge and growing effort to make that happen, and it’s important to start thinking about these quantum algorithms now to be ready for the coming advances in hardware.
    Such quantum leaps in technology are a prime focus of an Energy Department-supported collaborative quantum R&D center that Berkeley Lab is a part of, called the Quantum Systems Accelerator.
    As hardware improves it will be possible to account for more types of bosons and fermions in the quantum algorithm, which will improve its accuracy.
    Such algorithms should eventually have broad impact in the high-energy physics field, he said, and could also find application in heavy-ion-collider experiments.
    Also participating in the study were Benjamin Nachman and Davide Provasoli of the Berkeley Lab Physics Division, and Wibe de Jong of the Berkeley Lab Computational Research Division.
    This work was supported by the U.S. Department of Energy Office of Science. It used resources at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science user facility.

  • Spontaneous quantum error correction demonstrated

    To build a universal quantum computer from fragile quantum components, effective implementation of quantum error correction (QEC) is an essential requirement and a central challenge. QEC is used in quantum computing, which has the potential to solve scientific problems beyond the scope of supercomputers, to protect quantum information from errors caused by various sources of noise.
    Published by the journal Nature, research co-authored by University of Massachusetts Amherst physicist Chen Wang, graduate students Jeffrey Gertler and Shruti Shirol, and postdoctoral researcher Juliang Li takes a step toward building a fault-tolerant quantum computer. They have realized a novel type of QEC where the quantum errors are spontaneously corrected.
    Today’s computers are built with transistors representing classical bits (0’s or 1’s). Quantum computing is an exciting new paradigm of computation using quantum bits (qubits) where quantum superposition can be exploited for exponential gains in processing power. Fault-tolerant quantum computing may immensely advance new materials discovery, artificial intelligence, biochemical engineering and many other disciplines.
    Since qubits are intrinsically fragile, the most outstanding challenge of building such powerful quantum computers is efficient implementation of quantum error correction. Existing demonstrations of QEC are active, meaning that they require periodically checking for errors and immediately fixing them, which is very demanding in hardware resources and hence hinders the scaling of quantum computers.
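    The “active” approach can be pictured with a classical analogy: store each logical bit as three physical copies, let noise occasionally flip a copy, and periodically restore agreement by majority vote. The sketch below is only that classical analogy (real QEC must catch errors without directly reading out the protected quantum state), but it shows the continual check-and-fix cycle that makes active schemes expensive:

        # Classical analogy for active error correction: encode a bit as three copies,
        # let noise flip copies at random, and periodically correct by majority vote.
        # This illustrates the active check-and-fix cycle, not an actual quantum code.
        import random

        def apply_noise(bits, p_flip=0.05):
            """Each copy flips independently with probability p_flip."""
            return [b ^ 1 if random.random() < p_flip else b for b in bits]

        def correct(bits):
            """Active step: check the copies and reset all of them to the majority value."""
            majority = 1 if sum(bits) >= 2 else 0
            return [majority] * 3

        logical = [1, 1, 1]                  # encoded logical "1"
        for cycle in range(1000):
            logical = apply_noise(logical)   # noise acts between correction rounds
            logical = correct(logical)       # periodic check-and-fix

        print("decoded bit:", 1 if sum(logical) >= 2 else 0)

    Real QEC is harder because the encoded quantum state cannot simply be read out and copied, but this check-and-fix pattern is what active schemes implement in hardware.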
    In contrast, the researchers’ experiment achieves passive QEC by tailoring the friction (or dissipation) experienced by the qubit. Because friction is commonly considered the nemesis of quantum coherence, this result may appear quite surprising. The trick is that the dissipation has to be designed specifically in a quantum manner. This general strategy has been known in theory for about two decades, but a practical way to obtain such dissipation and put it in use for QEC has been a challenge.
    “Although our experiment is still a rather rudimentary demonstration, we have finally fulfilled this counterintuitive theoretical possibility of dissipative QEC,” says Chen. “Looking forward, the implication is that there may be more avenues to protect our qubits from errors and do so less expensively. Therefore, this experiment raises the outlook of potentially building a useful fault-tolerant quantum computer in the mid to long run.”
    Chen describes in layman’s terms how strange the quantum world can be. “As in Austrian physicist Erwin Schrödinger’s famous (or infamous) example, a cat packed in a closed box can be dead or alive at the same time. Each logical qubit in our quantum processor is very much like a mini-Schrödinger’s cat. In fact, we quite literally call it a ‘cat qubit.’ Having lots of such cats can help us solve some of the world’s most difficult problems.
    “Unfortunately, it is very difficult to keep a cat in that state, since any gas, light, or anything else leaking into the box will destroy the magic: The cat will become either dead or just a regular live cat,” explains Chen. “The most straightforward strategy to protect a Schrödinger’s cat is to make the box as tight as possible, but that also makes it harder to use it for computation. What we just demonstrated was akin to painting the inside of the box in a special way, and that somehow helps the cat better survive the inevitable harm of the outside world.”

    Story Source:
    Materials provided by University of Massachusetts Amherst. Note: Content may be edited for style and length.

  • Mathematical modeling suggests kids half as susceptible to COVID-19 as adults

    A new computational analysis suggests that people under the age of 20 are about half as susceptible to COVID-19 infection as adults, and they are less likely to infect others. Itai Dattner of the University of Haifa, Israel, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Earlier studies have found differences in symptoms and the clinical course of COVID-19 in children compared to adults. Others have reported that a lower proportion of children are diagnosed compared to older age groups. However, only a few studies have compared transmission patterns between age groups, and their conclusions are not definitive.
    To better understand the susceptibility and infectivity of children, Dattner and colleagues fitted mathematical and statistical models of transmission within households to a dataset of COVID-19 testing results from the densely populated city of Bnei Brak, Israel. The dataset covered 637 households whose members all underwent PCR testing for active infection in the spring of 2020. Some individuals also received serology testing for SARS-CoV-2 antibodies.
    By adjusting model parameters to fit the data, the researchers found that people under 20 are 43 percent as susceptible as people over 20. With an infectivity estimated at 63 percent of that of adults, children are also less likely to spread COVID-19 to others. The researchers also found that children are more likely than adults to receive a negative PCR result despite actually being infected.
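    The full models in the paper are fitted to the household PCR and serology data, but the role of the two estimated quantities can be sketched with a simple per-contact transmission probability in which children carry a relative susceptibility of 0.43 and a relative infectivity of 0.63 (the baseline adult-to-adult probability below is an invented value for illustration, not an estimate from the study):

        # Sketch: within-household transmission probability with age-dependent relative
        # susceptibility and infectivity, using the point estimates quoted in the article.
        # The baseline per-contact probability is an illustrative assumption.
        BASE_PROB = 0.20                     # assumed adult-to-adult per-contact probability
        REL_SUSCEPTIBILITY = {"adult": 1.0, "child": 0.43}
        REL_INFECTIVITY = {"adult": 1.0, "child": 0.63}

        def transmission_prob(infector: str, contact: str) -> float:
            """Probability that `infector` infects `contact` in one household contact."""
            return BASE_PROB * REL_INFECTIVITY[infector] * REL_SUSCEPTIBILITY[contact]

        for infector in ("adult", "child"):
            for contact in ("adult", "child"):
                print(f"{infector} -> {contact}: {transmission_prob(infector, contact):.3f}")

    In the study itself, these relative factors are parameters estimated by fitting the household transmission models to the 637 Bnei Brak households rather than assumed in advance.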
    These findings could explain worldwide reports that a lower proportion of children are diagnosed compared to adults. They could help inform mathematical modeling of COVID-19 dynamics, public health policy, and control measures. Future computational research could explore transmission dynamics in other settings, such as nursing homes and schools.
    “When we began this research, understanding children’s role in transmission was a top priority, in connection with the question of reopening schools,” Dattner says. “It was exciting to work in a large, multidisciplinary team, which was assembled by the Israeli Ministry of Health to address this topic rapidly.”

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Nanowire could provide a stable, easy-to-make superconducting transistor

    Superconductors — materials that conduct electricity without resistance — are remarkable. They provide a macroscopic glimpse into quantum phenomena, which are usually observable only at the atomic level. Beyond their physical peculiarity, superconductors are also useful. They’re found in medical imaging, quantum computers, and cameras used with telescopes.
    But superconducting devices can be finicky. Often, they’re expensive to manufacture and prone to error from environmental noise. That could change, thanks to research from Karl Berggren’s group in MIT’s Department of Electrical Engineering and Computer Science.
    The researchers are developing a superconducting nanowire, which could enable more efficient superconducting electronics. The nanowire’s potential benefits derive from its simplicity, says Berggren. “At the end of the day, it’s just a wire.”
    Berggren will present a summary of the research at this month’s IEEE Solid-State Circuits Conference.
    Resistance is futile
    Many metals lose their electrical resistance and become superconducting at extremely low temperatures, usually just a few degrees above absolute zero. Superconductors are used to sense magnetic fields, especially in highly sensitive situations like monitoring brain activity. They also have applications in both quantum and classical computing.

    Underlying many of these superconductors is a device invented in the 1960s called the Josephson junction — essentially two superconductors separated by a thin insulator. “That’s what led to conventional superconducting electronics, and then ultimately to the superconducting quantum computer,” says Berggren.
    However, the Josephson junction “is fundamentally quite a delicate object,” Berggren adds. That translates directly into cost and complexity of manufacturing, especially for the thin insulating layer. Josephson junction-based superconductors also may not play well with others: “If you try to interface it with conventional electronics, like the kinds in our phones or computers, the noise from those just swamps the Josephson junction. So, this lack of ability to control larger-scale objects is a real disadvantage when you’re trying to interact with the outside world.”
    To overcome these disadvantages, Berggren is developing a new technology — the superconducting nanowire — with roots older than the Josephson junction itself.
    Cryotron reboot
    In 1956, MIT electrical engineer Dudley Buck published a description of a superconducting computer switch called the cryotron. The device was little more than two superconducting wires: One was straight, and the other was coiled around it. The cryotron acts as a switch, because when current flows through the coiled wire, its magnetic field reduces the current flowing through the straight wire.

    At the time, the cryotron was much smaller than other types of computing switches, like vacuum tubes or transistors, and Buck thought the cryotron could become the building block of computers. But in 1959, Buck died suddenly at age 32, halting the development of the cryotron. (Since then, transistors have been scaled to microscopic sizes and today make up the core logic components of computers.)
    Now, Berggren is rekindling Buck’s ideas about superconducting computer switches. “The devices we’re making are very much like cryotrons in that they don’t require Josephson junctions,” he says. He dubbed his superconducting nanowire device the nano-cryotron in tribute to Buck — though it works a bit differently than the original cryotron.
    The nano-cryotron uses heat to trigger a switch, rather than a magnetic field. In Berggren’s device, current runs through a superconducting, supercooled wire called the “channel.” That channel is intersected by an even smaller wire called a “choke” — like a multilane highway intersected by a side road. When current is sent through the choke, its superconductivity breaks down and it heats up. Once that heat spreads from the choke to the main channel, it causes the main channel to also lose its superconducting state.
    Berggren’s group has already demonstrated proof-of-concept for the nano-cryotron’s use as an electronic component. A former student of Berggren’s, Adam McCaughan, developed a device that uses nano-cryotrons to add binary digits. And Berggren has successfully used nano-cryotrons as an interface between superconducting devices and classical, transistor-based electronics.
    Berggren says his group’s superconducting nanowire could one day complement — or perhaps compete with — Josephson junction-based superconducting devices. “Wires are relatively easy to make, so it may have some advantages in terms of manufacturability,” he says.
    He thinks the nano-cryotron could one day find a home in superconducting quantum computers and supercooled electronics for telescopes. Wires have low power dissipation, so they may also be handy for energy-hungry applications, he says. “It’s probably not going to replace the transistors in your phone, but if it could replace the transistor in a server farm or data center? That would be a huge impact.”
    Beyond specific applications, Berggren takes a broad view of his work on superconducting nanowires. “We’re doing fundamental research, here. While we’re interested in applications, we’re just also interested in: What are some different kinds of ways to do computing? As a society, we’ve really focused on semiconductors and transistors. But we want to know what else might be out there.”
    Initial funding for nano-cryotron research in the Berggren lab was provided by the National Science Foundation.