More stories

  • Machine learning takes on synthetic biology: algorithms can bioengineer cells for you

    If you’ve eaten vegan burgers that taste like meat or used synthetic collagen in your beauty routine — both products that are “grown” in the lab — then you’ve benefited from synthetic biology. It’s a field brimming with potential: it allows scientists to design biological systems to specification, such as engineering a microbe to produce a cancer-fighting agent. Yet conventional methods of bioengineering are slow and laborious, with trial and error being the main approach.
    Now scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new tool that adapts machine learning algorithms to the needs of synthetic biology to guide development systematically. The innovation means scientists will not have to spend years developing a meticulous understanding of each part of a cell and what it does in order to manipulate it; instead, with a limited set of training data, the algorithms are able to predict how changes in a cell’s DNA or biochemistry will affect its behavior, then make recommendations for the next engineering cycle along with probabilistic predictions for attaining the desired goal.
    “The possibilities are revolutionary,” said Hector Garcia Martin, a researcher in Berkeley Lab’s Biological Systems and Engineering (BSE) Division who led the research. “Right now, bioengineering is a very slow process. It took 150 person-years to create the antimalarial drug artemisinin. If you’re able to create new cells to specification in a couple of weeks or months instead of years, you could really revolutionize what you can do with bioengineering.”
    Working with BSE data scientist Tijana Radivojevic and an international group of researchers, the team developed and demonstrated a patent-pending algorithm called the Automated Recommendation Tool (ART), described in a pair of papers recently published in the journal Nature Communications. Machine learning allows computers to make predictions after “learning” from substantial amounts of available “training” data.
    In “ART: A machine learning Automated Recommendation Tool for synthetic biology,” led by Radivojevic, the researchers presented the algorithm, which is tailored to the particularities of the synthetic biology field: small training data sets, the need to quantify uncertainty, and recursive cycles. The tool’s capabilities were demonstrated with simulated and historical data from previous metabolic engineering projects, such as improving the production of renewable biofuels.
    In “Combining mechanistic and machine learning models for predictive engineering and optimization of tryptophan metabolism,” the team used ART to guide the metabolic engineering process to increase the production of tryptophan, an amino acid with various uses, by a species of yeast called Saccharomyces cerevisiae, or baker’s yeast. The project was led by Jie Zhang and Soren Petersen of the Novo Nordisk Foundation Center for Biosustainability at the Technical University of Denmark, in collaboration with scientists at Berkeley Lab and Teselagen, a San Francisco-based startup company.
    To conduct the experiment, they selected five genes, each controlled by different gene promoters and other mechanisms within the cell and representing, in total, nearly 8,000 potential combinations of biological pathways. The researchers in Denmark then obtained experimental data on 250 of those pathways, representing just 3% of all possible combinations, and those data were used to train the algorithm. In other words, ART learned what output (amino acid production) is associated with what input (gene expression).
    Then, using statistical inference, the tool was able to extrapolate how each of the remaining 7,000-plus combinations would affect tryptophan production. The design it ultimately recommended increased tryptophan production by 106% over the state-of-the-art reference strain and by 17% over the best designs used for training the model.
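    As a rough sketch of this workflow (not Berkeley Lab’s patent-pending ART code — the encoding, numbers, and model below are made up for illustration), one can train a probabilistic surrogate on a small measured subset of a combinatorial design space and rank the untested designs by predicted production and uncertainty:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical encoding: 5 genes, each at one of 6 promoter-strength
    # levels, giving 6**5 = 7,776 designs (the article's "nearly 8,000").
    designs = np.array(np.meshgrid(*[range(6)] * 5)).T.reshape(-1, 5)

    # Pretend 250 designs (about 3%) were measured in the lab; the fake
    # "titers" below stand in for the real production measurements.
    measured = rng.choice(len(designs), size=250, replace=False)
    X_train = designs[measured].astype(float)
    y_train = rng.normal(X_train.sum(axis=1), 1.0)

    # An ensemble yields a mean prediction plus a spread across trees --
    # a crude stand-in for ART's probabilistic predictions from small data.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    untested = np.setdiff1d(np.arange(len(designs)), measured)
    per_tree = np.stack([t.predict(designs[untested].astype(float))
                         for t in model.estimators_])
    mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)

    # Recommend the most promising untested designs for the next cycle.
    top = np.argsort(mean)[::-1][:5]
    for d, m, s in zip(designs[untested[top]], mean[top], std[top]):
        print(d, f"predicted {m:.1f} +/- {s:.1f}")
    ```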
    “This is a clear demonstration that bioengineering led by machine learning is feasible, and disruptive if scalable. We did it for five genes, but we believe it could be done for the full genome,” said Garcia Martin, who is a member of the Agile BioFoundry and also the Director of the Quantitative Metabolic Modeling team at the Joint BioEnergy Institute (JBEI), a DOE Bioenergy Research Center; both supported a portion of this work. “This is just the beginning. With this, we’ve shown that there’s an alternative way of doing metabolic engineering. Algorithms can automatically perform the routine parts of research while you devote your time to the more creative parts of the scientific endeavor: deciding on the important questions, designing the experiments, and consolidating the obtained knowledge.”
    More data needed
    The researchers say they were surprised by how little data was needed to obtain results. Yet to truly realize synthetic biology’s potential, they say the algorithms will need to be trained with much more data. Garcia Martin describes synthetic biology as being only in its infancy — the equivalent of where the Industrial Revolution was in the 1790s. “It’s only by investing in automation and high-throughput technologies that you’ll be able to leverage the data needed to really revolutionize bioengineering,” he said.
    Radivojevic added: “We provided the methodology and a demonstration on a small dataset; potential applications might be revolutionary given access to large amounts of data.”
    The unique capabilities of national labs
    Besides the dearth of experimental data, Garcia Martin says the other limitation is human capital — specifically, machine learning experts. Given the explosion of data in our world today, many fields and companies are competing for a limited number of experts in machine learning and artificial intelligence.
    Garcia Martin notes that knowledge of biology is not an absolute prerequisite for researchers embedded in the kind of team environment the national labs provide. Radivojevic, for example, has a doctorate in applied mathematics and no background in biology. “In two years here, she was able to productively collaborate with our multidisciplinary team of biologists, engineers, and computer scientists and make a difference in the synthetic biology field,” he said. “In the traditional ways of doing metabolic engineering, she would have had to spend five or six years just learning the needed biological knowledge before even starting her own independent experiments.”
    “The national labs provide the environment where specialization and standardization can prosper and combine in the large multidisciplinary teams that are their hallmark,” Garcia Martin said.
    Synthetic biology has the potential to make significant impacts in almost every sector: food, medicine, agriculture, climate, energy, and materials. The global synthetic biology market is currently estimated at around $4 billion and has been forecast to grow to more than $20 billion by 2025, according to various market reports.
    “If we could automate metabolic engineering, we could strive for more audacious goals. We could engineer microbiomes for therapeutic or bioremediation purposes. We could engineer microbiomes in our gut to produce drugs to treat autism, for example, or microbiomes in the environment that convert waste to biofuels,” Garcia Martin said. “The combination of machine learning and CRISPR-based gene editing enables much more efficient convergence to desired specifications.”

  • Spin clean-up method brings practical quantum computers closer to reality

    Quantum computers are the new frontier in advanced research technology, with potential applications such as performing critical calculations, protecting financial assets, or predicting molecular behavior in pharmaceuticals. Researchers from Osaka City University have now solved a major problem standing between large-scale quantum computers and practical use: making precise and accurate predictions of atomic and molecular behavior.
    They published their method to remove extraneous information from quantum chemical calculations on Sept. 17 as an advanced online article in Physical Chemistry Chemical Physics, a journal of the Royal Society of Chemistry.
    “One of the most anticipated applications of quantum computers is electronic structure simulations of atoms and molecules,” said paper authors Kenji Sugisaki, a lecturer, and Takeji Takui, a professor emeritus, both in the Department of Chemistry and Molecular Materials Science in Osaka City University’s Graduate School of Science.
    Quantum chemical calculations are ubiquitous across scientific disciplines, including pharmaceutical therapy development and materials research. All of these calculations are based on solving physicist Erwin Schrödinger’s equation, which describes the state of a quantum-mechanical system in terms of the electronic and molecular interactions that give rise to its properties.
    “Schrödinger equations govern any behavior of electrons in molecules, including all chemical properties of molecules and materials, including chemical reactions,” Sugisaki and Takui said.
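    For reference, the time-independent form of the equation (a standard statement, included here for orientation rather than quoted from the paper) reads:

    ```latex
    \hat{H}\,\Psi = E\,\Psi
    ```

    where the Hamiltonian \(\hat{H}\) encodes the electronic and nuclear interactions in the molecule, \(\Psi\) is the wave function, and \(E\) is an allowed energy of the system.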
    On classical computers, solving these equations exactly would take exponential time. On quantum computers, this precision is achievable in realistic time, but the calculations require “cleaning” along the way to recover the true nature of the system, the researchers said.
    The state of a quantum system at a specific moment in time, described by its wave function, has a property called spin, which is the total of the spins of each electron in the system. Due to hardware faults or mathematical errors, contributions with the wrong spin can creep into the system’s spin calculation. To remove these ‘spin contaminants,’ the researchers implemented an algorithm that allows them to select the desired spin quantum number. This purifies the spin, removing contaminants during each calculation — a first on quantum computers, according to the authors.
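    The idea can be illustrated classically. The numpy toy below is our sketch, not the authors’ algorithm (which performs the purification on quantum hardware): it builds the total-spin operator S² for two electrons and projects a contaminated state onto the desired spin sector — here the singlet, whose S² eigenvalue s(s+1) is zero.

    ```python
    import numpy as np

    # Spin-1/2 operators (hbar = 1)
    sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    # Total-spin operator S^2 = (S1 + S2)^2 for two electrons
    S = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
    S2 = sum(Si @ Si for Si in S)

    # Projector onto the singlet sector (eigenvalue s(s+1) = 0)
    evals, evecs = np.linalg.eigh(S2)
    sector = evecs[:, np.isclose(evals, 0.0)]
    P = sector @ sector.conj().T

    # A singlet state polluted by a triplet "spin contaminant"
    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    triplet = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
    psi = singlet + 0.3 * triplet
    psi /= np.linalg.norm(psi)

    purified = P @ psi
    purified /= np.linalg.norm(purified)
    print("<S^2> before:", np.real(psi.conj() @ S2 @ psi))            # ~0.17
    print("<S^2> after: ", np.real(purified.conj() @ S2 @ purified))  # ~0
    ```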
    “Quantum chemical calculations based on exactly solving Schrödinger equations for any behavior of atoms and molecules can afford predictions of their physical-chemical properties and complete interpretations of chemical reactions and processes,” they said, noting that this is not possible with currently available classical computers and algorithms. “The present paper has given a solution by implementing a quantum algorithm on quantum computers.”
    The researchers next plan to develop and implement algorithms designed to determine the state of electrons in molecules with the same accuracy for both excited- and ground-state electrons.

    Story Source:
    Materials provided by Osaka City University.

  • How deep learning can advance study of neural degeneration

    Researchers from North Carolina State University have demonstrated the utility of artificial intelligence (AI) in identifying and categorizing neural degeneration in the model organism C. elegans. The tool uses deep learning, a form of AI, and should facilitate and expedite research into neural degeneration.
    “Researchers want to study the mechanisms that drive neural degeneration, with the long-term goal of finding ways to slow or prevent the degeneration associated with age or disease,” says Adriana San Miguel, corresponding author of a paper on the work and an assistant professor of chemical and biomolecular engineering at NC State. “Our work here shows that deep learning can accurately identify physical symptoms of neural degeneration; can do it more quickly than humans; and can distinguish between neural degeneration caused by different factors.
    “Having tools that allow us to identify these patterns of neural degeneration will help us determine the role that different genes play in these processes,” San Miguel says. “It will also help us evaluate the effect of various pharmaceutical interventions on neural degeneration in the model organism. This is one way we can identify promising candidates for therapeutic drugs to address neurological disorders.”
    For this study, researchers focused on C. elegans, or roundworm, which is a model organism widely used to study aging and the development of the nervous system. Specifically, the researchers focused on PVD neurons, which are nerve cells that can detect both touch and temperature. The researchers chose the PVD neuron because it is found throughout the nervous system of C. elegans and it is known to degenerate due to aging.
    Roundworms are tiny and transparent — meaning that it is possible to see their nervous systems while they are still alive. Traditionally, identifying degeneration in C. elegans neurons requires researchers to look for microscopic changes in the cell, such as the appearance of bubbles that form on parts of individual neurons. Researchers can analyze the extent of neural degeneration by tracking the size, number and location of these bubbles.
    “Counting these bubbles is a time-consuming and labor-intensive process,” says Kevin Flores, co-author of the study and an assistant professor of mathematics at NC State. “We’ve demonstrated that we can collect all of the relevant data from an image in a matter of seconds, by combining the power of deep learning with the advanced speed of so-called GPU computing. This enables a much faster quantitative assessment of neuronal degeneration than traditional techniques.”
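    A minimal sketch of this kind of pipeline is shown below; the tiny network, the BubbleNet name, and the 0.5 threshold are illustrative stand-ins, not the NC State group’s published model. A trained segmenter marks candidate bubbles per pixel, and connected-component labeling then counts and sizes them:

    ```python
    import torch
    import torch.nn as nn
    from scipy import ndimage

    class BubbleNet(nn.Module):
        """Toy network mapping a grayscale micrograph to per-pixel
        'bubble' probabilities; a stand-in for a trained segmenter."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU speed-up
    model = BubbleNet().to(device).eval()

    image = torch.rand(1, 1, 256, 256, device=device)  # placeholder micrograph
    with torch.no_grad():
        mask = (model(image) > 0.5).cpu().numpy()[0, 0]

    # Count and size the detected bubbles via connected components
    labels, n_bubbles = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n_bubbles + 1))
    print(n_bubbles, "bubbles; sizes (px):", sizes[:5])
    ```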
    In addition to monitoring the effects of age on neural degeneration, the researchers also examined the effects of “cold shock,” or prolonged exposure to low temperatures. The researchers were surprised to learn that cold shock could also induce neural degeneration.
    “We also found that neural degeneration caused by cold shock had a different pattern of bubbles than the degeneration caused by aging,” San Miguel says. “It is difficult or impossible to distinguish the difference with the naked eye, but the deep learning program found it consistently.
    “This work tells us that deep learning tools are able to spot patterns we may be missing — and we may be just scratching the surface of their utility in advancing our understanding of neural degeneration.”

    Story Source:
    Materials provided by North Carolina State University.

  • Metal wires of carbon complete toolbox for carbon-based computers

    Transistors based on carbon rather than silicon could potentially boost computers’ speed and cut their power consumption more than a thousandfold — think of a mobile phone that holds its charge for months — but the set of tools needed to build working carbon circuits has remained incomplete until now.
    A team of chemists and physicists at the University of California, Berkeley, has finally created the last tool in the toolbox, a metallic wire made entirely of carbon, setting the stage for a ramp-up in research to build carbon-based transistors and, ultimately, computers.
    “Staying within the same material, within the realm of carbon-based materials, is what brings this technology together now,” said Felix Fischer, UC Berkeley professor of chemistry, noting that the ability to make all circuit elements from the same material makes fabrication easier. “That has been one of the key things that has been missing in the big picture of an all-carbon-based integrated circuit architecture.”
    Metal wires — like the metallic channels used to connect transistors in a computer chip — carry electricity from device to device and interconnect the semiconducting elements within transistors, the building blocks of computers.
    The UC Berkeley group has been working for several years on how to make semiconductors and insulators from graphene nanoribbons, which are narrow, one-dimensional strips of atom-thick graphene, a structure composed entirely of carbon atoms arranged in an interconnected hexagonal pattern resembling chicken wire.
    The new carbon-based metal is also a graphene nanoribbon, but designed with an eye toward conducting electrons between semiconducting nanoribbons in all-carbon transistors. The metallic nanoribbons were built by assembling them from smaller identical building blocks: a bottom-up approach, said Fischer’s colleague, Michael Crommie, a UC Berkeley professor of physics. Each building block contributes an electron that can flow freely along the nanoribbon.
    While other carbon-based materials — like extended 2D sheets of graphene and carbon nanotubes — can be metallic, they have their problems. Reshaping a 2D sheet of graphene into nanometer-scale strips, for example, spontaneously turns the strips into semiconductors, or even insulators. Carbon nanotubes, which are excellent conductors, cannot be prepared with the same precision and reproducibility in large quantities as nanoribbons.
    “Nanoribbons allow us to chemically access a wide range of structures using bottom-up fabrication, something not yet possible with nanotubes,” Crommie said. “This has allowed us to basically stitch electrons together to create a metallic nanoribbon, something not done before. This is one of the grand challenges in the area of graphene nanoribbon technology and why we are so excited about it.”
    Metallic graphene nanoribbons — which feature a wide, partially-filled electronic band characteristic of metals — should be comparable in conductance to 2D graphene itself.
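    A partially filled band is the defining signature of a metal. As a generic illustration (a textbook one-dimensional tight-binding chain, not the nanoribbon’s actual band structure), one conduction electron per building block half-fills a cosine band, leaving gapless states at the Fermi level to carry current:

    ```python
    import numpy as np

    # 1D tight-binding band E(k) = -2 t cos(k a); t and a are illustrative.
    t, a = 1.0, 1.0
    k = np.linspace(-np.pi / a, np.pi / a, 1001)
    E = -2 * t * np.cos(k * a)

    # One electron per site -> band half filled -> Fermi level at E_F = 0
    E_F = 0.0
    filled = (E <= E_F).mean()
    print(f"band spans [{E.min():.1f}, {E.max():.1f}]; "
          f"{filled:.0%} filled, gapless at E_F: a metal")
    ```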
    “We think that the metallic wires are really a breakthrough; it is the first time that we can intentionally create an ultra-narrow metallic conductor — a good, intrinsic conductor — out of carbon-based materials, without the need for external doping,” Fischer added.
    Crommie, Fischer and their colleagues at UC Berkeley and Lawrence Berkeley National Laboratory (Berkeley Lab) will publish their findings in the Sept. 25 issue of the journal Science.
    Tweaking the topology
    Silicon-based integrated circuits have powered computers for decades with ever-increasing speed and performance, per Moore’s Law, but they are reaching their speed limit — that is, how fast they can switch between zeros and ones. It’s also becoming harder to reduce power consumption; computers already use a substantial fraction of the world’s energy production. Carbon-based computers could potentially switch many times faster than silicon computers and use only a fraction of the power, Fischer said.
    Graphene, which is pure carbon, is a leading contender for these next-generation, carbon-based computers. Narrow strips of graphene are primarily semiconductors, however, and the challenge has been to make them also work as insulators and metals — opposite extremes, totally nonconducting and fully conducting, respectively — so as to construct transistors and processors entirely from carbon.
    Several years ago, Fischer and Crommie teamed up with theoretical materials scientist Steven Louie, a UC Berkeley professor of physics, to discover new ways of connecting small lengths of nanoribbon to reliably create the full gamut of conducting properties.
    Two years ago, the team demonstrated that by connecting short segments of nanoribbon in the right way, electrons in each segment could be arranged to create a new topological state — a special quantum wave function — leading to tunable semiconducting properties.
    In the new work, they use a similar technique to stitch together short segments of nanoribbons to create a conducting metal wire tens of nanometers long and barely a nanometer wide.
    The nanoribbons were created chemically and imaged on very flat surfaces using a scanning tunneling microscope. Simple heat was used to induce the molecules to chemically react and join together in just the right way. Fischer compares the assembly of daisy-chained building blocks to a set of Legos, but Legos designed to fit at the atomic scale.
    “They are all precisely engineered so that there is only one way they can fit together. It’s as if you take a bag of Legos, and you shake it, and out comes a fully assembled car,” he said. “That is the magic of controlling the self-assembly with chemistry.”
    Once assembled, the new nanoribbon’s electronic state was a metal — just as Louie predicted — with each segment contributing a single conducting electron.
    The final breakthrough can be attributed to a minute change in the nanoribbon structure.
    “Using chemistry, we created a tiny change, a change in just one chemical bond per roughly every 100 atoms, which increased the metallicity of the nanoribbon by a factor of 20, and that is important, from a practical point of view, to make this a good metal,” Crommie said.
    The two researchers are working with electrical engineers at UC Berkeley to assemble their toolbox of semiconducting, insulating and metallic graphene nanoribbons into working transistors.
    “I believe this technology will revolutionize how we build integrated circuits in the future,” Fischer said. “It should take us a big step up from the best performance that can be expected from silicon right now. We now have a path to access faster switching speeds at much lower power consumption. That is what is driving the push toward a carbon-based electronics semiconductor industry in the future.”
    Co-lead authors of the paper are Daniel Rizzo and Jingwei Jiang from UC Berkeley’s Department of Physics and Gregory Veber from the Department of Chemistry. Other co-authors are Steven Louie, Ryan McCurdy, Ting Cao, Christopher Bronner and Ting Chen of UC Berkeley. Jiang, Cao, Louie, Fischer and Crommie are affiliated with Berkeley Lab, while Fischer and Crommie are members of the Kavli Energy NanoSciences Institute.
    The research was supported by the Office of Naval Research, the Department of Energy, the Center for Energy Efficient Electronics Science and the National Science Foundation.

  • New possibilities for working with quantum information

    Small particles can have an angular momentum that points in a certain direction — the spin. This spin can be manipulated by a magnetic field. This principle, for example, is the basic idea behind magnetic resonance imaging as used in hospitals. An international research team has now discovered a surprising effect in a system that is particularly well suited for processing quantum information: the spins of phosphorus atoms in a piece of silicon, coupled to a microwave resonator. If these spins are cleverly excited with microwave pulses, a so-called spin echo signal can be detected after a certain time — the injected pulse signal is re-emitted as a quantum echo. Surprisingly, this spin echo does not occur only once, but a whole series of echoes can be detected. This opens up new possibilities of how information can be processed with quantum systems.
    The experiments were carried out at the Walther-Meissner-Institute in Garching by researchers from the Bavarian Academy of Sciences and Humanities and the Technical University of Munich, the theoretical explanation was developed at TU Wien (Vienna). Now the joint work has been published in the journal Physical Review Letters.
    The echo of quantum spins
    “Spin echoes have been known for a long time; this is nothing unusual,” says Prof. Stefan Rotter from TU Wien (Vienna). First, a magnetic field is used to make sure that the spins of many atoms point in the same direction. Then the atoms are irradiated with an electromagnetic pulse, and suddenly their spins begin to change direction.
    However, the atoms are embedded in slightly different environments. It is therefore possible that slightly different forces act on their spins. “As a result, the spin does not change at the same speed for all atoms,” explains Dr. Hans Hübl from the Bavarian Academy of Sciences and Humanities. “Some particles change their spin direction faster than others, and soon you have a wild jumble of spins with completely different orientations.”
    But it is possible to rewind this apparent chaos — with the help of another electromagnetic pulse. A suitable pulse can reverse the previous spin rotation so that the spins all come together again. “You can imagine it’s a bit like running a marathon,” says Stefan Rotter. “At the start signal, all the runners are still together. As some runners are faster than others, the field of runners is pulled further and further apart over time. However, if all runners were now given the signal to return to the start, all runners would return to the start at about the same time, although faster runners have to cover a longer distance back than slower ones.”
    In the case of spins, this means that at a certain point in time all particles have exactly the same spin direction again — and this is called the “spin echo.” “Based on our experience in this field, we had already expected to be able to measure a spin echo in our experiments,” says Hans Hübl. “The remarkable thing is that we were not only able to measure a single echo, but a series of several echoes.”
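    The marathon picture translates directly into a toy simulation. The sketch below is our illustration, not the Garching experiment’s model, and it omits the resonator feedback responsible for the additional echoes: an ensemble of spins dephases at random rates, a refocusing pulse at time tau inverts the accumulated phases, and the net signal revives at 2*tau.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    detunings = rng.normal(0.0, 1.0, 5000)  # spread of precession rates
    tau = 3.0                               # time of the refocusing pulse

    def signal(t):
        """|average transverse magnetization| with a phase flip at tau."""
        phase = np.where(t < tau, detunings * t, detunings * (t - 2 * tau))
        return abs(np.mean(np.exp(1j * phase)))

    for t in [0.0, 1.0, 3.0, 6.0, 7.0]:
        print(f"t = {t:.1f}: signal = {signal(t):.3f}")
    # The signal dephases after t = 0 and refocuses to ~1 at t = 2*tau = 6.0
    ```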
    The spin that influences itself
    At first, it was unclear how this novel effect comes about. But a detailed theoretical analysis now made it possible to understand the phenomenon: It is due to the strong coupling between the two components of the experiment — the spins and the photons in a microwave resonator, an electrical circuit in which microwaves can only exist at certain wavelengths. “This coupling is the essence of our experiment: You can store information in the spins, and with the help of the microwave photons in the resonator you can modify it or read it out,” says Hans Hübl.
    The strong coupling between the atomic spins and the microwave resonator is also responsible for the multiple echoes: If the spins of the atoms all point in the same direction in the first echo, this produces an electromagnetic signal. “Thanks to the coupling to the microwave resonator, this signal acts back on the spins, and this leads to another echo — and on and on,” explains Stefan Rotter. “The spins themselves cause the electromagnetic pulse, which is responsible for the next echo.”
    The physics of the spin echo has great significance for technical applications — it is an important basic principle behind magnetic resonance imaging. The new possibilities offered by the multiple echoes, such as the processing of quantum information, will now be examined in more detail. “For sure, multiple echoes in spin ensembles coupled strongly to the photons of a resonator are an exciting new tool. It will not only find useful applications in quantum information technology, but also in spin-based spectroscopy methods,” says Rudolf Gross, co-author and director of the Walther-Meissner-Institute.

    Story Source:
    Materials provided by Vienna University of Technology. Original written by Florian Aigner.

  • A question of reality

    Physicist Reinhold Bertlmann of the University of Vienna, Austria, has published a review of the work of his late long-term collaborator John Stewart Bell of CERN, Geneva, in EPJ H. This review, ‘Real or Not Real: that is the question’, explores Bell’s inequalities and his concepts of reality and explains their relevance to quantum information and its applications.
    John Stewart Bell’s eponymous theorem and inequalities set out, mathematically, the contrast between quantum mechanical theories and local realism. They are used in quantum information, which has evolving applications in security, cryptography and quantum computing.
    The distinguished quantum physicist John Stewart Bell (1928-1990) is best known for the eponymous theorem that proved current understanding of quantum mechanics to be incompatible with local hidden variable theories. Thirty years after his death, his long-standing collaborator Reinhold Bertlmann of the University of Vienna, Austria, has reviewed his thinking in a paper for EPJ H, ‘Real or Not Real: That is the question’. In this historical and personal account, Bertlmann aims to introduce his readers to Bell’s concepts of reality and contrast them with some of his own ideas of virtuality.
    Bell spent most of his working life at CERN in Geneva, Switzerland, and Bertlmann first met him when he took up a short-term fellowship there in 1978. Bell had first presented his theorem in a seminal paper published in 1964, but this was largely neglected until the 1980s and the introduction of quantum information.
    Bertlmann discusses the concept of Bell inequalities, which arise through thought experiments in which a pair of spin-½ particles propagate in opposite directions and are measured by independent observers, Alice and Bob. The Bell inequality distinguishes between local realism — the ‘common sense’ view in which Alice’s observations do not depend on Bob’s, and vice versa — and quantum mechanics, or, specifically, quantum entanglement. Two quantum particles, such as those in the Alice-Bob situation, are entangled when the state measured by one observer instantaneously influences that of the other. This theory is the basis of quantum information.
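    This can be made concrete with the CHSH form of the inequality. The numeric check below is a standard textbook calculation, not code from the review: with the singlet-state correlation E(a, b) = -cos(a - b), suitably chosen measurement angles push the CHSH combination to 2√2, past the local-realist bound of 2.

    ```python
    import numpy as np

    # Quantum correlation for the singlet state when Alice and Bob measure
    # spins along directions at angles a and b
    def E(a, b):
        return -np.cos(a - b)

    a, a_p = 0.0, np.pi / 2            # Alice's two settings
    b, b_p = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

    S = abs(E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p))
    print(f"S = {S:.3f} vs local-realist limit 2")  # 2.828 = 2*sqrt(2)
    ```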
    And quantum information is no longer just an abstruse theory. It is finding applications in fields as diverse as security protocols, cryptography and quantum computing. “Bell’s scientific legacy can be seen in these, as well as in his contributions to quantum field theory,” concludes Bertlmann. “And he will also be remembered for his critical thought, honesty, modesty and support for the underprivileged.”

    Story Source:
    Materials provided by Springer.

  • A self-erasing chip for security and anti-counterfeit tech

    Self-erasing chips developed at the University of Michigan could help stop counterfeit electronics or provide alerts if sensitive shipments are tampered with.
    They rely on a new material that temporarily stores energy, changing the color of the light it emits. It self-erases in a matter of days, or it can be erased on demand with a flash of blue light.
    “It’s very hard to detect whether a device has been tampered with. It may operate normally, but it may be doing more than it should, sending information to a third party,” said Parag Deotare, assistant professor of electrical engineering and computer science.
    With a self-erasing bar code printed on the chip inside the device, the owner could get a hint if someone had opened it to secretly install a listening device. Or a bar code could be written and placed on integrated circuit chips or circuit boards, for instance, to prove that they hadn’t been opened or replaced on their journeys. Likewise, if the lifespan of the bar codes was extended, they could be written into devices as hardware analogues of software authorization keys.
    The self-erasing chips are built from a three-atom-thick layer of semiconductor laid atop a thin film of molecules based on azobenzenes — a kind of molecule that shrinks in reaction to UV light. Those molecules tug on the semiconductor in turn, causing it to emit slightly longer wavelengths of light.
    To read the message, you have to be looking at it with the right kind of light. Che-Hsuan Cheng, a doctoral student in material science and engineering in Deotare’s group and the first author on the study in Advanced Optical Materials, is most interested in its application as self-erasing invisible ink for sending secret messages.
    The stretched azobenzene naturally gives up its stored energy over the course of about seven days in the dark — a time that can be shortened with exposure to heat and light, or lengthened if stored in a cold, dark place. Whatever was written on the chip, be it an authentication bar code or a secret message, would disappear when the azobenzene stopped stretching the semiconductor. Alternatively, it can be erased all at once with a flash of blue light. Once erased, the chip can record a new message or bar code.
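    As a rough model of that timing — our assumptions: simple first-order decay, with “erased” taken to mean about 1% of the signal left after seven days in the dark; the rates are illustrative, not measured values — the fading of a written pattern can be sketched as:

    ```python
    import numpy as np

    t_erase_days = 7.0                     # the article's dark-storage figure
    k_dark = -np.log(0.01) / t_erase_days  # decay rate so ~1% remains at 7 d

    def signal(t_days, k=k_dark):
        """Fraction of the written pattern remaining after t_days."""
        return np.exp(-k * t_days)

    for d in [0, 1, 3, 7]:
        print(f"day {d}: {signal(d):.1%} of the pattern left")
    # Heat or light would raise k (faster erasure); cold, dark storage lowers it.
    ```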
    The semiconductor itself is a “beyond graphene” material, said Deotare, as it has many similarities with the Nobel Prize-winning nanomaterial. But it can also do something graphene can’t: It emits light at particular frequencies.
    The research team included the group of Jinsang Kim, professor of material science and engineering. Da Seul Yang, a doctoral student in macromolecular science and engineering, designed and made the molecules. Cheng then floated a single layer of the molecules on water and dipped a silicon wafer into the water to coat it with the molecules.
    Then, the chip went to Deotare’s lab to be layered with the semiconductor. Using the “Scotch tape” method, Cheng essentially put sticky tape on a chunk of the semiconductor, tungsten diselenide, and used it to draw off single layers of the material: a sandwich of a single layer of tungsten atoms between two layers of selenium atoms. He used a kind of stamp to transfer the semiconductor onto the azobenzene-coated chip.
    Next steps for the research include extending the amount of time that the material can keep the message intact for use as an anti-counterfeit measure.
    The research is funded by the Air Force Office of Scientific Research. Kim is also a professor of chemical engineering, biomedical engineering, macromolecular science and engineering, and chemistry.
    The University of Michigan has applied for patent protection and is seeking commercial partners to help bring the technology to market.

  • Bridging the gap between the magnetic and electronic properties of topological insulators

    Scientists at Tokyo Institute of Technology (Tokyo Tech) have shed light on the relationship between the magnetic properties of topological insulators and their electronic band structure. Their experimental results offer new insights into recent debates regarding the evolution of the band structure with temperature in these materials, which exhibit unusual quantum phenomena and are envisioned to be crucial in next-generation electronics, spintronics, and quantum computers.
    Topological insulators have the peculiar property of being electrically conductive on the surface but insulating in their interior. This seemingly simple, unique characteristic allows these materials to host a plethora of exotic quantum phenomena that would be useful for quantum computers, spintronics, and advanced optoelectronic systems.
    To unlock some of the unusual quantum properties, however, it is necessary to induce magnetism in topological insulators. In other words, some sort of ‘order’ in how electrons in the material align with respect to each other needs to be achieved. In 2017, a novel method to achieve this feat was proposed. Termed “magnetic extension,” the technique involves inserting a monolayer of a magnetic material into the topmost layer of the topological insulator, which circumvents the problems caused by other available methods like doping with magnetic impurities.
    Unfortunately, the use of magnetic extension led to complex questions and conflicting answers regarding the electronic band structure of the resulting materials, which dictates the possible energy levels of electrons and ultimately determines the material’s conducting properties. Topological insulators are known to exhibit what is called a “Dirac cone (DC)” in their electronic band structure, which resembles two cones facing each other. In theory, the DC is ungapped for ordinary topological insulators but becomes gapped once magnetism is induced. However, experiments have yet to establish how the gap between the two cone tips correlates with the magnetic characteristics of the material.
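    For context, the surface states of an ideal topological insulator disperse as a massless Dirac cone, and inducing magnetism adds a mass term that opens the gap at issue. In the standard textbook form (not a formula taken from this paper):

    ```latex
    E(k) = \pm\sqrt{(\hbar v_F k)^2 + \Delta^2}
    ```

    so the cone is ungapped for \(\Delta = 0\), recovering \(E(k) = \pm\hbar v_F |k|\), and acquires a gap of \(2\Delta\) once magnetic order sets in.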
    In a recent effort to settle this matter, scientists from multiple universities and research institutes carried out a collaborative study led by Associate Professor Toru Hirahara from Tokyo Tech, Japan. They fabricated magnetic topological structures by depositing Mn and Te on Bi2Te3, a well-studied topological insulator. The scientists theorized that extra Mn layers would interact more strongly with Bi2Te3 and that emerging magnetic properties could be ascribed to changes in the DC gap, as Hirahara explains: “We hoped that strong interlayer magnetic interactions would lead to a situation where the correspondence between the magnetic properties and the DC gap was clear-cut compared with previous studies.”
    By examining the electronic band structures and photoemission characteristics of the samples, they demonstrated how the DC gap progressively closes as temperature increases. Additionally, they analyzed the atomic structure of their samples and found two possible configurations, MnBi2Te4/Bi2Te3 and Mn4Bi2Te7/Bi2Te3, the latter of which is responsible for the DC gap.
    However, a peculiarly puzzling finding was that the temperature at which the DC gap closes is well above the critical temperature (TC) at which the material loses its permanent magnetic ordering. This stands in stark contrast with previous studies, which indicated that the DC gap can remain open at temperatures higher than the material’s TC without ever closing. On this note, Hirahara remarks: “Our results show, for the first time, that the loss of long-range magnetic order above the TC and the DC gap closing are not correlated.”
    Though further efforts will be needed to clarify the relationship between the nature of the DC gap and magnetic properties, this study is a step in the right direction. Hopefully, a deeper understanding of these quantum phenomena will help us reap the power of topological insulators for next-generation electronics and quantum computing.

    Story Source:
    Materials provided by Tokyo Institute of Technology.