More stories

  • 'Virtual biopsies' could replace tissue biopsies in future thanks to new technique

    Cancer researchers at the University of Cambridge have developed an advanced computing technique that uses routine medical scans to enable doctors to take fewer, more accurate tumour biopsies.
    This is an important step towards precision tissue sampling for cancer patients to help select the best treatment. In future the technique could even replace clinical biopsies with ‘virtual biopsies’, sparing patients invasive procedures.
    The research published in European Radiology shows that combining computed tomography (CT) scans with ultrasound images creates a visual guide for doctors to ensure they sample the full complexity of a tumour with fewer targeted biopsies.
    Capturing the patchwork of different types of cancer cell within a tumour — known as tumour heterogeneity — is critical for selecting the best treatment because genetically-different cells may respond differently to treatment.
    Most cancer patients undergo one or several biopsies to confirm diagnosis and plan their treatment. But because this is an invasive clinical procedure, there is an urgent need to reduce the number of biopsies taken and to make sure biopsies accurately sample the genetically-different cells in the tumour, particularly for ovarian cancer patients.
    High grade serous ovarian (HGSO) cancer, the most common type of ovarian cancer, is referred to as a ‘silent killer’ because early symptoms can be difficult to pick up. By the time the cancer is diagnosed, it is often at an advanced stage, and survival rates have not changed much over the last 20 years.

    But late diagnosis isn’t the only problem. HGSO tumours tend to have a high level of tumour heterogeneity and patients with more genetically-different patches of cancer cells tend to have a poorer response to treatment.
    Professor Evis Sala from the Department of Radiology, co-lead of the CRUK Cambridge Centre Advanced Cancer Imaging Programme, leads a multi-disciplinary team of radiologists, physicists, oncologists and computational scientists that uses innovative computing techniques to reveal tumour heterogeneity from standard medical images. This new study, led by Professor Sala, involved a small group of patients with advanced ovarian cancer who were due to have ultrasound-guided biopsies before starting chemotherapy.
    For the study, the patients first had a standard-of-care CT scan. A CT scanner uses x-rays and computing to create a 3D image of the tumour from multiple image ‘slices’ through the body.
    The researchers then used a process called radiomics — using high-powered computing methods to analyse and extract additional information from the data-rich images created by the CT scanner — to identify and map distinct areas and features of the tumour. The tumour map was then superimposed on the ultrasound image of the tumour and the combined image used to guide the biopsy procedure.
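    The general shape of this kind of feature extraction can be sketched with the open-source pyradiomics toolkit. The study does not name its software, so this is purely illustrative, and the file names and feature classes below are assumptions:

    ```python
    # Illustrative radiomics feature extraction with the open-source pyradiomics
    # package; NOT the Cambridge team's pipeline. File names and the choice of
    # feature classes are assumptions for the sketch.
    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllFeatures()
    extractor.enableFeatureClassByName('firstorder')  # intensity statistics
    extractor.enableFeatureClassByName('glcm')        # grey-level texture features

    # A CT volume and a tumour segmentation mask, in any SimpleITK-readable format
    features = extractor.execute('ct_volume.nrrd', 'tumour_mask.nrrd')

    for name, value in features.items():
        if not name.startswith('diagnostics'):
            print(name, value)
    ```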
    The research team reported that targeted biopsies taken using this method successfully captured the diversity of cancer cells within the tumour.

    Co-first author Dr Lucian Beer, from the Department of Radiology and CRUK Cambridge Centre Ovarian Cancer Programme, said of the results: “Our study is a step forward to non-invasively unravel tumour heterogeneity by using standard-of-care CT-based radiomic tumour habitats for ultrasound-guided targeted biopsies.”
    Co-first author Paula Martin-Gonzalez, from the Cancer Research UK Cambridge Institute and CRUK Cambridge Centre Ovarian Cancer Programme, added: “We will now be applying this method in a larger clinical study.”
    Professor Sala said: “This study provides an important milestone towards precision tissue sampling. We are truly pushing the boundaries in translating cutting edge research to routine clinical care.”
    Fiona Barve (56) is a science teacher who lives near Cambridge. She was diagnosed with ovarian cancer in 2017 after visiting her doctor with abdominal pain. She was diagnosed with stage 4 ovarian cancer and immediately underwent surgery and a course of chemotherapy. Since March 2019 she has been cancer free and is now back to teaching three days a week.
    “I was diagnosed at a late stage and I was fortunate my surgery, which I received within four weeks of being diagnosed, and chemotherapy worked for me. I feel lucky to be around,” said Barve.
    “When you are first undergoing the diagnosis of cancer, you feel as if you are on a conveyor belt, every part of the journey being extremely stressful. This new enhanced technique will reduce the need for several procedures and allow patients more time to adjust to their circumstances. It will enable more accurate diagnosis with less invasion of the body and mind. This can only be seen as positive progress.”

  • COVID-19 unmasked: Math model suggests optimal treatment strategies

    Getting control of COVID-19 will take more than widespread vaccination; it will also require better understanding of why the disease causes no apparent symptoms in some people but leads to rapid multi-organ failure and death in others, as well as better insight into what treatments work best and for which patients.
    To meet this unprecedented challenge, researchers at Massachusetts General Hospital (MGH), in collaboration with investigators from Brigham and Women’s Hospital and the University of Cyprus, have created a mathematical model based on biology that incorporates information about the known infectious machinery of SARS-CoV-2, the virus that causes COVID-19, and about the potential mechanisms of action of various treatments that have been tested in patients with COVID-19.
    The model and its important clinical applications are described in the journal Proceedings of the National Academy of Sciences (PNAS).
    “Our model predicts that antiviral and anti-inflammatory drugs that were first employed to treat COVID-19 might have limited efficacy, depending on the stage of the disease progression,” says corresponding author Rakesh K. Jain, PhD, from the Edwin L. Steele Laboratories in the Department of Radiation Oncology at MGH and Harvard Medical School (HMS).
    Jain and colleagues found that in all patients, the viral load (the level of SARS-CoV-2 particles in the bloodstream) increases during early lung infection, but then may go in different directions starting after Day 5, depending on levels of key immune guardian cells, called T cells. T cells are the first responders of the immune system that effectively coordinate other aspects of immunity. The T cell response is known as adaptive immunity because it is flexible and responds to immediate threats.
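    The flavour of such a model can be conveyed with a much simpler, textbook-style system of differential equations for target cells, infected cells, virus and T cells. The sketch below is not the authors’ model, and every parameter value is invented for illustration:

    ```python
    # Toy within-host viral dynamics with a T-cell term, integrated with SciPy.
    # A generic textbook-style sketch, NOT the MGH/PNAS model; all parameter
    # values are arbitrary.
    from scipy.integrate import solve_ivp

    def rhs(t, y, beta, delta, p, c, s, k):
        U, I, V, E = y                # target cells, infected cells, virus, T cells
        dU = -beta * U * V            # infection of healthy target cells
        dI = beta * U * V - delta * I - k * E * I   # clearance, incl. by T cells
        dV = p * I - c * V            # virion production and decay
        dE = s * I                    # T-cell recruitment driven by infection
        return [dU, dI, dV, dE]

    params = dict(beta=2e-5, delta=0.5, p=20.0, c=5.0, s=0.05, k=1e-3)
    sol = solve_ivp(rhs, (0, 20), [1e4, 0.0, 1.0, 1.0], args=tuple(params.values()))
    print(sol.y[2].max())             # peak viral load in this toy run
    ```
    In a system like this, raising the T-cell recruitment rate s pushes the viral load down after its early peak, mirroring the divergence the authors describe after Day 5.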
    In patients younger than 35 who have healthy immune systems, a sustained recruitment of T cells occurs, accompanied by a reduction in viral load and inflammation and a decrease in nonspecific immune cells (so-called “innate” immunity). All of these processes lead to lower risk for blood clot formation and to restoring oxygen levels in lung tissues, and these patients tend to recover.

    In contrast, people who have higher levels of inflammation at the time of infection — such as those with diabetes, obesity or high blood pressure — or whose immune systems are tilted toward more active innate immune responses but less effective adaptive immune responses tend to have poor outcomes.
    The investigators also sought to answer the question of why men tend to have more severe COVID-19 compared with women, and found that although the adaptive immune response is not as vigorous in women as in men, women have lower levels of a protein called TMPRSS2 that allows SARS-CoV-2 to enter and infect normal cells.
    Based on their findings, Jain and colleagues propose that optimal treatment for older patients — who are likely to already have inflammation and impaired immune responses compared with younger patients — should include the clot-preventing drug heparin and/or the use of an immune response-modifying drug (checkpoint inhibitor) in early stages of the disease, and the anti-inflammatory drug dexamethasone at later stages.
    In patients with pre-existing conditions such as obesity, diabetes and high blood pressure or immune system abnormalities, treatment might also include drugs specifically targeted against inflammation-promoting substances (cytokines, such as interleukin-6) in the body, as well as drugs that can inhibit the renin-angiotensin system (the body’s main blood pressure control mechanism), thereby preventing activation of abnormal blood pressure and resistance to blood flow that can occur in response to viral infections.
    This work shows how tools originally developed for cancer research can be useful for understanding COVID-19: The model was first created to analyze involvement of the renin angiotensin system in the development of fibrous tissues in tumors, but was modified to include SARS-CoV-2 infection and COVID-19-specific mechanisms. The team is further developing the model and plans to use it to examine the dynamics of the immune system in response to different types of COVID-19 vaccines as well as cancer-specific comorbidities that might require special considerations for treatment.
    Co-corresponding authors are Lance L. Munn, MGH, and Triantafyllos Stylianopoulos, University of Cyprus. Other authors are Chrysovalantis Voutouri, U. Cyprus; Mohammad Reza Nikmaneshi, Sharif University of Technology, Iran; C. Corey Hardin, Melin J. Khandekar and Sayon Dutta, all from MGH; and Ankit B. Patel and Ashish Verma from Brigham and Women’s Hospital.
    Jain’s research is supported by an Investigator Award and grants from the National Foundation for Cancer Research, Jane’s Trust Foundation, American Medical Research Foundation and Harvard Ludwig Cancer Center. Munn’s research is supported by a National Institutes of Health grant. Stylianopoulos’s research is supported by the European Research Council and the Cyprus Research and Innovation Foundation. Patel is supported by an American Society of Nephrology Joseph A. Carlucci Research Fellowship.

  • Advanced materials in a snap

    If everything moved 40,000 times faster, you could eat a fresh tomato three minutes after planting a seed. You could fly from New York to L.A. in half a second. And you’d have waited in line at airport security for that flight for 30 milliseconds.
    Thanks to machine learning, designing materials for new, advanced technologies could accelerate that much.
    A research team at Sandia National Laboratories has successfully used machine learning — computer algorithms that improve themselves by learning patterns in data — to complete cumbersome materials science calculations more than 40,000 times faster than normal.
    Their results, published Jan. 4 in npj Computational Materials, could herald a dramatic acceleration in the creation of new technologies for optics, aerospace, energy storage and potentially medicine while simultaneously saving laboratories money on computing costs.
    “We’re shortening the design cycle,” said David Montes de Oca Zapiain, a computational materials scientist at Sandia who helped lead the research. “The design of components grossly outpaces the design of the materials you need to build them. We want to change that. Once you design a component, we’d like to be able to design a compatible material for that component without needing to wait for years, as it happens with the current process.”
    The research, funded by the U.S. Department of Energy’s Basic Energy Sciences program, was conducted at the Center for Integrated Nanotechnologies, a DOE user research facility jointly operated by Sandia and Los Alamos national labs.

    Machine learning speeds up computationally expensive simulations
    Sandia researchers used machine learning to accelerate a computer simulation that predicts how changing a design or fabrication process, such as tweaking the amounts of metals in an alloy, will affect a material. A project might require thousands of simulations, which can take weeks, months or even years to run.
    The team clocked a single, unaided simulation on a high-performance computing cluster with 128 processing cores (a typical home computer has two to six) at 12 minutes. With machine learning, the same simulation took 60 milliseconds using only 36 cores, equivalent to a roughly 42,000-fold speedup on comparable hardware. This means researchers can now learn in under 15 minutes what would normally take a year.
    Sandia’s new algorithm arrived at an answer within 5% of the standard simulation’s result, a very accurate prediction for the team’s purposes. Machine learning trades some accuracy for speed because it makes approximations to shortcut calculations.
    “Our machine-learning framework achieves essentially the same accuracy as the high-fidelity model but at a fraction of the computational cost,” said Sandia materials scientist Rémi Dingreville, who also worked on the project.
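    The surrogate idea itself is straightforward to sketch: run the expensive simulation enough times to build a training set, fit a regression model, then query the model instead of the simulator. The toy example below uses scikit-learn with a cheap stand-in for the simulation; the published framework is, of course, far more sophisticated:

    ```python
    # Minimal surrogate-modelling sketch with scikit-learn. The "expensive"
    # simulation is faked with a cheap analytic function; in the Sandia work it
    # would be a full microstructure-evolution simulation.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def expensive_simulation(x):
        # Stand-in for a long-running materials calculation
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.1 * x[:, 2]

    X = rng.uniform(0, 1, size=(2000, 3))   # e.g. composition/process parameters
    y = expensive_simulation(X)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X_train, y_train)

    # The surrogate now answers in microseconds what the simulator took minutes for
    print("R^2 on held-out runs:", surrogate.score(X_test, y_test))
    ```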
    Benefits could extend beyond materials
    Dingreville and Montes de Oca Zapiain are going to use their algorithm first to research ultrathin optical technologies for next-generation monitors and screens. Their research, though, could prove widely useful because the simulation they accelerated describes a common event — the change, or evolution, of a material’s microscopic building blocks over time.
    Machine learning previously has been used to shortcut simulations that calculate how interactions between atoms and molecules change over time. The published results, however, demonstrate the first use of machine learning to accelerate simulations of materials at relatively large, microscopic scales, which the Sandia team expects will be of greater practical value to scientists and engineers.
    For instance, scientists can now quickly simulate how minuscule droplets of melted metal will glob together when they cool and solidify, or conversely, how a mixture will separate into layers of its constituent parts when it melts. Many other natural phenomena, including the formation of proteins, follow similar patterns. And while the Sandia team has not tested the machine-learning algorithm on simulations of proteins, they are interested in exploring the possibility in the future.

  • Breaking through the resolution barrier with quantum-limited precision

    Researchers at Paderborn University have developed a new method of distance measurement for systems such as GPS, which achieves more precise results than ever before. Using quantum physics, the team led by Leibniz Prize winner Professor Christine Silberhorn has successfully overcome the so-called resolution limit, which causes the “noise” we may see in photos, for example. Their findings have just been published in the academic journal Physical Review X Quantum (PRX Quantum).
    Physicist Dr Benjamin Brecht explains the problem of the resolution limit: “In laser distance measurements a detector registers two light pulses of different intensities with a time difference between them. The more precise the time measurement is, the more accurately the distance can be determined. Provided the time separation between the pulses is greater than the length of the pulses themselves, this works well.” Problems arise, however, as Brecht explains, if the pulses overlap: “Then you can no longer measure the time difference using conventional methods. This is known as the ‘resolution limit’ and is a well-known effect in photos. Very small structures or textures can no longer be resolved. That’s the same problem, just with position rather than time.”
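    The classical side of the problem is easy to reproduce numerically: once two pulses overlap strongly, their summed intensity profile shows only a single peak, so the time difference can no longer be read off it. A short sketch with arbitrary pulse parameters:

    ```python
    # Numerical illustration of the classical resolution limit: two Gaussian
    # pulses separated by much less than their width give a summed intensity
    # profile with a single peak. All numbers are arbitrary.
    import numpy as np

    t = np.linspace(-10, 10, 4001)     # time axis, arbitrary units
    width, separation = 2.0, 0.2       # strongly overlapping pulses

    pulse = lambda t0: np.exp(-((t - t0) / width) ** 2)
    intensity = 1.0 * pulse(-separation / 2) + 0.7 * pulse(+separation / 2)

    # Count local maxima of the combined profile
    peaks = np.sum((intensity[1:-1] > intensity[:-2]) &
                   (intensity[1:-1] > intensity[2:]))
    print("number of peaks:", peaks)   # prints 1: the pulses are unresolved
    ```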
    A further challenge, according to Brecht, is to determine the different intensities of two light pulses, simultaneously with their time difference and the arrival time. But this is exactly what the researchers have managed to do — “with quantum-limited precision,” adds Brecht. Working with partners from the Czech Republic and Spain, the Paderborn physicists were even able to measure these values when the pulses overlapped by 90 per cent. Brecht says: “This is far beyond the resolution limit. The precision of the measurement is 10,000 times better. Using methods from quantum information theory, we can find new forms of measurement which overcome the limitations of established methods.”
    These findings could allow significant improvements in the future to the precision of applications such as LIDAR, a method of optical distance and speed measurement, and GPS. It will take some time, however, before this is ready for the market, points out Brecht.

    Story Source:
    Materials provided by Universität Paderborn. Note: Content may be edited for style and length.

  • Deep neural network predicts transcription factors

    A joint research team from KAIST and UCSD has developed a deep neural network named DeepTFactor that predicts transcription factors from protein sequences. DeepTFactor will serve as a useful tool for understanding the regulatory systems of organisms, accelerating the use of deep learning for solving biological problems.
    A transcription factor is a protein that specifically binds to DNA sequences to control transcription initiation. Analyzing transcriptional regulation enables the understanding of how organisms control gene expression in response to genetic or environmental changes. In this regard, identifying an organism’s transcription factors is the first step in the analysis of its transcriptional regulatory system.
    Previously, transcription factors have been predicted by analyzing sequence homology with already characterized transcription factors or by data-driven approaches such as machine learning. Conventional machine learning models require a rigorous feature selection process that relies on domain expertise such as calculating the physicochemical properties of molecules or analyzing the homology of biological sequences. Meanwhile, deep learning can inherently learn latent features for the specific task.
    A joint research team comprised of Ph.D. candidate Gi Bae Kim and Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering at KAIST, and Ye Gao and Professor Bernhard O. Palsson of the Department of Biochemical Engineering at UCSD reported a deep learning-based tool for the prediction of transcription factors. Their research paper “DeepTFactor: A deep learning-based tool for the prediction of transcription factors” was published online in PNAS.
    Their article reports the development of DeepTFactor, a deep learning-based tool that predicts whether a given protein sequence is a transcription factor using three parallel convolutional neural networks. The joint research team predicted 332 transcription factors of Escherichia coli K-12 MG1655 using DeepTFactor and validated its performance by experimentally confirming the genome-wide binding sites of three predicted transcription factors (YqhC, YiaU, and YahB).
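    The broad shape of such an architecture can be sketched in PyTorch: one-hot encode the amino-acid sequence and pass it through three parallel convolutional branches whose pooled outputs are concatenated for a binary prediction. The layer sizes and kernel widths below are invented, not DeepTFactor’s actual hyperparameters:

    ```python
    # Schematic three-branch parallel CNN for binary classification of one-hot
    # encoded protein sequences, in the spirit of DeepTFactor. Layer sizes and
    # kernel widths are invented for the sketch.
    import torch
    import torch.nn as nn

    class ParallelCNN(nn.Module):
        def __init__(self, n_amino_acids=21):
            super().__init__()
            # Three parallel branches with different receptive fields
            self.branches = nn.ModuleList([
                nn.Sequential(nn.Conv1d(n_amino_acids, 32, kernel_size=k),
                              nn.ReLU(),
                              nn.AdaptiveMaxPool1d(1))
                for k in (4, 8, 16)
            ])
            self.classifier = nn.Sequential(nn.Linear(3 * 32, 1), nn.Sigmoid())

        def forward(self, x):              # x: (batch, n_amino_acids, seq_len)
            feats = [branch(x).squeeze(-1) for branch in self.branches]
            return self.classifier(torch.cat(feats, dim=1))  # P(sequence is a TF)

    model = ParallelCNN()
    dummy = torch.zeros(2, 21, 1000)       # two one-hot encoded dummy sequences
    print(model(dummy).shape)              # torch.Size([2, 1])
    ```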
    The joint research team further used a saliency method to understand the reasoning process of DeepTFactor. The researchers confirmed that even though information on the DNA-binding domains of transcription factors was not explicitly given during the training process, DeepTFactor implicitly learned and used it for prediction. Unlike previous transcription factor prediction tools, which were developed only for the protein sequences of specific organisms, DeepTFactor is expected to be applicable to the analysis of the transcription systems of all organisms at a high level of performance.
    Distinguished Professor Sang Yup Lee said, “DeepTFactor can be used to discover unknown transcription factors from numerous protein sequences that have not yet been characterized. It is expected that DeepTFactor will serve as an important tool for analyzing the regulatory systems of organisms of interest.”

    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  • Supercapacitors challenge batteries

    A team working with Roland Fischer, Professor of Inorganic and Metal-Organic Chemistry at the Technical University Munich (TUM) has developed a highly efficient supercapacitor. The basis of the energy storage device is a novel, powerful and also sustainable graphene hybrid material that has comparable performance data to currently utilized batteries.
    Usually, energy storage is associated with batteries and accumulators that provide energy for electronic devices. These days, however, so-called supercapacitors are increasingly being installed in laptops, cameras, cellphones and vehicles.
    Unlike batteries, they can store large amounts of energy quickly and release it just as fast. If, for instance, a train brakes when entering a station, supercapacitors store the braking energy and deliver it again when the train needs a burst of energy to start up.
    However, one problem with supercapacitors to date has been their low energy density. While lithium accumulators reach an energy density of up to 265 watt-hours per kilogram (Wh/kg), supercapacitors thus far have delivered only around a tenth of that.
    Sustainable material provides high performance
    The team working with TUM chemist Roland Fischer has now developed a novel, powerful as well as sustainable graphene hybrid material for supercapacitors. It serves as the positive electrode in the energy storage device. The researchers combine it with a proven negative electrode based on titanium and carbon.

    The new energy storage device not only attains an energy density of up to 73 Wh/kg, which is roughly equivalent to the energy density of a nickel-metal hydride battery, but also performs much better than most other supercapacitors, with a power density of 16 kW/kg. The secret of the new supercapacitor is the combination of different materials; chemists therefore refer to it as “asymmetrical.”
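    Taken together, the two figures show the supercapacitor character of the device: dividing the energy density by the power density gives the time over which the full stored energy could be delivered at peak power, a quick back-of-the-envelope calculation:

    ```python
    # Back-of-the-envelope: how long can the device sustain its peak power?
    energy_density = 73.0        # Wh/kg, as reported
    power_density = 16_000.0     # W/kg (16 kW/kg), as reported

    discharge_time_s = energy_density * 3600 / power_density
    print(f"full discharge at peak power in ~{discharge_time_s:.0f} s")  # ~16 s
    ```
    A store that can empty itself in seconds rather than hours is exactly what the braking-and-starting train scenario above calls for.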
    Hybrid materials: Nature is the role model
    The researchers are betting on a new strategy to overcome the performance limits of standard materials: they utilize hybrid materials. “Nature is full of highly complex, evolutionarily optimized hybrid materials — bones and teeth are examples. Their mechanical properties, such as hardness and elasticity, were optimized through the combination of various materials by nature,” says Roland Fischer.
    The research team transferred this abstract idea of combining basic materials to supercapacitors. As a basis for the novel positive electrode of the storage unit, they used chemically modified graphene and combined it with a nano-structured metal-organic framework, a so-called MOF.
    Powerful and stable
    Decisive for the performance of graphene hybrids are, on the one hand, a large specific surface area and controllable pore sizes, and, on the other hand, a high electrical conductivity. “The high performance capability of the material is based on the combination of the microporous MOFs with the conductive graphene acid,” explains first author Jayaramulu Kolleboyina, a former guest scientist working with Roland Fischer.

    A large surface area is important for good supercapacitors: it allows a correspondingly large number of charge carriers to be collected within the material, which is the basic principle of electrical energy storage.
    Through skillful material design, the researchers achieved the feat of linking the graphene acid with the MOFs. The resulting hybrid MOFs have a very large inner surface of up to 900 square meters per gram and are highly performant as positive electrodes in a supercapacitor.
    Long stability
    However, that is not the only advantage of the new material. To achieve a chemically stable hybrid, one needs strong chemical bonds between the components. The bonds are apparently the same as those between amino acids in proteins, according to Fischer: “In fact, we have connected the graphene acid with a MOF-amino acid, which creates a type of peptide bond.”
    The stable connection between the nano-structured components has huge advantages in terms of long term stability: The more stable the bonds, the more charging and discharging cycles are possible without significant performance impairment.
    For comparison: A classic lithium accumulator has a useful life of around 5,000 cycles. The new cell developed by the TUM researchers retains close to 90 percent capacity even after 10,000 cycles.
    International network of experts
    Fischer emphasizes how important unfettered, researcher-driven international cooperation was to the development of the new supercapacitor. Jayaramulu Kolleboyina, who built the team, was a guest scientist from India invited by the Alexander von Humboldt Foundation and is now head of the chemistry department at the newly established Indian Institute of Technology in Jammu.
    “Our team also networked with electro-chemistry and battery research experts in Barcelona as well as graphene derivate experts from the Czech Republic,” reports Fischer. “Furthermore, we have integrated partners from the USA and Australia. This wonderful, international co-operation promises much for the future.”
    The research was supported by the Deutsche Forschungsgemeinschaft (DFG) within the cluster of excellence e-conversion, the Alexander von Humboldt Foundation, the Indian Institute of Technology Jammu, the Queensland University of Technology and the Australian Research Council (ARC). Further funding came from the European Regional Development Fund provided by the Ministry of Education, Youth and Sports of the Czech Republic.

  • A robotic revolution for urban nature

    Drones, robots and autonomous systems can transform the natural world in and around cities for people and wildlife.
    International research, involving over 170 experts and led by the University of Leeds, assessed the opportunities and challenges that this cutting-edge technology could pose for urban nature and green spaces.
    The researchers highlighted opportunities to improve how we monitor nature, such as identifying emerging pests and ensuring plants are cared for, and helping people engage with and appreciate the natural world around them.
    As robotics, autonomous vehicles and drones become more widely used across cities, pollution and traffic congestion may reduce, making towns and cities more pleasant places to spend time outside.
    But the researchers also warned that advances in robotics and automation could be damaging to the environment.
    For instance, robots and drones might generate new sources of waste and pollution themselves, with potentially substantial negative implications for urban nature. Cities might have to be re-planned to provide enough room for robots and drones to operate, potentially leading to a loss of green space. And they could also increase existing social inequalities, such as unequal access to green space.

    Lead author Dr Martin Dallimer, from the School of Earth and Environment at the University of Leeds, said: “Technology, such as robotics, has the potential to change almost every aspect of our lives. As a society, it is vital that we proactively try to understand any possible side effects and risks of our growing use of robots and automated systems.
    “Although the future impacts on urban green spaces and nature are hard to predict, we need to make sure that the public, policy makers and robotics developers are aware of the potential pros and cons, so we can avoid detrimental consequences and fully realise the benefits.”
    The research, published today in Nature Ecology & Evolution, is authored by a team of 77 academics and practitioners.
    The researchers conducted an online survey of 170 experts from 35 countries, which they say provides a current best guess of what the future could hold.
    Participants gave their views on the potential opportunities and challenges for urban biodiversity and ecosystems, from the growing use of robotics and autonomous systems. These are defined as technologies that can sense, analyse, interact with and manipulate their physical environment. This includes unmanned aerial vehicles (drones), self-driving cars, robots able to repair infrastructure, and wireless sensor networks used for monitoring.

    These technologies have a large range of potential applications, such as autonomous transport, waste collection, infrastructure maintenance and repair, policing and precision agriculture.
    The research was conducted as part of Leeds’ Self Repairing Cities project, which aims to enable robots and autonomous systems to maintain urban infrastructure without causing disruption to citizens.
    First author Dr Mark Goddard conducted the work whilst at the University of Leeds and is now based at Northumbria University. He said: “Spending time in urban green spaces and interacting with nature brings a range of human health and well-being benefits, and robots are likely to transform many of the ways in which we experience and gain benefits from urban nature.
    “Understanding how robotics and autonomous systems will affect our interaction with nature is vital for ensuring that our future cities support wildlife that is accessible to all.”
    This work was funded by the Engineering and Physical Sciences Research Council (EPSRC).

  • A high order for a low dimension

    Spintronics refers to a suite of physical systems that may one day replace many electronic systems. To realize this generational leap, material components that confine electrons in one dimension are highly sought after. For the first time, researchers have created such a material, in the form of a special bismuth-based crystal known as a higher-order topological insulator.
    To create spintronic devices, new materials need to be designed that take advantage of quantum behaviors not seen in everyday life. You are probably familiar with conductors and insulators, which permit and restrict the flow of electrons, respectively. Semiconductors are common but less familiar to some; these usually insulate, but conduct under certain circumstances, making them ideal miniature switches.
    For spintronic applications, a new kind of electronic material is required, and it’s called a topological insulator. It differs from these other three materials by insulating throughout its bulk while conducting only along its surface. And what it conducts is not the flow of electrons themselves, but a property of them known as their spin or angular momentum. This spin current, as it’s known, could open up a world of ultrahigh-speed and low-power devices.
    However, not all topological insulators are equal: Two kinds, so-called strong and weak, have already been created, but have some drawbacks. As they conduct spin along their entire surface, the electrons present tend to scatter, which weakens their ability to convey a spin current. But since 2017, a third kind of topological insulator called a higher-order topological insulator has been theorized. Now, for the first time, one has been created by a team at the Institute for Solid State Physics at the University of Tokyo.
    “We created a higher-order topological insulator using the element bismuth,” said Associate Professor Takeshi Kondo. “It has the novel ability of being able to conduct a spin current along only its corner edges, essentially one-dimensional lines. As the spin current is bound to one dimension instead of two, the electrons do not scatter so the spin current remains stable.”
    To create this three-dimensional crystal, Kondo and his team stacked two-dimensional slices of crystal, each one atom thick, in a certain way. For strong or weak topological insulators, the crystal slices in the stack are all oriented the same way, like playing cards face down in a deck. But to create the higher-order topological insulator, the orientation of the slices was alternated: the metaphorical playing cards were placed face up, then face down, repeatedly throughout the stack. This subtle change in arrangement makes a huge change in the behavior of the resultant three-dimensional crystal.
    The crystal layers in the stack are held together by a quantum mechanical force called the van der Waals force. This is one of the rare kinds of quantum phenomena that you actually do see in daily life, as it is partly responsible for the way that powdered materials clump together and flow the way they do. In the crystal, it adheres the layers together.
    “It was exciting to see that the topological properties appear and disappear depending only on the way the two-dimensional atomic sheets were stacked,” said Kondo. “Such a degree of freedom in material design will bring new ideas, leading toward applications including fast and efficient spintronic devices, and things we have yet to envisage.”

    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.