More stories

  • For next-generation semiconductors, 2D tops 3D

    Netflix, which provides an online streaming service around the world, has 42 million videos and about 160 million subscribers in total. It takes just a few seconds to download a 30-minute video clip, and a show can be watched within 15 minutes of airing. As the distribution and transmission of high-quality content grow rapidly, it is critical to develop reliable and stable semiconductor memories.
    To this end, a POSTECH research team has developed a memory device using a two-dimensional layered-structure material, opening up the possibility of commercializing a next-generation memory device that can operate stably at low power.
    The team, consisting of Professor Jang-Sik Lee of the Department of Materials Science and Engineering, Professor Donghwa Lee of the Division of Advanced Materials Science, and PhD candidates Youngjun Park and Seong Hun Kim, succeeded in designing an optimal halide perovskite material (CsPb2Br5) for resistive random-access memory (ReRAM) devices by applying first-principles calculations based on quantum mechanics. The findings were published in Advanced Science.
    The ideal next-generation memory device should process information at high speed, store large amounts of information, retain that information when the power is off (non-volatility), and operate at low power for mobile devices.
    The recent discovery of the resistive switching property in halide perovskite materials has led to active research worldwide on applying them to ReRAM devices. However, the poor stability of halide perovskite materials when they are exposed to the atmosphere has been raised as an issue.
    The research team compared the relative stability and properties of halide perovskites with various structures using first-principles (density functional theory, DFT) calculations. The calculations predicted that CsPb2Br5, a two-dimensional layered structure of the form AB2X5, should be more stable than the three-dimensional ABX3 structure or other structures (A3B2X7, A2BX4), and that it could show improved performance in memory devices.
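    The screening logic can be illustrated with a short, purely hypothetical Python sketch: candidate phases are ranked by a computed energy per formula unit, with lower energy taken as relatively more stable. The phase names follow the article, but the numbers are placeholders rather than values from the study, and a real DFT screening would compare formation energies against competing phases (for example on a convex hull) rather than raw totals.

    ```python
    # Toy screening step: rank candidate halide perovskite phases by a hypothetical
    # DFT energy per formula unit (eV). These numbers are illustrative placeholders,
    # not results from the POSTECH study.
    candidate_energies_eV = {
        "CsPbBr3 (3D, ABX3)":   -15.10,
        "CsPb2Br5 (2D, AB2X5)": -15.42,
        "Cs3Pb2Br7 (A3B2X7)":   -15.05,
        "Cs2PbBr4 (A2BX4)":     -14.98,
    }

    # Lower energy per formula unit is taken here as "more stable".
    for phase, energy in sorted(candidate_energies_eV.items(), key=lambda kv: kv[1]):
        print(f"{phase:>22}: {energy:+.2f} eV per formula unit")
    ```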
    To verify this prediction, CsPb2Br5, an inorganic perovskite material with a two-dimensional layered structure, was synthesized and applied to memory devices for the first time. Memory devices based on the three-dimensional CsPbBr3 structure lost their memory characteristics at temperatures above 100 °C. In contrast, the memory devices using the two-dimensional layered structure of CsPb2Br5 maintained their memory characteristics above 140 °C and could be operated at voltages lower than 1 V.
    Professor Jang-Sik Lee, who led the research, commented, “Using this materials-design approach based on first-principles screening and experimental verification, the development of memory devices can be accelerated by reducing the time spent searching for new materials. By designing an optimal new material for memory devices through computer calculations and then actually fabricating devices from it, the material can be applied to memory devices in a variety of electronics, such as mobile devices that require low power consumption or servers that require reliable operation. This is expected to accelerate the commercialization of next-generation data storage devices.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Electron cryo-microscopy: Using inexpensive technology to produce high-resolution images

    Biochemists at Martin Luther University Halle-Wittenberg (MLU) have used a standard electron cryo-microscope to achieve surprisingly good images that are on par with those taken by far more sophisticated equipment. They have succeeded in determining the structure of ferritin almost at the atomic level. Their results were published in the journal PLOS ONE.
    Electron cryo-microscopy has become increasingly important in recent years, especially in shedding light on protein structures. The developers of the new technology were awarded the Nobel Prize for Chemistry in 2017. The trick: the samples are flash frozen and then bombarded with electrons. In the case of traditional electron microscopy, all of the water is first extracted from the sample. This is necessary because the investigation takes place in a vacuum, which means water would evaporate immediately and make imaging impossible. However, because water molecules play such an important role in biomolecules, especially in proteins, they cannot be examined using traditional electron microscopy. Proteins are among the most important building blocks of cells and perform a variety of tasks. In-depth knowledge of their structure is necessary in order to understand how they work.
    The research group led by Dr Panagiotis Kastritis, who is a group leader at the Centre for Innovation Competence HALOmem and a junior professor at the Institute of Biochemistry and Biotechnology at MLU, acquired a state-of-the-art electron cryo-microscope in 2019. “There is no other microscope like it in Halle,” says Kastritis. The new “Thermo Fisher Glacios 200 kV,” financed by the Federal Ministry of Education and Research, is not the best and most expensive microscope of its kind. Nevertheless, Kastritis and his colleagues succeeded in determining the structure of the iron storage protein apoferritin down to 2.7 ångströms (Å), in other words, almost down to the individual atom. One ångström equals one-tenth of a nanometre. This puts the research group in a similar league to departments with far more expensive equipment. Apoferritin is often used as a reference protein to determine the performance levels of corresponding microscopes. Just recently, two research groups set a new record with a resolution of about 1.2 Å. “Such values can only be achieved using very powerful instruments, which only a handful of research groups around the world have at their disposal. Our method is designed for microscopes found in many laboratories,” explains Kastritis.
    Electron cryo-microscopes are very complex devices. “Even tiny misalignments can render the images useless,” says Kastritis. It is important to programme them correctly and Halle has the technical expertise to do this. But the analysis that is conducted after the data has been collected is just as important. “The microscope produces several thousand images,” explains Kastritis. Image processing programmes are used to create a 3D structure of the molecule. In cooperation with Professor Milton T. Stubbs from the Institute of Biochemistry and Biotechnology at MLU, the researchers have developed a new method to create a high-resolution model of a protein. Stubbs’ research group uses X-ray crystallography, another technique for determining the structure of proteins, which requires the proteins to be crystallised. They were able to combine a modified form of an image analysis technique with the images taken with the electron cryo-microscope. This made charge states and individual water molecules visible.
    “It’s an attractive method,” says Kastritis. Instead of needing very expensive microscopes, a lot of computing capacity is required, which MLU has. Now, in addition to using X-ray crystallography, electron cryo-microscopy can be used to produce images of proteins — especially those that are difficult to crystallise. This enables collaboration, both inside and outside the university, on the structural analysis of samples with medical and biotechnological potential.

    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.

  • New materials for extra thin computer chips

    Ever smaller and ever more compact — this is the direction in which computer chips are developing, driven by industry. This is why so-called 2D materials are considered to be the great hope: they are as thin as a material can possibly be, in extreme cases they consist of only one single layer of atoms. This makes it possible to produce novel electronic components with tiny dimensions, high speed and optimal efficiency.
    However, there is one problem: electronic components always consist of more than one material. 2D materials can only be used effectively if they can be combined with suitable material systems — such as special insulating crystals. If this is not considered, the advantage that 2D materials are supposed to offer is nullified. A team from the Faculty of Electrical Engineering at the TU Wien (Vienna) is now presenting these findings in the journal Nature Communications.
    Reaching the End of the Line on the Atomic Scale
    “The semiconductor industry today uses silicon and silicon oxide,” says Prof. Tibor Grasser from the Institute of Microelectronics at the TU Wien. “These are materials with very good electronic properties. For a long time, ever thinner layers of these materials were used to miniaturize electronic components. This worked well for a long time — but at some point we reach a natural limit.”
    When the silicon layer is only a few nanometers thick, so that it only consists of a few atomic layers, then the electronic properties of the material deteriorate very significantly. “The surface of a material behaves differently from the bulk of the material — and if the entire object is practically only made up of surfaces and no longer has a bulk at all, it can have completely different material properties.”
    Therefore, one has to switch to other materials in order to create ultra-thin electronic components. And this is where the so-called 2D materials come into play: they combine excellent electronic properties with minimal thickness.
    Thin Layers Need Thin Insulators
    “As it turns out, however, these 2D materials are only the first half of the story,” says Tibor Grasser. “The materials have to be placed on the appropriate substrate, and an insulator layer is also needed on top of it — and this insulator also has to be extremely thin and of extremely good quality, otherwise you have gained nothing from the 2D materials. It’s like driving a Ferrari on muddy ground and wondering why you don’t set a speed record.”
    A team at the TU Wien led by Tibor Grasser and Yury Illarionov has therefore analysed how to solve this problem. “Silicon dioxide, which is normally used in industry as an insulator, is not suitable in this case,” says Tibor Grasser. “It has a very disordered surface and many free, unsaturated bonds that interfere with the electronic properties in the 2D material.”
    It is better to look for a well-ordered structure: The team has already achieved excellent results with special crystals containing fluorine atoms. A transistor prototype with a calcium fluoride insulator has already provided convincing data, and other materials are still being analysed.
    “New 2D materials are currently being discovered. That’s nice, but with our results we want to show that this alone is not enough,” says Tibor Grasser. “These new electrically conductive 2D materials must also be combined with new types of insulators. Only then can we really succeed in producing a new generation of efficient and powerful electronic components in miniature format.”

    Story Source:
    Materials provided by Vienna University of Technology. Original written by Florian Aigner. Note: Content may be edited for style and length.

  • A micro-lab on a chip detects blood type within minutes

    Blood transfusion, if performed promptly, is a potentially life-saving intervention for someone losing a lot of blood. However, blood comes in several types, some of which are incompatible with others. Transfusing an incompatible blood type can severely harm a patient. It is, therefore, critical for medical staff to know a patient’s blood type before they perform a transfusion.
    There are four major blood types — O, A, B, and AB. These types differ based on the presence or absence of structures called A antigens and B antigens on the surfaces of red blood cells. Blood can be further divided into positive and negative types based on the presence or absence of D antigens on red blood cells. Medical professionals usually determine a patient’s blood type using tests involving antibodies against the A and B antigens. When antibodies recognize the corresponding antigens, they bind to them, causing the blood cells to clump together and the blood to coagulate. Thus, specific antigen-antibody combinations tell us the blood type of a sample.
    Yet, while the concept sounds straightforward, the equipment and techniques required are often highly specialized. Tests, therefore, are non-portable, have high personnel costs, and can take over half an hour to yield results. This can prove problematic in several types of emergency situations.
    Aiming to solve these problems, a team of scientists at Japan’s Tokyo University of Science, led by Dr Ken Yamamoto and Dr Masahiro Motosuke, has developed a fully automated chip that can quickly and reliably determine a patient’s blood type. In the words of Dr Motosuke, he and his colleagues “have developed a compact and rapid blood-typing chip which also dilutes whole blood automatically.”
    The chip contains a micro-sized “laboratory” with various compartments through which the blood sample travels in sequence and is processed until results are obtained. To start the process, a user simply inserts a small amount of blood, presses a button, and waits for the result. Inside the chip, the blood is first diluted with a saline solution and air bubbles are introduced to promote mixing. The diluted blood is transported to a homogenizer where further mixing, driven by more intensely moving bubbles, yields a uniform solution. Portions of the homogenized blood solution are introduced into four different detector chambers. Two chambers each contain reagents that can detect either A antigens or B antigens. A third chamber contains reagents that detect D antigens and a fourth chamber contains only saline solution, with no reagent, and serves as a negative control chamber in which the user should not observe any results. Antigen-antibody reaction will cause blood to coagulate, and by looking at which chambers have coagulated blood, the user can tell the blood type and whether the blood is positive or negative.
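    The read-out logic described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the decision rule, not code from the Tokyo University of Science team: each argument records whether coagulation is visible in the corresponding chamber, and the saline-only control chamber serves purely as a validity check.

    ```python
    # Hypothetical sketch of the chip's read-out rule: map the coagulation pattern
    # in the four detector chambers to an ABO/RhD blood type.
    def read_blood_type(anti_a: bool, anti_b: bool, anti_d: bool, control: bool) -> str:
        """Each argument is True if coagulation is observed in that chamber."""
        if control:
            # The saline-only chamber should never coagulate; if it does, the run is invalid.
            return "invalid (control chamber coagulated)"
        abo = {(True, True): "AB", (True, False): "A",
               (False, True): "B", (False, False): "O"}[(anti_a, anti_b)]
        rh = "+" if anti_d else "-"
        return abo + rh

    # Example: coagulation in the anti-A and anti-D chambers only -> type A positive.
    print(read_blood_type(anti_a=True, anti_b=False, anti_d=True, control=False))  # A+
    ```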
    Further, the user does not require specialized optical equipment to read the results. The design of the detector chambers allows the easy identification of coagulated blood with the naked eye. The device is also highly sensitive and can even detect weak coagulation.
    During testing, the research team screened blood samples from 10 donors and obtained accurate results for all 10 samples. The time needed to determine a single sample’s blood type was only five minutes.
    Reflecting on the potential benefits of his team’s invention, Dr Motosuke remarks, “The advancement of simple and quick blood test chip technologies will lead to the simplification of medical care in emergency situations and will greatly reduce costs and the necessary labor on the part of medical staff.” Given the highly portable nature of the chip, Dr Motosuke also speculates that it could be used during aerial medical transport and in disaster response settings. This is a chip that has the potential to change the way emergency medical support is given.

    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Janggu makes deep learning a breeze

    Researchers from the MDC have developed a new tool that makes it easier to maximize the power of deep learning for studying genomics. They describe the new approach, Janggu, in the journal Nature Communications.
    Imagine that before you could make dinner, you first had to rebuild the kitchen, specifically designed for each recipe. You’d spend way more time on preparation than on actually cooking. For computational biologists, analyzing genomics data has been a similarly time-consuming process: before they can even begin their analysis, they spend a lot of valuable time formatting and preparing huge data sets to feed into deep learning models.
    To streamline this process, researchers from the Max Delbrueck Center for Molecular Medicine in the Helmholtz Association (MDC) developed a universal programming tool that converts a wide variety of genomics data into the required format for analysis by deep learning models. “Before, you ended up wasting a lot of time on the technical aspect, rather than focusing on the biological question you were trying to answer,” says Dr. Wolfgang Kopp, a scientist in the Bioinformatics and Omics Data Science research group at MDC’s Berlin Institute of Medical Systems Biology (BIMSB), and first author of the paper. “With Janggu, we are aiming to relieve some of that technical burden and make it accessible to as many people as possible.”
    Unique name, universal solution
    Janggu is named after a traditional Korean drum shaped like an hourglass turned on its side. The two large sections of the hourglass represent the two areas Janggu focuses on: pre-processing of genomics data on one side, and results visualization and model evaluation on the other. The narrow connector in the middle represents a placeholder for any type of deep learning model researchers wish to use.
    Deep learning models involve algorithms sorting through massive amounts of data and finding relevant features or patterns. While deep learning is a very powerful tool, its use in genomics has been limited. Most published models tend to work only with fixed types of data and are able to answer only one specific question. Swapping out or adding new data often requires starting over from scratch and extensive programming efforts.
    Janggu converts different genomics data types into a universal format that can be plugged into any machine learning or deep learning model that uses Python, a widely used programming language.
    “What makes our approach special is that you can easily use any genomic data set for your deep learning problem, anything goes in any format,” says Dr. Altuna Akalin, who heads the Bioinformatics and Omics Data Science research group.
    Separation is key
    Akalin’s research group has a dual mission: developing new machine learning tools, and using them to investigate questions in biology and medicine. During their own research efforts, they were continually frustrated by how much time was spent formatting data. They realized that part of the problem was that each deep learning model included its own data pre-processing. Separating the data extraction and formatting from the analysis makes it much easier to interchange, combine or reuse sections of data. It’s kind of like having all the kitchen tools and ingredients at your fingertips, ready to try out a new recipe.
    “The difficulty was finding the right balance between flexibility and usability,” Kopp says. “If it is too flexible, people will be drowned in different options and it will be difficult to get started.”
    Kopp has prepared several tutorials to help others begin using Janggu, along with example datasets and case studies. The Nature Communications paper demonstrates Janggu’s versatility in handling very large volumes of data, combining data streams, and answering different types of questions, such as predicting binding sites from DNA sequences and/or chromatin accessibility, as well as for classification and regression tasks.
    Endless applications
    While most of Janggu’s benefit is on the front end, the researchers wanted to provide a complete solution for deep learning. Janggu also includes visualization of results after the deep learning analysis and evaluates what the model has learned. Notably, the team incorporated “higher-order sequence encoding” into the package, which makes it possible to capture correlations between neighboring nucleotides. This helped increase the accuracy of some analyses. By making deep learning easier and more user-friendly, Janggu helps throw open the door to answering all kinds of biological questions.
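    To give a feel for what “higher-order sequence encoding” means, here is a small, generic Python sketch of order-k (k-mer) one-hot encoding: overlapping dinucleotides are encoded instead of single bases, so correlations between neighboring nucleotides become explicit features. This is an illustration of the general idea, not Janggu’s internal implementation, and the function name is invented for this example.

    ```python
    import itertools
    import numpy as np

    def kmer_onehot(seq: str, k: int = 2) -> np.ndarray:
        """One-hot encode overlapping k-mers of a DNA sequence (order-k encoding)."""
        bases = "ACGT"
        kmer_index = {"".join(p): i for i, p in enumerate(itertools.product(bases, repeat=k))}
        positions = len(seq) - k + 1
        encoding = np.zeros((positions, len(kmer_index)), dtype=np.float32)
        for pos in range(positions):
            kmer = seq[pos:pos + k]
            if kmer in kmer_index:  # skip k-mers containing N or other ambiguity codes
                encoding[pos, kmer_index[kmer]] = 1.0
        return encoding

    # A 6-base sequence with order-2 encoding yields a 5 x 16 matrix
    # (5 overlapping dinucleotides, 16 possible base pairs).
    print(kmer_onehot("ACGTAC", k=2).shape)  # (5, 16)
    ```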
    “One of the most interesting applications is predicting the effect of mutations on gene regulation,” Akalin says. “This is exciting because now we can start understanding individual genomes, for instance, we can pinpoint genetic variants that cause regulatory changes, or we can interpret regulatory mutations occurring in tumors.”

  • New research shows that laser spectral linewidth is a classical-physics phenomenon

    New ground-breaking research from the University of Surrey could change the way scientists understand and describe lasers — establishing a new relationship between classical and quantum physics.
    In a comprehensive study published by the journal Progress in Quantum Electronics, a researcher from Surrey, in partnership with a colleague from Karlsruhe Institute of Technology and Fraunhofer IOSB in Germany, calls into question 60 years of orthodoxy surrounding the principles of lasers and the laser spectral linewidth — the foundation for controlling and measuring wavelengths of light.
    In the new study, the researchers find that a fundamental principle of lasers, that the amplification of light compensates for the losses of the laser, is only an approximation. The team quantify and explain that a tiny excess loss, which is not balanced by the amplified light but by normal luminescence inside the laser, provides the answer to the spectral linewidth of the laser.
    One of these loss mechanisms, the outcoupling of light from the laser, produces the laser beam used in vehicle manufacturing, telecommunications, laser surgery, GPS and so much more.
    Markus Pollnau, Professor in Photonics at the University of Surrey, said: “Since the laser was invented in 1960, the laser spectral linewidth has been treated as the stepchild in the descriptions of lasers in textbooks and university teaching worldwide, because its quantum-physical explanation has placed extraordinary challenges even for the lecturers.
    “As we have explained in this study, there is a simple, easy-to-understand derivation of the laser spectral linewidth, and the underlying classical physics proves the quantum-physics attempt at explaining the laser spectral linewidth hopelessly incorrect. This result has fundamental consequences for quantum physics.”

    Story Source:
    Materials provided by University of Surrey. Original written by Dalitso Njolinjo. Note: Content may be edited for style and length.

  • Robust high-performance data storage through magnetic anisotropy

    The latest generation of magnetic hard drives is made of magnetic thin films, which are invar materials. They allow extremely robust and high-density data storage by local heating of ultrasmall nano-domains with a laser, so-called heat-assisted magnetic recording, or HAMR. The volume of such invar materials hardly expands despite heating. Thin films of iron-platinum nanograins are a technologically relevant material for such HAMR data memories. An international team led by the joint research group of Prof. Dr. Matias Bargheer at HZB and the University of Potsdam has now observed experimentally for the first time how a special spin-lattice interaction in these iron-platinum thin films cancels out the thermal expansion of the crystal lattice.
    In thermal equilibrium, iron-platinum (FePt) belongs to the class of invar materials, which hardly expand at all when heated. This phenomenon was observed as early as 1897 in the nickel-iron alloy “Invar,” but it is only in recent years that experts have been able to understand which mechanisms are driving it: Normally, heating of solids leads to lattice vibrations which cause expansion because the vibrating atoms need more space. Surprisingly, however, heating the spins in FePt leads to the opposite effect: the warmer the spins are, the more the material contracts along the direction of magnetisation. The result is the property known from Invar: minimal expansion.
    A team led by Prof. Matias Bargheer has now experimentally compared this fascinating phenomenon for the first time on different iron-platinum thin films. Bargheer heads a joint research group at Helmholtz-Zentrum Berlin and the University of Potsdam. Together with colleagues from Lyon, Brno and Chemnitz, he wanted to investigate how the behavior of perfectly crystalline FePt layers differs from that of the FePt thin films used for HAMR memories. These consist of crystalline nanograins of stacked monatomic layers of iron and platinum embedded in a carbon matrix.
    The samples were locally heated and excited with two laser pulses in quick succession and then measured by X-ray diffraction to determine how strongly the crystal lattice expands or contracts locally.
    “We were surprised to find that the continuous crystalline layers expand when heated briefly with laser light, while loosely arranged nanograins contract in the same crystal orientation,” explains Bargheer. “HAMR data memories, on the other hand, whose nanograins are embedded in a carbon matrix and grown on a substrate, react much more weakly to laser excitation: They first contract slightly and then expand slightly.”
    “Through these experiments with ultrashort X-ray pulses, we have been able to determine how important the morphology of such thin films is,” says Alexander von Reppert, first author of the study and PhD student in Bargheer’s group. The secret is transverse contraction, also known as the Poisson effect. “Everyone who has ever pressed firmly on an eraser knows this,” says Bargheer. “The rubber gets thicker in the middle.” And von Reppert adds: “The nanoparticles can do that too, whereas in the perfect film there is no room for expansion in the plane, which would have to go along with the spin-driven contraction perpendicular to the film.”
    So FePt, embedded in a carbon matrix, is a very special material. It not only has exceptionally robust magnetic properties. Its thermomechanical properties also prevent excessive tension from being created when heated, which would destroy the material — and that is important for HAMR!

    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  • Magnetic memory states go exponential

    A newly discovered ability to stabilize and control an exponential number of discrete magnetic states in a relatively simple structure may pave the way to multi-level magnetic memory with an extremely large number of states per cell.
    Spintronics is a thriving branch of nano-electronics which utilizes the spin of the electron and its associated magnetic moment in addition to the electron charge used in traditional electronics. The main practical contributions of spintronics to date are in magnetic sensing and non-volatile magnetic data storage, and additional breakthroughs in developing magnetism-based processing and novel types of magnetic memory are expected.
    Spintronic devices commonly consist of magnetic elements that are switched between stable magnetic states by spin-polarized currents. When spintronic devices are used for storing data, the number of stable states sets an upper limit on memory capacity. While current commercial magnetic memory cells have two stable magnetic states corresponding to two memory states, there are clear advantages to increasing this number, as it would potentially allow higher memory density and enable the design of novel types of memory.
    Now, a group of researchers led by Prof. Lior Klein, from the physics department and the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University, has shown that relatively simple structures can support an exponential number of magnetic states — much greater than previously thought. The studied structures are magnetic thin films patterned in the form of N crossing ellipses, which have two to the power of 2N magnetization states. Furthermore, the researchers demonstrated switching between the states by generating spin currents. Their research appears as a featured article on the cover of a June issue of Applied Physics Letters.
    The ability to stabilize and control an exponential number of discrete magnetic states in a relatively simple structure constitutes a major contribution to spintronics. “This finding may pave the way to multi-level magnetic memory with an extremely large number of states per cell (e.g., 256 states when N=4), be used for neuromorphic computing, and more,” says Prof. Klein, whose research group includes Dr. Shubhankar Das, Ariel Zaig, and Dr. Moty Schultz.
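    As a quick check on the scaling quoted above, the short Python snippet below simply evaluates two to the power of 2N for a few values of N and reproduces the 256-state example for N=4; it restates the article's formula rather than modeling the devices themselves.

    ```python
    # N crossing ellipses support 2**(2*N) magnetization states, i.e. each ellipse
    # contributes two binary degrees of freedom.
    for n_ellipses in range(1, 6):
        states = 2 ** (2 * n_ellipses)
        bits_per_cell = 2 * n_ellipses  # log2 of the number of states
        print(f"N = {n_ellipses}: {states:5d} states per cell ({bits_per_cell} bits)")

    # N = 4 gives 256 states per cell, matching the example in the article.
    ```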

    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.