More stories

  • New materials for extra thin computer chips

    Ever smaller and ever more compact — this is the direction in which computer chips are developing, driven by industry. This is why so-called 2D materials are considered to be the great hope: they are as thin as a material can possibly be; in extreme cases they consist of just a single layer of atoms. This makes it possible to produce novel electronic components with tiny dimensions, high speed and optimal efficiency.
    However, there is one problem: electronic components always consist of more than one material. 2D materials can only be used effectively if they can be combined with suitable material systems — such as special insulating crystals. If this is not considered, the advantage that 2D materials are supposed to offer is nullified. A team from the Faculty of Electrical Engineering at the TU Wien (Vienna) is now presenting these findings in the journal Nature Communications.
    Reaching the End of the Line on the Atomic Scale
    “The semiconductor industry today uses silicon and silicon oxide,” says Prof. Tibor Grasser from the Institute of Microelectronics at the TU Wien. “These are materials with very good electronic properties. For a long time, ever thinner layers of these materials were used to miniaturize electronic components. This worked well for a long time — but at some point we reach a natural limit.”
    When the silicon layer is only a few nanometers thick, so that it only consists of a few atomic layers, then the electronic properties of the material deteriorate very significantly. “The surface of a material behaves differently from the bulk of the material — and if the entire object is practically only made up of surfaces and no longer has a bulk at all, it can have completely different material properties.”
    Therefore, one has to switch to other materials in order to create ultra-thin electronic components. And this is where the so-called 2D materials come into play: they combine excellent electronic properties with minimal thickness.
    Thin Layers Need Thin Insulators
    “As it turns out, however, these 2D materials are only the first half of the story,” says Tibor Grasser. “The materials have to be placed on the appropriate substrate, and an insulator layer is also needed on top of it — and this insulator also has to be extremely thin and of extremely good quality, otherwise you have gained nothing from the 2D materials. It’s like driving a Ferrari on muddy ground and wondering why you don’t set a speed record.”
    A team at the TU Wien around Tibor Grasser and Yury Illarionov has therefore analysed how to solve this problem. “Silicon dioxide, which is normally used in industry as an insulator, is not suitable in this case,” says Tibor Grasser. “It has a very disordered surface and many free, unsaturated bonds that interfere with the electronic properties in the 2D material.”
    It is better to look for a well-ordered structure: The team has already achieved excellent results with special crystals containing fluorine atoms. A transistor prototype with a calcium fluoride insulator has already provided convincing data, and other materials are still being analysed.
    “New 2D materials are currently being discovered. That’s nice, but with our results we want to show that this alone is not enough,” says Tibor Grasser. “These new electrically conductive 2D materials must also be combined with new types of insulators. Only then can we really succeed in producing a new generation of efficient and powerful electronic components in miniature format.”

    Story Source:
    Materials provided by Vienna University of Technology. Original written by Florian Aigner. Note: Content may be edited for style and length.

  • A micro-lab on a chip detects blood type within minutes

    Blood transfusion, if performed promptly, is a potentially life-saving intervention for someone losing a lot of blood. However, blood comes in several types, some of which are incompatible with others. Transfusing an incompatible blood type can severely harm a patient. It is, therefore, critical for medical staff to know a patient’s blood type before they perform a transfusion.
    There are four major blood types — O, A, B, and AB. These types differ based on the presence or absence of structures called A antigens and B antigens on the surfaces of red blood cells. Blood can be further divided into positive and negative types based on the presence or absence of D antigens on red blood cells. Medical professionals usually determine a patient’s blood type with tests involving antibodies against the A and B antigens. When antibodies recognize the corresponding antigens, they bind to them, causing the blood cells to clump together and the blood to coagulate. Thus, specific antigen-antibody combinations tell us what the blood type of a blood sample is.
    Yet, while the concept sounds straightforward, the equipment and techniques required are often very specialized. Tests, therefore, are non-portable, have high personnel costs, and can take over half an hour to yield results. This can prove problematic in several types of emergency situations.
    Aiming to solve these problems, a team of scientists at Japan’s Tokyo University of Science, led by Dr Ken Yamamoto and Dr Masahiro Motosuke, has developed a fully automated chip that can quickly and reliably determine a patient’s blood type. In the words of Dr Motosuke, he and his colleagues “have developed a compact and rapid blood-typing chip which also dilutes whole blood automatically.”
    The chip contains a micro-sized “laboratory” with various compartments through which the blood sample travels in sequence and is processed until results are obtained. To start the process, a user simply inserts a small amount of blood, presses a button, and waits for the result. Inside the chip, the blood is first diluted with a saline solution and air bubbles are introduced to promote mixing. The diluted blood is transported to a homogenizer where further mixing, driven by more intensely moving bubbles, yields a uniform solution. Portions of the homogenized blood solution are introduced into four different detector chambers. One chamber contains reagents that detect A antigens and another contains reagents that detect B antigens. A third chamber contains reagents that detect D antigens, and a fourth contains only saline solution, with no reagent, serving as a negative control in which the user should not observe any reaction. Antigen-antibody reactions cause the blood to coagulate, and by looking at which chambers contain coagulated blood, the user can tell the blood type and whether it is positive or negative.
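    The readout logic itself is simple enough to write down: each chamber either coagulates or it does not, and the pattern maps directly to a blood type. The minimal Python sketch below illustrates that mapping; the function name, boolean interface and invalid-test handling are illustrative assumptions, not part of the chip's actual design.

```python
def blood_type(anti_a: bool, anti_b: bool, anti_d: bool, control: bool) -> str:
    """Map coagulation results from the four detector chambers to a blood type.

    Each argument is True if coagulation is observed in that chamber.
    The chamber names and this interface are illustrative only.
    """
    if control:
        # Coagulation in the saline-only chamber would mean the test is invalid.
        raise ValueError("Invalid test: coagulation in the negative control chamber")

    abo = {(False, False): "O", (True, False): "A",
           (False, True): "B", (True, True): "AB"}[(anti_a, anti_b)]
    rh = "+" if anti_d else "-"
    return abo + rh

# Example: coagulation with the anti-A and anti-D reagents only -> type A positive
print(blood_type(anti_a=True, anti_b=False, anti_d=True, control=False))  # A+
```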
    Further, the user does not require specialized optical equipment to read the results. The design of the detector chambers allows the easy identification of coagulated blood with the naked eye. The device is also highly sensitive and can even detect weak coagulation.
    During testing, the research team screened blood samples from 10 donors and obtained accurate results for all 10 samples. The time needed to determine a single sample’s blood type was only five minutes.
    Reflecting on the potential benefits of his team’s invention, Dr Motosuke remarks, “The advancement of simple and quick blood test chip technologies will lead to the simplification of medical care in emergency situations and will greatly reduce costs and the necessary labor on the part of medical staff.” Given the highly portable nature of the chip, Professor Motosuke also speculates that it could be used during aerial medical transport and in disaster response settings. This is a chip that has the potential to change the way emergency medical support is given.

    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Janggu makes deep learning a breeze

    Researchers from the MDC have developed a new tool that makes it easier to maximize the power of deep learning for studying genomics. They describe the new approach, Janggu, in the journal Nature Communications.
    Imagine that before you could make dinner, you first had to rebuild the kitchen specifically for each recipe. You’d spend far more time on preparation than on actually cooking. For computational biologists, analyzing genomics data has been a similarly time-consuming process. Before they can even begin their analysis, they spend a lot of valuable time formatting and preparing huge data sets to feed into deep learning models.
    To streamline this process, researchers from the Max Delbrueck Center for Molecular Medicine in the Helmholtz Association (MDC) developed a universal programming tool that converts a wide variety of genomics data into the required format for analysis by deep learning models. “Before, you ended up wasting a lot of time on the technical aspect, rather than focusing on the biological question you were trying to answer,” says Dr. Wolfgang Kopp, a scientist in the Bioinformatics and Omics Data Science research group at MDC’s Berlin Institute of Medical Systems Biology (BIMSB), and first author of the paper. “With Janggu, we are aiming to relieve some of that technical burden and make it accessible to as many people as possible.”
    Unique name, universal solution
    Janggu is named after a traditional Korean drum shaped like an hourglass turned on its side. The two large sections of the hourglass represent the areas on which Janggu focuses: pre-processing of genomics data, and results visualization and model evaluation. The narrow connector in the middle represents a placeholder for any type of deep learning model researchers wish to use.
    Deep learning models involve algorithms sorting through massive amounts of data and finding relevant features or patterns. While deep learning is a very powerful tool, its use in genomics has been limited. Most published models tend to work only with fixed types of data and can answer only one specific question. Swapping out or adding new data often requires starting over from scratch and extensive programming effort.

    Janggu converts different genomics data types into a universal format that can be plugged into any machine learning or deep learning model that uses Python, a widely used programming language.
    “What makes our approach special is that you can easily use any genomic data set for your deep learning problem, anything goes in any format,” says Dr. Altuna Akalin, who heads the Bioinformatics and Omics Data Science research group.
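    To illustrate what such a universal format looks like in practice, the sketch below one-hot encodes fixed-length DNA windows into NumPy arrays of shape (windows, length, 4), the kind of tensor any Python deep learning model can consume. It is a generic illustration of the idea, not Janggu's own API; the helper names are invented for this example.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode a DNA sequence into a (len(seq), 4) array.

    Unknown bases (e.g. 'N') stay all-zero. This is a generic illustration
    of the array format, not Janggu's own API.
    """
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in idx:
            out[pos, idx[base]] = 1.0
    return out

# Fixed-length windows extracted from a (toy) sequence become a 3D tensor of
# shape (n_windows, window_length, 4) that any Python model can take as input.
sequence = "ACGTAGCTAGCTTACGGATC"
window = 10
windows = [sequence[i:i + window] for i in range(0, len(sequence) - window + 1, window)]
x = np.stack([one_hot(w) for w in windows])
print(x.shape)  # (2, 10, 4)
```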
    Separation is key
    Akalin’s research group has a dual mission: developing new machine learning tools, and using them to investigate questions in biology and medicine. During their own research efforts, they were continually frustrated by how much time was spent formatting data. They realized that part of the problem was that each deep learning model included its own data pre-processing. Separating data extraction and formatting from the analysis makes it much easier to interchange, combine or reuse sections of data. It’s kind of like having all the kitchen tools and ingredients at your fingertips, ready to try out a new recipe.
    “The difficulty was finding the right balance between flexibility and usability,” Kopp says. “If it is too flexible, people will be drowned in different options and it will be difficult to get started.”
    Kopp has prepared several tutorials to help others begin using Janggu, along with example datasets and case studies. The Nature Communications paper demonstrates Janggu’s versatility in handling very large volumes of data, combining data streams, and answering different types of questions, such as predicting binding sites from DNA sequences and/or chromatin accessibility, as well as for classification and regression tasks.
    Endless applications
    While most of Janggu’s benefit is on the front end, the researchers wanted to provide a complete solution for deep learning. Janggu also includes visualization of results after the deep learning analysis and evaluates what the model has learned. Notably, the team incorporated “higher-order sequence encoding” into the package, which makes it possible to capture correlations between neighboring nucleotides. This helped increase the accuracy of some analyses. By making deep learning easier and more user-friendly, Janggu helps throw open the door to answering all kinds of biological questions.
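    As a rough sketch of what higher-order encoding means (a generic illustration, not the package's own implementation): instead of encoding single bases, overlapping dinucleotides are encoded, so each position carries one of 16 possible symbols and correlations between neighboring bases become explicit input features.

```python
import numpy as np
from itertools import product

DINUCS = ["".join(p) for p in product("ACGT", repeat=2)]  # 16 dinucleotides
INDEX = {d: i for i, d in enumerate(DINUCS)}

def dinucleotide_one_hot(seq: str) -> np.ndarray:
    """One-hot encode overlapping dinucleotides: shape (len(seq) - 1, 16)."""
    out = np.zeros((len(seq) - 1, len(DINUCS)), dtype=np.float32)
    for pos in range(len(seq) - 1):
        pair = seq[pos:pos + 2].upper()
        if pair in INDEX:          # pairs containing 'N' stay all-zero
            out[pos, INDEX[pair]] = 1.0
    return out

print(dinucleotide_one_hot("ACGTN").shape)  # (4, 16)
```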
    “One of the most interesting applications is predicting the effect of mutations on gene regulation,” Akalin says. “This is exciting because now we can start understanding individual genomes, for instance, we can pinpoint genetic variants that cause regulatory changes, or we can interpret regulatory mutations occurring in tumors.”

  • New research shows that laser spectral linewidth is classical-physics phenomenon

    New ground-breaking research from the University of Surrey could change the way scientists understand and describe lasers — establishing a new relationship between classical and quantum physics.
    In a comprehensive study published by the journal Progress in Quantum Electronics, a researcher from Surrey, in partnership with a colleague from Karlsruhe Institute of Technology and Fraunhofer IOSB in Germany, calls into question 60 years of orthodoxy surrounding the principles of lasers and the laser spectral linewidth — the foundation for controlling and measuring wavelengths of light.
    In the new study, the researchers find that a fundamental principle of lasers, that the amplification of light compensates for the losses of the laser, is only an approximation. The team quantify and explain that a tiny excess loss, which is not balanced by the amplified light but by normal luminescence inside the laser, provides the answer to the spectral linewidth of the laser.
    One of these loss mechanisms, the outcoupling of light from the laser, produces the laser beam used in vehicle manufacturing, telecommunications, laser surgery, GPS and so much more.
    Markus Pollnau, Professor in Photonics at the University of Surrey, said: “Since the laser was invented in 1960, the laser spectral linewidth has been treated as the stepchild in the descriptions of lasers in textbooks and university teaching worldwide, because its quantum-physical explanation has posed extraordinary challenges even for the lecturers.
    “As we have explained in this study, there is a simple, easy-to-understand derivation of the laser spectral linewidth, and the underlying classical physics proves the quantum-physics attempt at explaining the laser spectral linewidth hopelessly incorrect. This result has fundamental consequences for quantum physics.”

    Story Source:
    Materials provided by University of Surrey. Original written by Dalitso Njolinjo. Note: Content may be edited for style and length.

  • Robust high-performance data storage through magnetic anisotropy

    The latest generation of magnetic hard drives is made of magnetic thin films, which are invar materials. They allow extremely robust and high-density data storage by local heating of ultrasmall nano-domains with a laser, so-called heat-assisted magnetic recording or HAMR. The volume of such invar materials hardly expands despite the heating. A technologically relevant material for such HAMR data memories is a thin film of iron-platinum nanograins. An international team led by the joint research group of Prof. Dr. Matias Bargheer at HZB and the University of Potsdam has now observed experimentally for the first time how a special spin-lattice interaction in these iron-platinum thin films cancels out the thermal expansion of the crystal lattice.
    In thermal equilibrium, iron-platinum (FePt) belongs to the class of invar materials, which hardly expand at all when heated. This phenomenon was observed as early as 1897 in the nickel-iron alloy “Invar,” but it is only in recent years that experts have been able to understand which mechanisms are driving it: normally, heating a solid leads to lattice vibrations that cause expansion, because the vibrating atoms need more space. Surprisingly, however, heating the spins in FePt leads to the opposite effect: the warmer the spins are, the more the material contracts along the direction of magnetisation. The result is the property known from Invar: minimal expansion.
    A team led by Prof. Matias Bargheer has now experimentally compared this fascinating phenomenon for the first time on different iron-platinum thin films. Bargheer heads a joint research group at Helmholtz-Zentrum Berlin and the University of Potsdam. Together with colleagues from Lyon, Brno and Chemnitz, he wanted to investigate how the behavior of perfectly crystalline FePt layers differs from the FePt thin films used for HAMR memories. These consist of crystalline nanograins of stacked monatomic layers of iron and platinum embedded in a carbon matrix.
    The samples were locally heated and excited with two laser pulses in quick succession and then measured by X-ray diffraction to determine how strongly the crystal lattice expands or contracts locally.
    “We were surprised to find that the continuous crystalline layers expand when heated briefly with laser light, while loosely arranged nano grains contract in the same crystal orientation,” explains Bargheer. “HAMR data memories, on the other hand, whose nano-grains are embedded in a carbon matrix and grown on a substrate, react much more weakly to laser excitation: they first contract slightly and then expand slightly.”
    “Through these experiments with ultrashort X-ray pulses, we have been able to determine how important the morphology of such thin films is,” says Alexander von Reppert, first author of the study and PhD student in Bargheer’s group. The secret is transverse contraction, also known as the Poisson effect. “Everyone who has ever pressed firmly on an eraser knows this,” says Bargheer. “The rubber gets thicker in the middle.” And von Reppert adds: “The nanoparticles can do that too, whereas in the perfect film there is no room for expansion in the plane, which would have to go along with the spin-driven contraction perpendicular to the film.”
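    In textbook terms, the Poisson effect couples strain along one axis to strain in the perpendicular directions through Poisson's ratio; the relation below is the generic isotropic-elasticity expression, not a value reported in the study.

```latex
% Transverse strain accompanying an axial strain via Poisson's ratio \nu
% (generic relation for an isotropic solid; \nu is around 0.3 for many metals):
\varepsilon_{\mathrm{trans}} = -\,\nu\,\varepsilon_{\mathrm{axial}}
```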
    So FePt, embedded in a carbon matrix, is a very special material. It not only has exceptionally robust magnetic properties. Its thermomechanical properties also prevent excessive tension from being created when heated, which would destroy the material — and that is important for HAMR!

    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  • Magnetic memory states go exponential

    A newly discovered ability to stabilize and control an exponential number of discrete magnetic states in a relatively simple structure may pave the way to multi-level magnetic memory with an extremely large number of states per cell.
    Spintronics is a thriving branch of nano-electronics which utilizes the spin of the electron and its associated magnetic moment in addition to the electron charge used in traditional electronics. The main practical contributions of spintronics to date are in magnetic sensing and non-volatile magnetic data storage, and additional breakthroughs in developing magnetism-based processing and novel types of magnetic memory are expected.
    Spintronic devices commonly consist of magnetic elements that are switched between stable magnetic states by spin-polarized currents. When spintronic devices are used for storing data, the number of stable states sets an upper limit on memory capacity. While current commercial magnetic memory cells have two stable magnetic states corresponding to two memory states, there are clear advantages to increasing this number, as doing so could increase memory density and enable the design of novel types of memory.
    Now, a group of researchers led by Prof. Lior Klein, from the physics department and the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University, has shown that relatively simple structures can support an exponential number of magnetic states — much greater than previously thought. The studied structures are magnetic thin films patterned in the form of N crossing ellipses, which have 2^(2N) magnetization states. Furthermore, the researchers demonstrated switching between the states by generating spin currents. Their research appears as a featured article on the cover of a June issue of Applied Physics Letters.
    The ability to stabilize and control an exponential number of discrete magnetic states in a relatively simple structure constitutes a major contribution to spintronics. “This finding may pave the way to multi-level magnetic memory with an extremely large number of states per cell (e.g., 256 states when N=4), be used for neuromorphic computing, and more,” says Prof. Klein, whose research group includes Dr. Shubhankar Das, Ariel Zaig, and Dr. Moty Schultz.
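    The scaling quoted above is easy to check: N crossing ellipses give 2^(2N) stable states, i.e. 2N bits of information per cell. A back-of-the-envelope sketch (the loop and variable names are purely illustrative):

```python
# Stable magnetization states for N crossing ellipses, per the reported scaling: 2**(2N).
for n_ellipses in range(1, 5):
    states = 2 ** (2 * n_ellipses)
    bits_per_cell = 2 * n_ellipses  # log2(states)
    print(f"N={n_ellipses}: {states} states ({bits_per_cell} bits per cell)")
# N=4 yields 256 states, i.e. 8 bits per cell, matching the example in the text.
```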

    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length.

  • Fair justice systems need open data access

    Although U.S. court documents are publicly available online, they sit behind expensive paywalls inside a difficult-to-navigate database.
    A Northwestern University-led team says these barriers prevent the transparency needed to establish a fair and equal justice system. Making all court records open and available will allow researchers to systematically study and evaluate the U.S. justice system, yielding information with potential to direct policy.
    “In principle, litigation is supposed to be open to the public,” said Northwestern data scientist Luís A. Nunes Amaral. “In reality, the lack of access to court records seemingly undercuts any claim that the courts are truly ‘open.’”
    The new insights will be published on Friday, July 10 in the journal Science. Amaral is the corresponding author of the paper. His co-authors include computer and data scientists, legal scholars, journalists and policy experts.
    Northwestern artificial intelligence (A.I.) researcher Kristian Hammond and the C3 Lab are developing an A.I. platform that provides users with access to the information and insights hidden inside federal court records, regardless of their data and analytic skills.
    “The problem with court data is the same problem with a lot of datasets,” Hammond said. “The data cost money, and the technical skills to use them cost money. That means very few people have access — not just to the data — but the information that we all need that’s hidden inside of it.”
    With this tool, the researchers can link courtroom data to other public data to explore questions such as: How do different judges affect the outcomes of similar cases? Does it make a difference to be defended by a big law firm compared to a smaller one? And how many cases settle?

    “We really can ask the broadest questions,” Amaral said. “The ultimate goal is to ask if the court system is acting fairly.”
    Amaral is the Erastus Otis Haven Professor of Chemical and Biological Engineering in Northwestern’s McCormick School of Engineering and the director of the Northwestern Institute on Complex Systems. Hammond is the Bill and Cathy Osborn Professor of Computer Science at McCormick and the director of Northwestern’s Master of Science in Artificial Intelligence program.
    Northwestern co-authors include data scientist Adam Pah from the Kellogg School of Management; legal scholars David Schwartz, Sarath Sanga, Zachary Clopton and Peter DiCola from the Northwestern Pritzker School of Law and journalism researcher Rachel Davis Mersey from the Medill School of Journalism.
    Evaluating access to justice
    To help quantify and evaluate citizens’ access to justice, the researchers examined judicial waiver decisions. Anyone who files a lawsuit in a federal court must pay a $400 filing fee, which is unaffordable for many Americans. To waive these fees, litigants can file an application. Because there is no uniform standard for reviewing these requests, the Northwestern team found that judges’ decisions varied widely. In one federal district alone, judges approved waivers anywhere from less than 20% to more than 80% of the time.

    “If all judges reviewed fee waiver applications under the same standard, then grant rates should not systematically differ within districts,” the authors wrote. “We find, however, that they do.”
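    Once the records are free and machine-readable, the check described here reduces to a few lines of analysis. The sketch below computes per-judge fee-waiver grant rates within each district using pandas; the table and its column names are hypothetical stand-ins, since the SCALES-OKN schema is not described here.

```python
import pandas as pd

# Hypothetical fee-waiver applications; real SCALES-OKN field names may differ.
waivers = pd.DataFrame({
    "district": ["N.D. Ill."] * 6 + ["S.D.N.Y."] * 4,
    "judge":    ["A", "A", "B", "B", "C", "C", "D", "D", "E", "E"],
    "granted":  [1, 0, 1, 1, 0, 0, 1, 1, 1, 0],
})

# Grant rate per judge within each district; a wide spread inside a single
# district is the kind of systematic variation the authors describe.
rates = (waivers.groupby(["district", "judge"])["granted"]
                .mean()
                .rename("grant_rate"))
print(rates)
print(rates.groupby("district").agg(["min", "max"]))
```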
    The research team believes these kinds of variation can be addressed if the public can access and analyze court records, giving the justice system quantitative feedback. To do this, the researchers recommend a three-pronged approach:

    1. Make court records free to dismantle the barrier to access;
    2. Link courtroom data to external data — such as information on judges, litigants and lawyers — to build a collaborative knowledge network;
    3. Empower the public by providing access to the information that flows from the analysis of the federal court data.

    Transforming study and journalistic coverage
    To help with this approach, the researchers are developing SCALES-OKN (Systematic Content Analysis of Litigation Events Open Knowledge Network), an A.I.-powered platform that makes the federal courtroom data and insights available to the public. The team believes the tool has potential to transform the ways academics, scientists and researchers approach legal study, as well as how journalists cover the justice system.
    “Our ability to understand and improve the law — everything from employment discrimination to intellectual property to securities regulation — depends critically on our ability to access legal data,” said Sanga, an associate professor at Northwestern Law. “By opening up court records, SCALES will finally enable researchers to systematically examine the court system and the practice of law. Social scientists will use this resource in much the same way that they use the U.S. Census. It will provide both a detailed and big picture view of the process by which litigants navigate the justice system, as well as the process by which judges administer justice.”
    “SCALES will transform the way journalists are able to cover the American justice system,” said Mersey, associate dean of research at Medill. “The interface will allow reporters, both with and without data analytics skills, to quickly and easily access judicial information and court records to cover issues of social justice, equity and due process. At a time when media organizations have trimmed newsroom staffs and decreased the amount of money that can be spent gathering information, SCALES will prove to be a powerful partner in ensuring the justice system operates in an open and accessible way.”

  • Topological materials 'cherned' up to the maximum

    In topological materials, electrons can display behaviour that is fundamentally different from that in ‘conventional’ matter, and the magnitude of many such ‘exotic’ phenomena is directly proportional to an entity known as the Chern number. New experiments establish for the first time that the theoretically predicted maximum Chern number can be reached — and controlled — in a real material.
    When the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics 2016 to David Thouless, Duncan Haldane and Michael Kosterlitz, they lauded the trio for having “opened the door on an unknown world where matter can assume strange states.” Far from being an oddity, the discoveries of topological phase transitions and topological phases of matter, to which the three theoreticians contributed so crucially, have grown into one of the most active fields of research in condensed matter physics today. Topological materials hold the promise, for instance, of leading to novel types of electronic components and superconductors, and they harbour deep connections across areas of physics and mathematics. While new phenomena are discovered routinely, there are fundamental aspects yet to be settled. One of those is just how ‘strong’ topological phenomena can be in a real material. Addressing that question, an international team of researchers led by PSI postdoctoral researcher Niels Schröter now provides an important benchmark. Writing in Science, they report experiments in which they observed that, in the topological semimetal palladium gallium (PdGa), one of the most common classifiers of topological phenomena, the Chern number, can reach the maximum value allowed in any metallic crystal. That this is possible in a real material has never been shown before. Moreover, the team has established ways to control the sign of the Chern number, which might bring new opportunities for exploring, and exploiting, topological phenomena.
    Developed to the maximum
    In theoretical works it had been predicted that in topological semimetals the Chern number cannot exceed a magnitude of four. As candidate systems displaying phenomena with such maximal Chern numbers, chiral crystals were proposed. These are materials whose lattice structures have a well-defined handedness, in the sense that they cannot be transformed into their mirror image by any combination of rotations and translations. Several candidate structures have been studied. A conclusive experimental observation of a Chern number of plus or minus four, however, remained elusive. Previous efforts had been hindered by two factors in particular. First, a prerequisite for realizing a maximal Chern number is the presence of spin-orbit coupling, and at least in some of the materials studied so far, that coupling is relatively weak, making it difficult to resolve the splittings of interest. Second, preparing clean and flat surfaces of the relevant crystals has been highly challenging, and as a consequence spectroscopic signatures tended to be washed out.
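    For readers who want the object behind the word: the Chern number of a band (or of a Fermi-surface sheet enclosing a band degeneracy) is the flux of the Berry curvature through a closed surface in momentum space, and it is this integer that theory bounds at a magnitude of four for chiral topological semimetals. The expression below is the standard textbook definition, not a formula taken from the paper.

```latex
% Chern number of band n as the Berry-curvature flux through a closed surface S in k-space:
C_n = \frac{1}{2\pi} \oint_{S} \boldsymbol{\Omega}_n(\mathbf{k}) \cdot \mathrm{d}\mathbf{S}
```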
    Schröter et al. have overcome both of these limitations by working with PdGa crystals. The material displays strong spin-orbit coupling, and well-established methods exist for producing immaculate surfaces. In addition, at the Advanced Resonant Spectroscopies (ADRESS) beamline of the Swiss Light Source at PSI, they had unique capabilities at their disposal for high-resolution ARPES experiments, allowing them to resolve the predicted tell-tale spectroscopic patterns. In combination with further measurements at the Diamond Light Source (UK) and with dedicated ab initio calculations, these data revealed unambiguous signatures in the electronic structure of PdGa, leaving no doubt that the maximal Chern number had been realized.
    A hand on the Chern number
    The team went one step further, beyond the observation of a maximal Chern number. They showed that the chiral nature of the PdGa crystals also offers a possibility to control the sign of that number. To demonstrate such control, they grew samples that were either left- or right-handed. When they then looked at the electronic structures of the two enantiomers, they found that the chirality of the crystals is reflected in the chirality of the electronic wave function. Taken together, this means that in chiral semimetals the handedness, which can be determined during crystal growth, can be used to control topological phenomena emerging from the behaviour of the electrons in the material. This sort of control opens a trove of new experiments. For example, novel effects can be expected to arise at the interface between different enantiomers, one with Chern number +4 and the other one with -4. And there are real prospects for applications, too. Chiral topological semimetals can host fascinating phenomena such as quantized photocurrents. Intriguingly, PdGa is known for its catalytic properties, inviting the question about the role of topological phenomena in such processes.
    Finally, the findings now obtained for PdGa emerge from electronic band properties that are shared by many other chiral compounds — meaning that the corner of the “unknown world where matter can assume strange states” into which Schröter and colleagues have now ventured is likely to have a lot more to offer.

    Story Source:
    Materials provided by Paul Scherrer Institute. Note: Content may be edited for style and length.