More stories

  • Tuning the energy gap: A novel approach for organic semiconductors

    Organic semiconductors have earned a reputation as energy-efficient materials for the organic light-emitting diodes (OLEDs) employed in large-area displays. In these and in other applications, such as solar cells, a key parameter is the energy gap between electronic states, which determines the wavelength of the light that is emitted or absorbed. Being able to adjust this energy gap continuously is desirable. For inorganic materials an appropriate method already exists: so-called blending, which engineers the band gap by substituting atoms in the material. This allows continuous tunability, as in aluminum gallium arsenide semiconductors, for example. Unfortunately, the approach does not transfer to organic semiconductors: their different physical characteristics and molecule-based construction make continuous band-gap tuning much more difficult.
    In their latest publication, however, scientists at the Center for Advancing Electronics Dresden (cfaed, TU Dresden) and the Cluster of Excellence “e-conversion” at TU Munich, together with partners from the University of Würzburg, HU Berlin, and Ulm University, have for the first time realized energy-gap engineering for organic semiconductors by blending.
    For inorganic semiconductors, the energy levels can be shifted towards one another by atomic substitutions, thus reducing the band gap (“band-gap engineering”). In contrast, blending organic materials can only shift the energy levels in concert, either up or down, a consequence of the strong Coulomb effects that can be exploited in organic materials; the gap itself remains unchanged. “It would be very interesting to also change the gap of organic materials by blending, to avoid the lengthy synthesis of new molecules,” says Prof. Karl Leo from TU Dresden.
    The researchers have now found an unconventional way: blending the material with mixtures of similar molecules of different sizes. “The key finding is that all molecules arrange in specific patterns that are allowed by their molecular shape and size,” explains Frank Ortmann, a professor at TU Munich and group leader at the Center for Advancing Electronics Dresden (cfaed, TU Dresden). “This induces the desired change in the material's dielectric constant and gap energy.”
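    To see why packing and dielectric constant matter for the gap, a rough textbook-style estimate (an illustrative model, not a formula from the paper) treats each charged molecule as sitting in a polarizable cavity, so the transport gap of the solid is the gas-phase gap reduced by twice the polarization energy:

      E_\text{gap} \approx E_\text{gap}^{(0)} - 2P(\varepsilon), \qquad P(\varepsilon) \approx \frac{e^2}{8\pi\varepsilon_0 R}\left(1 - \frac{1}{\varepsilon}\right)

    Here $E_\text{gap}^{(0)}$ is the gas-phase gap, $R$ is an effective cavity radius set by the molecular packing, and $\varepsilon$ is the dielectric constant. In this picture, blending molecules of different sizes changes the packing and hence $\varepsilon$, which changes $P$ and thereby narrows or widens the gap.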
    The group of Frank Ortmann was able to clarify the mechanism by simulating the structures of the blended films and their electronic and dielectric properties. A corresponding change in the molecular packing depending on the shape of the blended molecules was confirmed by X-ray scattering measurements, performed by the Organic Devices Group of Prof. Stefan Mannsfeld at cfaed. The core experimental and device work was done by Katrin Ortstein and her colleagues at the group of Prof. Karl Leo, TU Dresden.
    The results of this study have just been published in the journal Nature Materials. While this proves the feasibility of this type of energy-level engineering strategy, its use in optoelectronic devices remains to be explored.
    Story Source:
    Materials provided by Technische Universität Dresden. Note: Content may be edited for style and length.

  • Humans are ready to take advantage of benevolent AI

    Humans expect AI to be benevolent and trustworthy. Yet a new study reveals that, at the same time, humans are unwilling to cooperate and compromise with machines; they even exploit them.
    Picture yourself driving on a narrow road in the near future when suddenly another car emerges from a bend ahead. It is a self-driving car with no passengers inside. Will you push forth and assert your right of way, or give way to let it pass? At present, most of us behave kindly in such situations involving other humans. Will we show that same kindness towards autonomous vehicles?
    Using methods from behavioural game theory, an international team of researchers at LMU and the University of London has conducted large-scale online studies to see whether people would behave as cooperatively with artificial intelligence (AI) systems as they do with fellow humans.
    Cooperation holds a society together. It often requires us to compromise with others and to accept the risk that they will let us down. Traffic is a good example: we lose a bit of time when we let other people pass in front of us, and we are outraged when others fail to reciprocate our kindness. Will we do the same with machines?
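    To make the dilemma concrete, here is a minimal Python sketch of the kind of one-shot cooperation game behavioural game theorists use (the payoff numbers are invented for illustration; they are not the payoffs used in the study):

      # Hypothetical payoff matrix for a one-shot cooperation dilemma.
      # Each entry maps (my_move, their_move) -> my payoff.
      PAYOFF = {
          ("cooperate", "cooperate"): 3,  # mutual kindness, e.g. smooth traffic
          ("cooperate", "defect"):    0,  # I yield, the other exploits me
          ("defect",    "cooperate"): 5,  # I exploit the other's kindness
          ("defect",    "defect"):    1,  # both push through, near-collision
      }

      def best_response(their_move: str) -> str:
          """Return the move that maximizes my payoff against a fixed opponent move."""
          return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

      # If I expect the other party to cooperate, defecting pays best:
      # exactly the temptation to exploit a benevolent AI.
      assert best_response("cooperate") == "defect"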
    Exploiting the machine without guilt
    The study, published in the journal iScience, found that, upon first encounter, people extend the same level of trust toward AI as toward humans: most expect to meet someone who is ready to cooperate.

  • Research uncovers broadband gaps in US to help close digital divide

    High-speed internet access has gone from an amenity to a necessity for working and learning from home, and the COVID-19 pandemic has more clearly revealed the disadvantages for American households that lack a broadband connection.
    To tackle this problem, Michigan State University researchers have developed a new tool that smooths federal broadband access data over time to help pinpoint coverage gaps across the U.S. The research was published May 26 in the journal PLOS ONE.
    “Nearly 21% of students in urban areas are without at-home broadband, while 25% and 37% lack at-home broadband in suburban and rural areas,” said Elizabeth A. Mack, associate professor in the Department of Geography, Environment, and Spatial Sciences in the College of Social Science.
    “As more of our day-to-day activities continue to move online, including education, commerce and health care, it’s essential that we understand where gaps in digital infrastructure exist. This is especially important if we want to address disparities in access related to demographics, socioeconomic status, and educational attainment,” she said.
    When the U.S. Congress passed the Telecommunications Act of 1996, the goal was to encourage competition in the telecommunications industry while improving the quality of service and lowering customer prices. To determine the act's effectiveness, the Federal Communications Commission created a standardized form (Form 477) through which internet service providers are required to report, twice a year, where they provide service to residential and business customers.
    “To date, Form 477 data remains the best publicly available data source regarding broadband deployment,” said Scott Loveridge, a professor in the Department of Agricultural, Food, and Resource Economics (AFRE). “Unfortunately, there are a lot of nuances to these data which to this point have prevented us from conducting useful analyses over time.”
    One of these nuances is that the data collected from 2008 to 2018 span two census reporting periods, 2000 and 2010. This has made it difficult to analyze the data as a whole and to align it with census geographies, which shift with each census.
    Loveridge, Mack and John Mann, an assistant professor with the Center for Economic Analysis, worked with several other researchers at the University of Texas and Arizona State University to produce a new dataset that resolves some of these issues by bridging the breaks in the Form 477 data into a continuous timeline and aligning the data to the 2010 census.
    “We developed a procedure for using the data to produce an integrated broadband time series,” Mann said. “The team has labeled the dataset BITS, which stands for Broadband Integrated Time Series.”
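    As a hedged sketch of the kind of harmonization step such a pipeline needs (the column names, values, and crosswalk weights below are hypothetical; the actual BITS procedure is described in the paper), a census-geography crosswalk can be used to re-express Form 477-style records on a single 2010 geography:

      import pandas as pd

      # Hypothetical Form 477-style records keyed to 2000-era census tracts.
      form477 = pd.DataFrame({
          "tract_2000": ["A", "A", "B"],
          "period":     ["2008H1", "2008H2", "2008H1"],
          "providers":  [3, 4, 1],
      })

      # Hypothetical crosswalk: how each 2000 tract overlaps 2010 tracts,
      # with a weight for apportioning values across the new geography.
      crosswalk = pd.DataFrame({
          "tract_2000": ["A", "A", "B"],
          "tract_2010": ["A1", "A2", "B1"],
          "weight":     [0.6, 0.4, 1.0],
      })

      # Re-express the series on 2010 geography so all years line up.
      merged = form477.merge(crosswalk, on="tract_2000")
      merged["providers_2010"] = merged["providers"] * merged["weight"]
      bits_like = (merged.groupby(["tract_2010", "period"], as_index=False)
                         ["providers_2010"].sum())
      print(bits_like)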
    “We hope these (BITS) data will be a tool for diagnosing gaps in broadband availability, to help close the digital divide and enhance the participation of all people in online activities,” Mack said. “With shrinking public budgets and a need to pinpoint locations suffering from a chronic shortage of broadband, it is critical for policymakers to efficiently allocate the human, infrastructural and policy resources required to improve local conditions.”
    This work was funded in part by a grant from the U.S. Department of Agriculture to examine broadband availability and its impact on business in rural and tribal areas.
    Story Source:
    Materials provided by Michigan State University. Original written by Emilie Lorditch. Note: Content may be edited for style and length.

  • Cloud computing expands brain sciences

    People often think about human behavior in terms of what is happening in the present — reading a newspaper, driving a car, or catching a football. But other dimensions of behavior extend over weeks, months, and years.
    Examples include a child learning how to read, an athlete recovering from a concussion, or a person turning 50 and wondering where all the time has gone. These are not changes that people perceive on a day-to-day basis. They just suddenly realize they're older, healed, or have developed a new skill.
    “The field of neuroscience looks at the brain in multiple ways,” says Franco Pestilli, a neuroscientist at The University of Texas at Austin (UT Austin). “For example, we’re interested in how neurons compute and allow us to quickly react — it’s a fast response requiring visual attention and motor control. Understanding the brain needs big data to capture all dimensions of human behavior.”
    An expert in vision science, neuroinformatics, brain imaging, computational neuroscience, and data science, Pestilli has advanced the understanding of human cognition and brain networks over the last 15 years.
    Pestilli likes to compare the brain to the Internet: a powerful set of computers connected by cables, simultaneously keeping many windows open and programs running. If the computers are healthy but the cables are not, long-range communication between computers in different parts of the brain begins to fail. This in turn creates problems for our long-term behavior.
    Pestilli and his team are also interested in how biological computations change over longer time periods, such as how the brain changes as we lose our vision.

  • New twist on DNA data storage lets users preview stored files

    Researchers from North Carolina State University have turned a longstanding challenge in DNA data storage into a tool, using it to offer users previews of stored data files — such as thumbnail versions of image files.
    DNA data storage is an attractive technology because it has the potential to store a tremendous amount of data in a small package, it can store that data for a long time, and it does so in an energy-efficient way. However, until now, it wasn’t possible to preview the data in a file stored as DNA — if you wanted to know what a file was, you had to “open” the entire file.
    “The advantage to our technique is that it is more efficient in terms of time and money,” says Kyle Tomek, lead author of a paper on the work and a Ph.D. student at NC State. “If you are not sure which file has the data you want, you don’t have to sequence all of the DNA in all of the potential files. Instead, you can sequence much smaller portions of the DNA files to serve as previews.”
    Here’s a quick overview of how this works.
    Users “name” their data files by attaching sequences of DNA called primer-binding sequences to the ends of DNA strands that are storing information. To identify and extract a given file, most systems use polymerase chain reaction (PCR). Specifically, they use a small DNA primer that matches the corresponding primer-binding sequence to identify the DNA strands containing the file you want. The system then uses PCR to make lots of copies of the relevant DNA strands, then sequences the entire sample. Because the process makes numerous copies of the targeted DNA strands, the signal of the targeted strands is stronger than the rest of the sample, making it possible to identify the targeted DNA sequence and read the file.
    However, one challenge that DNA data storage researchers have grappled with is that if two or more files have similar file names, the PCR will inadvertently copy pieces of multiple data files. As a result, users have to give files very distinct names to avoid getting messy data.
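    A toy Python sketch of the addressing scheme described above (the sequences and the string-matching rule are invented for illustration; real systems rely on PCR chemistry, not string comparison) shows both the file lookup and the name-collision problem:

      # Toy model of primer-based file addressing in DNA data storage.
      # Each strand is (primer-binding site, payload); a file is "named"
      # by the primer that binds its strands. All sequences are invented.
      POOL = [
          ("ACGTACGT", "payload-1a"),  # file 1
          ("ACGTACGT", "payload-1b"),
          ("ACGTACCT", "payload-2a"),  # file 2: name differs by one base
      ]

      def amplify(primer: str, max_mismatches: int = 0) -> list[str]:
          """Select strands whose binding site matches the primer.
          Allowing mismatches mimics imperfect PCR specificity."""
          hits = []
          for site, payload in POOL:
              mismatches = sum(a != b for a, b in zip(primer, site))
              if mismatches <= max_mismatches:
                  hits.append(payload)
          return hits

      print(amplify("ACGTACGT"))                    # only file 1 is copied
      print(amplify("ACGTACGT", max_mismatches=1))  # similar names collide:
                                                    # file 2 gets copied too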

  • Researchers create quantum microscope that can see the impossible

    In a major scientific leap, University of Queensland researchers have created a quantum microscope that can reveal biological structures that would otherwise be impossible to see.
    This paves the way for applications in biotechnology, and could extend far beyond this into areas ranging from navigation to medical imaging.
    The microscope is powered by the science of quantum entanglement, an effect Einstein described as “spooky action at a distance.”
    Professor Warwick Bowen, from UQ’s Quantum Optics Lab and the ARC Centre of Excellence for Engineered Quantum Systems (EQUS), said it was the first entanglement-based sensor with performance beyond the best possible existing technology.
    “This breakthrough will spark all sorts of new technologies — from better navigation systems to better MRI machines, you name it,” Professor Bowen said.
    “Entanglement is thought to lie at the heart of a quantum revolution.”

  • Important contribution to spintronics has received little consideration until now

    The movement of electrons can have a significantly greater influence on spintronic effects than previously assumed. This discovery was made by an international team of researchers led by physicists from Martin Luther University Halle-Wittenberg (MLU). Until now, calculations of these effects have primarily taken the spin of the electrons into account. The study was published in the journal Physical Review Research and offers a new approach for developing spintronic components.
    Many technical devices are based on conventional semiconductor electronics, in which charge currents are used to store and process information. However, this electric current generates heat, and energy is lost. To get around this problem, spintronics exploits a fundamental property of electrons known as spin. “This is an intrinsic angular momentum, which can be imagined as a rotational movement of the electron around its own axis,” explains Dr Annika Johansson, a physicist at MLU. The spin is linked to a magnetic moment that, in addition to the charge of the electrons, could be used in a new generation of fast and energy-efficient components.
    Achieving this requires an efficient conversion between charge and spin currents. This conversion is made possible by the Edelstein effect: when an electric field is applied, a charge current is generated in an originally non-magnetic material; in addition, the electron spins align and the material becomes magnetic. “Previous papers on the Edelstein effect primarily focused on how electron spin contributes to magnetisation, but electrons can also carry an orbital moment that contributes to it as well. While the spin is the intrinsic rotation of the electron, the orbital moment is its motion around the nucleus of the atom,” says Johansson. This is similar to the Earth, which rotates both on its own axis and around the sun. Like spin, the orbital moment generates a magnetic moment.
    In this latest study, the researchers used simulations to investigate the interface between two oxide materials commonly used in spintronics. “Although both materials are insulators, a metallic electron gas is present at their interface, which is known for its efficient charge-to-spin conversion,” says Johansson. The team also factored the orbital moment into the calculation of the Edelstein effect and found that its contribution is at least one order of magnitude greater than that of the spin. These findings could help to increase the efficiency of spintronic components.
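    Schematically (a generic linear-response form, not a formula quoted from the paper), the Edelstein magnetization induced by an electric field $\mathbf{E}$ has both a spin and an orbital part:

      \delta\mathbf{m} = \left(\chi^{S} + \chi^{L}\right)\mathbf{E}, \qquad \left|\chi^{L}\right| \gtrsim 10\,\left|\chi^{S}\right|

    where $\chi^{S}$ and $\chi^{L}$ are the spin and orbital Edelstein susceptibilities. The inequality expresses the study's finding that, at this oxide interface, the orbital contribution is at least an order of magnitude larger than the spin contribution.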
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.

  • Machine learning speeds up simulations in material science

    Research, development, and production of novel materials depend heavily on the availability of fast yet accurate simulation methods. Machine learning, in which artificial intelligence (AI) autonomously acquires and applies new knowledge, will soon enable researchers to develop complex material systems in a purely virtual environment. How does this work, and which applications will benefit? In an article published in the journal Nature Materials, a researcher from the Karlsruhe Institute of Technology (KIT) and his colleagues from Göttingen and Toronto explain it all.
    Digitization and virtualization are becoming increasingly important in a wide range of scientific disciplines, among them materials science, where progress depends heavily on fast yet accurate simulation methods. This, in turn, benefits a wide range of applications: from efficient energy storage systems, such as those indispensable for the use of renewable energies, to new medicines, whose development requires an understanding of complex biological processes. AI and machine learning methods can take simulations in materials science to the next level. “Compared to conventional simulation methods based on classical or quantum mechanical calculations, the use of neural networks specifically tailored to material simulations enables us to achieve a significant speed advantage,” explains physicist and AI expert Professor Pascal Friederich, head of the AiMat (Artificial Intelligence for Materials Sciences) research group at KIT's Institute of Theoretical Informatics (ITI). “With faster simulation systems, scientists will be able to develop larger and more complex material systems in a purely virtual environment, and to understand and optimize them down to the atomic level.”
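    As a deliberately minimal sketch of the core idea (a toy model, not one of the architectures discussed in the article), a machine-learned potential maps an atomic configuration to an energy, and the forces follow as the negative gradient:

      import torch

      # Minimal neural-network potential: a learned pair energy summed over
      # all interatomic distances. Architecture and descriptor are toy choices.
      model = torch.nn.Sequential(
          torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
      )

      def energy(positions: torch.Tensor) -> torch.Tensor:
          """Total energy from a learned pair term over all atom pairs."""
          diffs = positions.unsqueeze(0) - positions.unsqueeze(1)  # (N, N, 3)
          dists = diffs.norm(dim=-1)                               # (N, N)
          iu = torch.triu_indices(len(positions), len(positions), offset=1)
          pair_d = dists[iu[0], iu[1]].unsqueeze(-1)               # (pairs, 1)
          return model(pair_d).sum()

      pos = torch.randn(4, 3, requires_grad=True)  # 4 atoms in 3D
      E = energy(pos)
      forces = -torch.autograd.grad(E, pos)[0]     # F = -dE/dr via autograd
      print(E.item(), forces.shape)

    Training such a model against quantum mechanical reference energies is what yields the speed advantage: evaluating the network costs far less than repeating the reference calculation.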
    High Precision from the Atom to the Material
    In the Nature Materials article, Pascal Friederich, who is also associate group leader of the Nanomaterials by Information-Guided Design division at KIT's Institute of Nanotechnology (INT), presents, together with researchers from the University of Göttingen and the University of Toronto, an overview of the basic principles of machine learning used for simulations in materials science, including the data acquisition process and active learning methods. Machine learning algorithms enable artificial intelligence not only to process input data, but also to find patterns and correlations in large data sets, learn from them, and make autonomous predictions and decisions. For simulations in materials science, it is important to achieve high accuracy over different time and size scales, ranging from the atom to the material, while limiting computational costs. In their article, the scientists also discuss various current applications, such as small organic molecules and large biomolecules, structurally disordered solid, liquid, and gaseous materials, and complex crystalline systems, for example metal-organic frameworks that can be used for gas storage or separation, for sensors, or for catalysts.
    Even More Speed with Hybrid Methods
    To further extend the possibilities of material simulations in the future, the researchers from Karlsruhe, Göttingen, and Toronto suggest the development of hybrid methods: these combine machine learning (ML) and molecular mechanics (MM) methods. MM simulations use so-called force fields in order to calculate the forces acting on each individual particle and thus predict motions. As the potentials of the ML and MM methods are quite similar, a tight integration with variable transition areas is possible. These hybrid methods could significantly accelerate the simulation of large biomolecules or enzymatic reactions in the future, for example.
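    One way to picture such a hybrid scheme (a schematic of the switching idea with invented potentials, not any production force field): a smooth switching function blends the ML and MM energies across a variable transition region.

      import numpy as np

      def e_mm(r: np.ndarray) -> np.ndarray:
          """Toy MM force-field term: Lennard-Jones pair energy."""
          return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

      def e_ml(r: np.ndarray) -> np.ndarray:
          """Stand-in for a machine-learned pair energy (invented form)."""
          return np.exp(-r) * np.cos(3.0 * r)

      def switch(r: np.ndarray, r_on: float = 2.0, r_off: float = 3.0) -> np.ndarray:
          """Smoothly go from 1 (use ML) to 0 (use MM) across [r_on, r_off]."""
          t = np.clip((r - r_on) / (r_off - r_on), 0.0, 1.0)
          return 1.0 - t * t * (3.0 - 2.0 * t)  # smoothstep, continuously differentiable

      def e_hybrid(r: np.ndarray) -> np.ndarray:
          """ML potential near the region of interest, MM potential far away."""
          s = switch(r)
          return s * e_ml(r) + (1.0 - s) * e_mm(r)

      print(e_hybrid(np.linspace(1.0, 4.0, 7)))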
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.