More stories

  •

    World's smallest atom-memory unit created

    Faster, smaller, smarter and more energy-efficient chips for everything from consumer electronics to big data to brain-inspired computing could soon be on the way after engineers at The University of Texas at Austin created the smallest memory device yet. And in the process, they figured out the physics that unlocks dense memory storage capabilities for these tiny devices.
    The research, published recently in Nature Nanotechnology, builds on a discovery from two years ago, when the researchers created what was then the thinnest memory storage device. In this new work, the researchers reduced the size even further, shrinking the cross-sectional area down to just a single square nanometer.
    Getting a handle on the physics that packs dense memory storage into these devices enabled the researchers to make them much smaller. Defects, or holes in the material, provide the key to unlocking this high-density memory storage capability.
    “When a single additional metal atom goes into that nanoscale hole and fills it, it confers some of its conductivity into the material, and this leads to a change or memory effect,” said Deji Akinwande, professor in the Department of Electrical and Computer Engineering.
    Though they used molybdenum disulfide — also known as MoS2 — as the primary nanomaterial in their study, the researchers think the discovery could apply to hundreds of related atomically thin materials.
    The race to make smaller chips and components is all about power and convenience. With smaller processors, you can make more compact computers and phones. But shrinking down chips also decreases their energy demands and increases capacity, which means faster, smarter devices that take less power to operate.
    “The results obtained in this work pave the way for developing future generation applications that are of interest to the Department of Defense, such as ultra-dense storage, neuromorphic computing systems, radio-frequency communication systems and more,” said Pani Varanasi, program manager for the U.S. Army Research Office, which funded the research.
    The original device — dubbed “atomristor” by the research team — was at the time the thinnest memory storage device ever recorded, with a single atomic layer of thickness. But shrinking a memory device is not just about making it thinner but also building it with a smaller cross-sectional area.
    “The scientific holy grail for scaling is going down to a level where a single atom controls the memory function, and this is what we accomplished in the new study,” Akinwande said.
    Akinwande’s device falls under the category of memristors, a popular area of memory research centered on electrical components that can modify the resistance between their two terminals without the need for a third terminal in the middle, known as the gate. That means they can be smaller than today’s memory devices and boast more storage capacity.
    This version of the memristor — developed using the advanced facilities at the Oak Ridge National Laboratory — promises capacity of about 25 terabits per square centimeter. That is 100 times higher memory density per layer compared with commercially available flash memory devices.
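A quick back-of-envelope calculation (our own, using a hypothetical cell footprint rather than the paper's layout) shows how a roughly nanometer-scale cell yields tens of terabits per square centimeter:

```python
# Back-of-envelope check of the quoted memory density, assuming each
# memristor cell occupies a few square nanometers including overhead.
NM2_PER_CM2 = (1e7) ** 2          # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2

def density_tbit_per_cm2(cell_area_nm2):
    """Bits per cm^2, expressed in terabits, for a given cell footprint."""
    return NM2_PER_CM2 / cell_area_nm2 / 1e12

# A 1 nm^2 active cross-section plus ~3 nm^2 of hypothetical wiring and
# spacing overhead lands on the order of the quoted 25 Tbit/cm^2.
print(density_tbit_per_cm2(4))   # → 25.0
```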

    Story Source:
    Materials provided by University of Texas at Austin. Note: Content may be edited for style and length.

  •

    Direct visualization of quantum dots reveals shape of quantum wave function

    Trapping and controlling electrons in bilayer graphene quantum dots yields a promising platform for quantum information technologies. Researchers at UC Santa Cruz have now achieved the first direct visualization of quantum dots in bilayer graphene, revealing the shape of the quantum wave function of the trapped electrons.
    The results, published November 23 in Nano Letters, provide important fundamental knowledge needed to develop quantum information technologies based on bilayer graphene quantum dots.
    “There has been a lot of work to develop this system for quantum information science, but we’ve been missing an understanding of what the electrons look like in these quantum dots,” said corresponding author Jairo Velasco Jr., assistant professor of physics at UC Santa Cruz.
    While conventional digital technologies encode information in bits represented as either 0 or 1, a quantum bit, or qubit, can represent both states at the same time due to quantum superposition. In theory, technologies based on qubits will enable a massive increase in computing speed and capacity for certain types of calculations.
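The superposition idea can be made concrete in a few lines; this minimal sketch (not tied to any particular hardware platform) represents a qubit as a normalized vector of two complex amplitudes:

```python
import numpy as np

# A single qubit state is a normalized 2-vector of complex amplitudes
# over the basis states |0> and |1>. An equal superposition:
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes,
# so this state yields 0 or 1 with equal probability.
probs = np.abs(psi) ** 2
print(probs)  # → [0.5 0.5]
```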
    A variety of systems, based on materials ranging from diamond to gallium arsenide, are being explored as platforms for creating and manipulating qubits. Bilayer graphene (two layers of graphene, which is a two-dimensional arrangement of carbon atoms in a honeycomb lattice) is an attractive material because it is easy to produce and work with, and quantum dots in bilayer graphene have desirable properties.
    “These quantum dots are an emergent and promising platform for quantum information technology because of their suppressed spin decoherence, controllable quantum degrees of freedom, and tunability with external control voltages,” Velasco said.

    Understanding the nature of the quantum dot wave function in bilayer graphene is important because this basic property determines several relevant features for quantum information processing, such as the electron energy spectrum, the interactions between electrons, and the coupling of electrons to their environment.
    Velasco’s team used a method he had developed previously to create quantum dots in monolayer graphene using a scanning tunneling microscope (STM). With the graphene resting on an insulating hexagonal boron nitride crystal, a large voltage applied with the STM tip creates charges in the boron nitride that serve to electrostatically confine electrons in the bilayer graphene.
    “The electric field creates a corral, like an invisible electric fence, that traps the electrons in the quantum dot,” Velasco explained.
    The researchers then used the scanning tunneling microscope to image the electronic states inside and outside of the corral. In contrast to theoretical predictions, the resulting images showed a broken rotational symmetry, with three peaks instead of the expected concentric rings.
    “We see circularly symmetric rings in monolayer graphene, but in bilayer graphene the quantum dot states have a three-fold symmetry,” Velasco said. “The peaks represent sites of high amplitude in the wave function. Electrons have a dual wave-particle nature, and we are visualizing the wave properties of the electron in the quantum dot.”
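For illustration, a toy wave-function amplitude with the three-fold symmetry described above can be written down and checked numerically (this is our own illustrative function, not the measured bilayer graphene state):

```python
import numpy as np

# A toy probability amplitude with three-fold rotational symmetry,
# loosely mimicking the three-peaked quantum dot states (illustrative only).
def amplitude(r, theta):
    return np.exp(-r**2) * np.cos(3 * theta) ** 2

# Rotating by 120 degrees (2*pi/3) leaves the pattern unchanged.
theta = np.linspace(0, 2 * np.pi, 7)
print(np.allclose(amplitude(1.0, theta),
                  amplitude(1.0, theta + 2 * np.pi / 3)))  # → True
```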
    This work provides crucial information, such as the energy spectrum of the electrons, needed to develop quantum devices based on this system. “It is advancing the fundamental understanding of the system and its potential for quantum information technologies,” Velasco said. “It’s a missing piece of the puzzle, and taken together with the work of others, I think we’re moving toward making this a useful system.”
    In addition to Velasco, the authors of the paper include co-first authors Zhehao Ge, Frederic Joucken, and Eberth Quezada-Lopez at UC Santa Cruz, along with coauthors at the Federal University of Ceara, Brazil, the National Institute for Materials Science in Japan, University of Minnesota, and UCSC’s Baskin School of Engineering. This work was funded by the National Science Foundation and the Army Research Office.

  •

    Misinformation or artifact: A new way to think about machine learning

    Deep neural networks, multilayered systems built to process images and other data through the use of mathematical modeling, are a cornerstone of artificial intelligence.
    They are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless — misidentifying one animal as another — to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed.
    A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.
    As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call “adversarial examples”: cases in which a deep neural network system misjudges images or other data when confronted with information outside the training inputs used to build the network. They are rare and are called “adversarial” because they are often created or discovered by another machine learning network, a sort of brinksmanship in the machine learning world between ever more sophisticated methods to create adversarial examples and ever more sophisticated methods to detect and avoid them.
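One widely used recipe for constructing adversarial examples, the fast gradient sign method, gives a feel for how a small, targeted perturbation can flip a classifier's decision. The sketch below is our illustration on a toy logistic model, not an example from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Fast gradient sign method on a logistic classifier p(y=1)=sigmoid(w.x).
    y is +1/-1; the perturbation nudges x in the direction that increases
    the classification loss."""
    margin = y * np.dot(w, x)
    grad_x = -y * sigmoid(-margin) * w       # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])            # w.x = 1.0, classified as +1
x_adv = fgsm(x, y=1, w=w, eps=0.6)

# The label flips: w.x > 0 but w.x_adv < 0, despite a small perturbation.
print(np.dot(w, x) > 0, np.dot(w, x_adv) < 0)  # → True True
```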
    “Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are,” Buckner said.
    In other words, the misfire could be caused by the interaction between what the network is asked to process and the actual patterns involved. That’s not quite the same thing as being completely mistaken.

    “Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts,” Buckner wrote. “… Thus, there are presently both costs in simply discarding these patterns and dangers in using them naively.”
    Adversarial events that cause these machine learning systems to make mistakes aren’t necessarily caused by intentional malfeasance, but that’s where the highest risk comes in.
    “It means malicious actors could fool systems that rely on an otherwise reliable network,” Buckner said. “That has security applications.”
    A security system based upon facial recognition technology could be hacked to allow a breach, for example, or decals could be placed on traffic signs that cause self-driving cars to misinterpret the sign, even though they appear harmless to the human observer.
    Earlier research has found that, contrary to prior assumptions, some adversarial examples occur naturally: instances in which a machine learning system misinterprets data through an unanticipated interaction rather than through an error in the data. They are rare and can be discovered only through the use of artificial intelligence.

    But they are real, and Buckner said that suggests the need to rethink how researchers approach the anomalies, or artifacts.
    These artifacts haven’t been well understood; Buckner offers the analogy of a lens flare in a photograph — a phenomenon that isn’t caused by a defect in the camera lens but is instead produced by the interaction of light with the camera.
    The lens flare potentially offers useful information — the location of the sun, for example — if you know how to interpret it. That, he said, raises the question of whether adversarial events in machine learning that are caused by an artifact also have useful information to offer.
    Equally important, Buckner said, is that this new way of thinking about the way in which artifacts can affect deep neural networks suggests a misreading by the network shouldn’t be automatically considered evidence that deep learning isn’t valid.
    “Some of these adversarial events could be artifacts,” he said. “We have to know what these artifacts are so we can know how reliable the networks are.”

    Story Source:
    Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

  •

    Algorithm accurately predicts COVID-19 patient outcomes

    With communities across the nation experiencing a wave of COVID-19 infections, clinicians need effective tools that will enable them to aggressively and accurately treat each patient based on their specific disease presentation, health history, and medical risks.
    In research recently published online in Medical Image Analysis, a team of engineers demonstrated how a new algorithm they developed was able to successfully predict whether or not a COVID-19 patient would need ICU intervention. This artificial intelligence-based approach could be a valuable tool in determining a proper course of treatment for individual patients.
    The research team, led by Pingkun Yan, an assistant professor of biomedical engineering at Rensselaer Polytechnic Institute, developed this method by combining chest computed tomography (CT) images that assess the severity of a patient’s lung infection with non-imaging data, such as demographic information, vital signs, and laboratory blood test results. By combining these data points, the algorithm is able to predict patient outcomes, specifically whether or not a patient will need ICU intervention.
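The overall fusion idea can be sketched in a few lines. The features, weights, and bias below are hypothetical stand-ins for illustration, not the paper's trained model:

```python
import numpy as np

# Hedged sketch of the general approach: concatenate an imaging-derived
# severity score with non-imaging features (vitals, labs), then feed the
# fused vector to a simple classifier predicting ICU need.
def fuse(image_score, vitals, labs):
    return np.concatenate(([image_score], vitals, labs))

def predict_icu(features, w, b):
    """Logistic score in [0, 1]; higher means ICU intervention more likely."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, features) + b)))

x = fuse(image_score=0.8, vitals=np.array([0.6, 0.4]), labs=np.array([0.7]))
w = np.array([2.0, 1.0, 0.5, 1.5])   # hypothetical learned weights
print(round(predict_icu(x, w, b=-2.0), 3))  # → 0.81
```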
    The algorithm was tested on datasets collected from a total of 295 patients from three different hospitals — one in the United States, one in Iran, and one in Italy. Researchers were able to compare the algorithm’s predictions to what kind of treatment a patient actually ended up needing.
    “As a practitioner of AI, I do believe in its power,” said Yan, who is a member of the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer. “It really enables us to analyze a large quantity of data and also extract the features that may not be that obvious to the human eye.”
    This development is the result of research supported by a recent National Institutes of Health grant, which was awarded to provide solutions during this worldwide pandemic. As the team continues its work, Yan said, researchers will integrate their new algorithm with another that Yan had previously developed to assess a patient’s risk of cardiovascular disease using chest CT scans.
    “We know that a key factor in COVID mortality is whether a patient has underlying conditions and heart disease is a significant comorbidity,” Yan said. “How much this contributes to their disease progress is, right now, fairly subjective. So, we have to have a quantification of their heart condition and then determine how we factor that into this prediction.”
    “This critical work, led by Professor Yan, is offering an actionable solution for clinicians who are in the middle of a worldwide pandemic,” said Deepak Vashishth, the director of CBIS. “This project highlights the capabilities of Rensselaer expertise in bioimaging combined with important partnerships with medical institutions.”
    Yan is joined at Rensselaer by Ge Wang, an endowed chair professor of biomedical engineering and member of CBIS, as well as graduate students Hanqing Chao, Xi Fang, and Jiajin Zhang. The Rensselaer team is working in collaboration with Massachusetts General Hospital. When this work is complete, Yan said, the team hopes to translate its algorithm into a method that doctors at Massachusetts General can use to assess their patients.
    “We actually are seeing that the impact could go well beyond COVID diseases. For example, patients with other lung diseases,” Yan said. “Assessing their heart disease condition, together with their lung condition, could better predict their mortality risk so that we can help them to manage their condition.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Torie Wells. Note: Content may be edited for style and length.

  •

    After more than a decade, ChIP-seq may be quantitative after all

    For more than a decade, scientists studying epigenetics have used a powerful method called ChIP-seq to map changes in proteins and other critical regulatory factors across the genome. While ChIP-seq provides invaluable insights into the underpinnings of health and disease, it also faces a frustrating challenge: its results are often viewed as qualitative rather than quantitative, making interpretation difficult.
    But, it turns out, ChIP-seq may have been quantitative all along, according to a recent report selected as an Editors’ Pick by the Journal of Biological Chemistry and featured on the journal’s cover.
    “ChIP-seq is the backbone of epigenetics research. Our findings challenge the belief that additional steps are required to make it quantitative,” said Brad Dickson, Ph.D., a staff scientist at Van Andel Institute and the study’s corresponding author. “Our new approach provides a way to quantify results, thereby making ChIP-seq more precise, while leaving standard protocols untouched.”
    Previous attempts to quantify ChIP-seq results have led to additional steps being added to the protocol, including the use of “spike-ins,” which are additives designed to normalize ChIP-seq results and reveal histone changes that otherwise may be obscured. These extra steps increase the complexity of experiments while also adding variables that could interfere with reproducibility. Importantly, the study also identifies a sensitivity issue in spike-in normalization that has not previously been discussed.
    Using a predictive physical model, Dickson and his colleagues developed a novel approach called the sans-spike-in method for Quantitative ChIP-sequencing, or siQ-ChIP. It allows researchers to follow the standard ChIP-seq protocol, eliminating the need for spike-ins, and also outlines a set of common measurements that should be reported for all ChIP-seq experiments to ensure reproducibility as well as quantification.
    By leveraging the binding reaction at the immunoprecipitation step, siQ-ChIP defines a physical scale for sequencing results that allows comparison between experiments. The quantitative scale is based on the binding isotherm of the immunoprecipitation products.
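As a rough illustration of a binding isotherm, the classic Langmuir form relates occupied antibody binding sites to ligand concentration (the specific isotherm and parameters used by siQ-ChIP may differ):

```python
# Langmuir binding isotherm: fraction of binding sites occupied at
# free-ligand concentration c, given dissociation constant kd (same units).
def langmuir_bound_fraction(c, kd):
    return c / (kd + c)

# At c = Kd, exactly half the sites are occupied.
print(langmuir_bound_fraction(10.0, 10.0))  # → 0.5
```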

    Story Source:
    Materials provided by Van Andel Research Institute. Note: Content may be edited for style and length.

  •

    Biophysics: Geometry supersedes simulations

    Physicists at Ludwig-Maximilians-Universitaet (LMU) in Munich have introduced a new method that allows biological pattern-forming systems to be systematically characterized with the aid of mathematical analysis. The trick lies in the use of geometry to characterize the dynamics.
    Many vital processes that take place in biological cells depend on the formation of self-organizing molecular patterns. For example, defined spatial distributions of specific proteins regulate cell division, cell migration and cell growth. These patterns result from the concerted interactions of many individual macromolecules. Like the collective motions of bird flocks, these processes do not need a central coordinator. Hitherto, mathematical modelling of protein pattern formation in cells has been carried out largely by means of elaborate computer-based simulations. Now, LMU physicists led by Professor Erwin Frey report the development of a new method which provides for the systematic mathematical analysis of pattern formation processes and uncovers their underlying physical principles. The new approach is described and validated in a paper that appears in the journal Physical Review X.
    The study focuses on what are called ‘mass-conserving’ systems, in which the interactions affect the states of the particles involved, but do not alter the total number of particles present in the system. This condition is fulfilled in systems in which proteins can switch between different conformational states that allow them to bind to a cell membrane or to form different multicomponent complexes, for example. Owing to the complexity of the nonlinear dynamics in these systems, pattern formation has so far been studied with the aid of time-consuming numerical simulations. “Now we can understand the salient features of pattern formation independently of simulations using simple calculations and geometrical constructions,” explains Fridtjof Brauns, lead author of the new paper. “The theory that we present in this report essentially provides a bridge between the mathematical models and the collective behavior of the system’s components.”
    The key insight that led to the theory was the recognition that alterations in the local number density of particles will also shift the positions of local chemical equilibria. These shifts in turn generate concentration gradients that drive the diffusive motions of the particles. The authors capture this dynamic interplay with the aid of geometrical structures that characterize the global dynamics in a multidimensional ‘phase space’. The collective properties of systems can be directly derived from the topological relationships between these geometric constructs, because these objects have concrete physical meanings — as representations of the trajectories of shifting chemical equilibria, for instance. “This is the reason why our geometrical description allows us to understand why the patterns we observe in cells arise. In other words, they reveal the physical mechanisms that determine the interplay between the molecular species involved,” says Frey. “Furthermore, the fundamental elements of our theory can be generalized to deal with a wide range of systems, which in turn paves the way to a comprehensive theoretical framework for self-organizing systems.”
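The defining mass-conserving property can be demonstrated with a toy two-component reaction-diffusion model (our own minimal example, not one of the systems analyzed in the paper): local attachment and detachment plus diffusion redistribute particles but never change their total number.

```python
import numpy as np

# Toy 1D mass-conserving reaction-diffusion step: a membrane-bound
# density m and a cytosolic density c exchange particles locally, and
# each species diffuses (periodic boundaries via np.roll).
def step(m, c, dt=0.01, dx=1.0, Dm=0.1, Dc=1.0, kon=1.0, koff=0.5):
    lap = lambda u: (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    exchange = kon * c - koff * m        # attachment minus detachment
    m2 = m + dt * (Dm * lap(m) + exchange)
    c2 = c + dt * (Dc * lap(c) - exchange)
    return m2, c2

rng = np.random.default_rng(0)
m, c = rng.random(64), rng.random(64)
total0 = m.sum() + c.sum()
for _ in range(1000):
    m, c = step(m, c)

# The spatial pattern evolves, but the total particle number does not.
print(np.isclose(m.sum() + c.sum(), total0))  # → True
```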

    Story Source:
    Materials provided by Ludwig-Maximilians-Universität München. Note: Content may be edited for style and length.

  •

    A biochemical random number

    True random numbers are required in applications as diverse as slot machines and data encryption. These numbers need to be truly random, such that they cannot be predicted even by someone with detailed knowledge of the method used to generate them.
    As a rule, they are generated using physical methods. For instance, thanks to the tiniest high-frequency electron movements, the electrical resistance of a wire is not constant but instead fluctuates slightly in an unpredictable way. That means measurements of this background noise can be used to generate true random numbers.
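As a toy software analogue of this idea, one can derive bits by comparing successive noisy readings. Here `random.random` merely stands in for the physical noise source; a real hardware generator measures actual electrical fluctuations:

```python
import random

# Toy illustration of noise-based bit extraction (not a real hardware
# TRNG): sample a fluctuating "resistance" twice and emit one bit per
# pair, depending on which reading was larger.
def noise_bits(n, noise_source=random.random):
    bits = []
    for _ in range(n):
        a, b = noise_source(), noise_source()
        bits.append(1 if a > b else 0)
    return bits

bits = noise_bits(8)
print(len(bits))  # → 8
```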
    Now, for the first time, a research team led by Robert Grass, Professor at the Institute of Chemical and Bioengineering, has described a non-physical method of generating such numbers: one that uses biochemical signals and actually works in practice. In the past, the ideas put forward by other scientists for generating random numbers by chemical means tended to be largely theoretical.
    DNA synthesis with random building blocks
    For this new approach, the ETH Zurich researchers applied DNA synthesis, an established chemical method that has been in routine use for many years. It is traditionally used to produce a precisely defined DNA sequence. In this case, however, the research team built DNA molecules with 64 building-block positions, in which one of the four DNA bases A, C, G and T was randomly located at each position. The scientists achieved this by using a mixture of the four building blocks, rather than just one, at every step of the synthesis.
    As a result, a relatively simple synthesis produced a combination of approximately three quadrillion individual molecules. The scientists subsequently used an effective method to determine the DNA sequence of five million of these molecules. This resulted in 12 megabytes of data, which the researchers stored as zeros and ones on a computer.
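The conversion from sequenced bases to stored bits can be pictured with a simple two-bits-per-base mapping (a hypothetical encoding for illustration; the paper does not specify this exact mapping):

```python
# Each position holds one of four bases, i.e. two bits of raw entropy.
# A hypothetical base-to-bits encoding, used here only for illustration:
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bits(seq):
    return "".join(BASE_TO_BITS[base] for base in seq)

print(dna_to_bits("ACGT"))  # → 00011011
```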
    Huge quantities of randomness in a small space
    However, an analysis showed that the distribution of the four building blocks A, C, G and T was not completely even. Either the intricacies of nature or the synthesis method deployed led to the bases G and T being integrated more frequently in the molecules than A and C. Nonetheless, the scientists were able to correct this bias with a simple algorithm, thereby generating perfect random numbers.
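The article does not detail the researchers' algorithm, but one classic way to remove a fixed bias from a bit stream is the von Neumann extractor: read bits in pairs, emit the first bit when the pair differs, and discard equal pairs. The output is unbiased for any fixed per-bit bias, at the cost of discarding some input:

```python
# Von Neumann extractor: "01" → 0, "10" → 1, "00"/"11" → discarded.
# For independent bits with fixed bias p, P(01) = P(10) = p*(1-p),
# so the emitted bits are exactly unbiased.
def von_neumann(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 1, 1, 0, 0, 0]))  # → [0, 1]
```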
    The main aim of ETH Professor Grass and his team was to show that random occurrences in chemical reactions can be exploited to generate perfect random numbers. Translating the finding into a direct application was not a prime concern at first. “Compared with other methods, however, ours has the advantage of being able to generate huge quantities of randomness that can be stored in an extremely small space, a single test tube,” Grass says. “We can read out the information and reinterpret it in digital form at a later date. This is impossible with the previous methods.”

    Story Source:
    Materials provided by ETH Zurich. Original written by Fabio Bergamin. Note: Content may be edited for style and length.

  •

    Light-controlled nanomachine controls catalysis

    The drive toward miniaturisation has produced a series of synthetic molecular motors that are powered by a range of energy sources and can carry out various movements. A research group at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has now managed to control a catalysis reaction using a light-controlled motor. This takes us one step closer to realising the vision of a nano factory in which combinations of various machines work together, as is the case in biological cells. The results have been published in the Journal of the American Chemical Society.
    Laws of mechanics cannot always be applied
    By definition, a motor converts energy into a specific type of kinetic energy. On the molecular level, for example, the protein myosin produces muscle contractions using chemical energy. Such nanomachines can now be produced synthetically. However, the molecules used are much smaller than proteins and significantly less complex.
    ‘The laws of mechanical physics cannot simply be applied to the molecular level,’ says Prof. Dr. Henry Dube, Chair of Organic Chemistry I at FAU. Inertia, for example, does not exist at this level, he explains. Triggered by Brownian motion, particles are constantly in motion. ‘Activating a rotating motor is not enough, you need to incorporate a type of ratchet mechanism that prevents it from turning backwards,’ he explains.
    In 2015 while at LMU in Munich, Prof. Dube and his team developed a particularly fast molecular motor driven by visible light. In 2018, they developed the first molecular motor that is driven solely by light and functions regardless of the ambient temperature. A year later, they developed a variant capable not only of rotation but also of performing a figure of eight motion. All motors are based on the hemithioindigo molecule, an asymmetric variant of the naturally occurring dye indigo where a sulphur atom takes the place of the nitrogen atom. One part of the molecule rotates in several steps in the opposite direction to the other part of the molecule. The energy-driven steps are triggered by visible light and modify the molecules so that reverse reactions are blocked.
    Standard catalysts in use
    After coming to FAU, Henry Dube used the rotating motor developed in 2015 to control a separate chemical process for the first time. It moves in four steps around the carbon double bond of the hemithioindigo. Two of the four steps triggered by a photo reaction can be used to control a catalysis reaction. ‘Green light generates a molecular structure that binds a catalyst to the hemithioindigo and blue light releases the catalyst,’ explains the chemist.
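The light-controlled binding logic can be pictured as a tiny state machine (an illustration of the control scheme only, not a chemical simulation):

```python
# Toy state machine for the light-controlled catalyst binding described
# above: green light switches the motor into its catalyst-binding
# structure, blue light releases the catalyst again.
class LightControlledMotor:
    def __init__(self):
        self.catalyst_bound = False

    def illuminate(self, color):
        if color == "green":
            self.catalyst_bound = True     # structure binds the catalyst
        elif color == "blue":
            self.catalyst_bound = False    # structure releases it

motor = LightControlledMotor()
motor.illuminate("green")
print(motor.catalyst_bound)   # → True
motor.illuminate("blue")
print(motor.catalyst_bound)   # → False
```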
    A standard catalyst is used that does not have any metal atoms. Using electrostatic forces, the catalyst docks via a hydrogen bond onto an oxygen atom in the ‘motor molecule’. All catalysts that use a hydrogen bond could be used, in principle. ‘The great advantage of hemithioindigo is that its innate structure has a bonding mechanism for catalysts,’ explains Prof. Dube. It would otherwise have to be added using chemical synthesis.
    The rotation of the hemithioindigo motor is controlled by visible light. At the same time, the system allows the targeted release and bonding of a catalyst that accelerates or decelerates desired chemical reactions. ‘This project is an important step towards integrating molecular motors in chemical processes simply and in a variety of ways,’ says Prof. Dube. ‘This will let us synthesise complex medication at a high level of precision using molecular machines like a production line in future.’

    Story Source:
    Materials provided by University of Erlangen-Nuremberg. Note: Content may be edited for style and length.