More stories

  • Surprise result for solid state physicists hints at unusual electron behavior

    While studying the behavior of electrons in iron-based superconducting materials, researchers at the University of Tokyo observed a strange signal relating to the way electrons are arranged. The signal implies a new arrangement of electrons the researchers call a nematicity wave, and they hope to collaborate with theoretical physicists to better understand it. The nematicity wave could help researchers understand the way electrons interact with each other in superconductors.
    A long-standing dream of solid state physicists is to fully understand the phenomenon of superconductivity — essentially electronic conduction without the resistance that creates heat and drains power. It would usher in a whole new world of incredibly efficient or powerful devices and is already being used on Japan’s experimental magnetic levitation bullet train. But there is much to explore in this complex topic, and it often surprises researchers with unexpected results and observations.
    Professor Shik Shin from the Institute for Solid State Physics at the University of Tokyo and his team study the way electrons behave in iron-based superconducting materials, or IBSCs. These materials show a lot of promise, as they can operate at higher temperatures than some other superconducting materials, which is an important practical consideration. They also use less exotic components, so they can be easier and cheaper to work with. To activate a sample’s superconducting ability, the material needs to be cooled to a few hundred degrees below zero. And interesting things happen during this cooling process.
    “As IBSCs cool down to a certain level, they express a state we call electronic nematicity,” said Shin. “This is where the crystal lattice of the material and the electrons within it appear to be arranged differently depending on the angle you look at them, otherwise known as anisotropy. We expect the way electrons are arranged to be tightly coupled to the way the surrounding crystal lattice is arranged. But our recent observation shows something very different and actually quite surprising.”
    Shin and his team used a special technique developed by their group called laser-PEEM (photoemission electron microscopy) to visualize their IBSC sample on the microscopic scale. They expected to see a familiar pattern that repeats every few nanometers (billionths of a meter). And sure enough the crystal lattice did show this pattern. But to their surprise, the team found that the pattern of electrons was repeating every few hundred nanometers instead.
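The lattice and electron patterns differ only in their repeat length, which is exactly the kind of periodicity a Fourier transform reads off directly. As a rough illustration (not the team's actual analysis pipeline), the sketch below extracts the dominant repeat lengths of a synthetic 1D line profile that mixes a short "lattice" period with a much longer "wave" period:

```python
import cmath
import math

# Illustrative sketch only: reading off the repeat lengths of a periodic
# 1D line profile with a discrete Fourier transform. The same idea
# distinguishes a lattice repeating every few nanometers from an electron
# pattern repeating every few hundred nanometers.

def dominant_periods(signal, top=2):
    """Return the repeat lengths (in samples) of the strongest components."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2):
        coeff = sum(signal[x] * cmath.exp(-2j * cmath.pi * k * x / n)
                    for x in range(n))
        mags.append((abs(coeff), k))
    top_bins = sorted(mags, reverse=True)[:top]  # strongest frequency bins
    return sorted(n / k for _, k in top_bins)

# Synthetic profile: a short-period "lattice" plus a long-period "wave".
profile = [math.cos(2 * math.pi * x / 8) + math.cos(2 * math.pi * x / 100)
           for x in range(400)]
print(dominant_periods(profile))  # [8.0, 100.0]
```

Both periodicities show up as sharp peaks in the spectrum, even though the long-period component is invisible to the eye in any small patch of the signal.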
    This disparity between the electron nematicity wave and the crystalline structure of the IBSC was unexpected, so its implications are still under investigation. But the result could open the door to theoretical and experimental explorations into something fundamental to the phenomenon of superconductivity, and that is the way that electrons form pairs at low temperatures. Knowledge of this process could be crucial to the development of high-temperature superconductivity. So if nematicity waves are related, it is important to know how.
    “Next, I hope we can work with theoretical physicists to further our understanding of nematicity waves,” said Shin. “We also wish to use laser-PEEM to study other related materials such as metal oxides like copper oxide. It may not always be obvious where the applications lie, but working on problems of fundamental physics really fascinates me.”
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Putting a new theory of many-particle quantum systems to the test

    New experiments using trapped one-dimensional gases — atoms cooled to the coldest temperatures in the universe and confined so that they can only move in a line — fit with the predictions of the recently developed theory of “generalized hydrodynamics.” Quantum mechanics is necessary to describe the novel properties of these gases. Achieving a better understanding of how such systems with many particles evolve in time is a frontier of quantum physics. The result could greatly simplify the study of quantum systems that have been excited out of equilibrium. Besides its fundamental importance, it could eventually inform the development of quantum-based technologies, which include quantum computers and simulators, quantum communication, and quantum sensors. A paper describing the experiments by a team led by Penn State physicists appears September 2, 2021 in the journal Science.
    Even within classical physics, where the additional complexities of quantum mechanics can be ignored, it is impossible to simulate the motion of all the atoms in a moving fluid. To approximate these systems of particles, physicists use hydrodynamics descriptions.
    “The basic idea behind hydrodynamics is to forget about the atoms and consider the fluid as a continuum,” said Marcos Rigol, professor of physics at Penn State and one of the leaders of the research team. “To simulate the fluid, one ends up writing coupled equations that result from imposing a few constraints, such as the conservation of mass and energy. These are the same types of equations solved, for example, to simulate how air flows when you open windows to improve ventilation in a room.”
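This continuum idea can be made concrete with a minimal classical sketch: rather than tracking atoms, evolve a density field under a conservation law. The toy solver below advances the 1D continuity equation with a first-order upwind scheme on a periodic domain (all values illustrative; this is ordinary hydrodynamics, not the generalized theory tested in the paper):

```python
# Minimal classical-hydrodynamics sketch: forget the atoms and evolve a
# continuum density field. Here, 1D advection at constant speed with a
# periodic, first-order upwind scheme. Illustrative values only.

def advect(density, speed=1.0, dx=1.0, dt=0.5, steps=100):
    """Advance the continuity equation d(rho)/dt + u * d(rho)/dx = 0."""
    n = len(density)
    rho = list(density)
    c = speed * dt / dx  # Courant number; must be <= 1 for stability
    for _ in range(steps):
        # rho[i - 1] wraps around at i = 0, giving periodic boundaries
        rho = [rho[i] - c * (rho[i] - rho[i - 1]) for i in range(n)]
    return rho

initial = [1.0] * 20 + [2.0] * 10 + [1.0] * 20  # a density bump
final = advect(initial)
print(sum(initial), sum(final))  # total mass is unchanged by the flow
```

Because each step only redistributes density between neighboring cells, total mass is conserved to floating-point precision, which is exactly the kind of constraint a hydrodynamic description is built around.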
    Matter becomes more complicated if quantum mechanics is involved, as is the case when one wants to simulate quantum many-body systems that are out of equilibrium.
    “Quantum many-body systems — which are composed of many interacting particles, such as atoms — are at the heart of atomic, nuclear, and particle physics,” said David Weiss, Distinguished Professor of Physics at Penn State and one of the leaders of the research team. “It used to be that except in extreme limits you couldn’t do a calculation to describe out-of-equilibrium quantum many-body systems. That recently changed.”
    The change was motivated by the development of a theoretical framework known as generalized hydrodynamics.
    “The problem with those quantum many-body systems in one dimension is that they have so many constraints on their motion that regular hydrodynamics descriptions cannot be used,” said Rigol. “Generalized hydrodynamics was developed to keep track of all those constraints.”
    Until now, generalized hydrodynamics had been tested experimentally only under conditions where the strength of interactions among particles was weak.
    “We set out to test the theory further, by looking at the dynamics of one-dimensional gases with a wide range of interaction strengths,” said Weiss. “The experiments are extremely well controlled, so the results can be precisely compared to the predictions of this theory.”
    The research team uses one-dimensional gases of interacting atoms that are initially confined in a very shallow trap in equilibrium. They then very suddenly increase the depth of the trap by 100 times, which forces the particles to collapse toward the center of the trap, causing their collective properties to change. Throughout the collapse, the team precisely measures the gas’s properties, which they can then compare to the predictions of generalized hydrodynamics.
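A toy model of the quench (with invented parameters, nowhere near the experimental ones): for a single classical particle in a harmonic trap, the stiffness k sets the oscillation frequency as sqrt(k/m), so a sudden 100-fold increase in stiffness raises the frequency tenfold, and a displaced particle begins oscillating about the trap center:

```python
import math

# Toy quench model with assumed parameters: a particle sitting at rest a
# distance x0 from the trap center when the trap stiffness k is suddenly
# increased 100-fold. In a harmonic trap the frequency scales as sqrt(k),
# so the quench raises the oscillation frequency by a factor of 10.

def position(t, x0=1.0, k=1.0, m=1.0, quench=100.0):
    """Position of the particle at time t after the quench."""
    omega = math.sqrt(quench * k / m)  # post-quench angular frequency
    return x0 * math.cos(omega * t)

omega_after = math.sqrt(100.0)       # 10x the pre-quench frequency
period = 2 * math.pi / omega_after   # one full trap oscillation
# After a full period the particle returns to its starting point, which is
# what makes comparisons "across dozens of trap oscillations" possible.
print(position(period))
```

The real experiment tracks a strongly interacting quantum gas, not a single classical particle, but the quench-then-oscillate protocol is the same.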
    “Our measurements matched the prediction of theory across dozens of trap oscillations,” said Weiss. “There currently aren’t other ways to study out-of-equilibrium quantum systems for long periods of time with reasonable accuracy, especially with a lot of particles. Generalized hydrodynamics allows us to do this for some systems like the one we tested, but how generally applicable it is remains to be determined.”
    Story Source:
    Materials provided by Penn State. Original written by Sam Sholtis.

  • Scientists create a labor-saving automated method for studying electronic health records

    In an article published in the journal Patterns, scientists at the Icahn School of Medicine at Mount Sinai described the creation of a new, automated, artificial intelligence-based algorithm that can learn to read patient data from electronic health records. In a side-by-side comparison, they showed that their method, called Phe2vec (FEE-to-vek), identified patients with certain diseases as accurately as the traditional, “gold-standard” method, which requires much more manual labor to develop and perform.
    “There continues to be an explosion in the amount and types of data electronically stored in a patient’s medical record. Disentangling this complex web of data can be highly burdensome, thus slowing advancements in clinical research,” said Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai (HPIMS), and a senior author of the study. “In this study, we created a new method for mining data from electronic health records with machine learning that is faster and less labor intensive than the industry standard. We hope that this will be a valuable tool that will facilitate further, and less biased, research in clinical informatics.”
    The study was led by Jessica K. De Freitas, a graduate student in Dr. Glicksberg’s lab.
    Currently, scientists rely on a set of established computer programs, or algorithms, to mine medical records for new information. The development and storage of these algorithms is managed by a system called the Phenotype Knowledgebase (PheKB). Although the system is highly effective at correctly identifying a patient’s diagnosis, the process of developing an algorithm can be very time-consuming and inflexible. To study a disease, researchers first have to comb through reams of medical records looking for pieces of data, such as certain lab tests or prescriptions, which are uniquely associated with the disease. They then program the algorithm that guides the computer to search for patients who have those disease-specific pieces of data, which constitute a “phenotype.” In turn, the list of patients identified by the computer needs to be manually double-checked by researchers. Each time researchers want to study a new disease, they have to restart the process from scratch.
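The rule-based loop described above can be sketched in a few lines. Everything here is invented for illustration (real PheKB algorithms encode far more detailed logic), but the shape is the same: a hand-curated set of disease-specific codes, applied as a filter over patient records, followed by manual review of the hits:

```python
# Hedged sketch of traditional rule-based phenotyping. The codes and
# patient records below are invented for illustration.

def phenotype(patients, required_codes):
    """Flag patients whose records contain every disease-specific code."""
    return [pid for pid, codes in patients.items()
            if required_codes <= codes]  # subset test: all codes present

# Toy records: patient id -> set of lab/prescription/diagnosis codes.
records = {
    "p1": {"lab:a1c_high", "rx:metformin", "dx:E11"},
    "p2": {"lab:a1c_high"},
    "p3": {"rx:metformin", "dx:E11", "lab:a1c_high"},
}

# A hypothetical type 2 diabetes rule: all three codes must be present.
hits = phenotype(records, {"lab:a1c_high", "rx:metformin", "dx:E11"})
print(sorted(hits))  # ['p1', 'p3'] -- these would then be manually reviewed
```

Every new disease means curating a new `required_codes` set by hand, which is the time-consuming step Phe2vec aims to automate.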
    In this study, the researchers tried a different approach — one in which the computer learns, on its own, how to spot disease phenotypes and thus save researchers time and effort. This new method, Phe2vec, was based on studies the team had already conducted.
    “Previously, we showed that unsupervised machine learning could be a highly efficient and effective strategy for mining electronic health records,” said Riccardo Miotto, PhD, a former Assistant Professor at the HPIMS and a senior author of the study. “The potential advantage of our approach is that it learns representations of diseases from the data itself. Therefore, the machine does much of the work experts would normally do to define the combination of data elements from health records that best describes a particular disease.”
    Essentially, a computer was programmed to scour through millions of electronic health records and learn how to find connections between data and diseases. This programming relied on “embedding” algorithms that had been previously developed by other researchers, such as linguists, to study word networks in various languages. One of the algorithms, called word2vec, was particularly effective. Then, the computer was programmed to use what it learned to identify the diagnoses of nearly 2 million patients whose data was stored in the Mount Sinai Health System.
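A minimal stand-in for the embedding idea (Phe2vec itself builds on word2vec-style models; this co-occurrence version with invented codes is only a sketch): represent each medical code by the codes it appears alongside, so codes used in similar clinical contexts end up with similar vectors:

```python
import math
from collections import Counter

# Rough, hypothetical sketch of embedding medical codes: each code is
# represented by its co-occurrence counts with other codes across patient
# records. Codes that appear in similar contexts get similar vectors,
# which is the property word2vec-style models exploit at scale.

def code_vectors(records, vocab):
    vecs = {c: Counter() for c in vocab}
    for codes in records:
        for c in codes:
            for other in codes:
                if other != c:
                    vecs[c][other] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented toy records: each is the set of codes in one patient's chart.
records = [{"dx:E11", "rx:metformin"}, {"dx:E11", "lab:a1c_high"},
           {"dx:I10", "rx:lisinopril"}]
vocab = set().union(*records)
vecs = code_vectors(records, vocab)

print(cosine(vecs["rx:metformin"], vecs["lab:a1c_high"]))  # 1.0: shared context
print(cosine(vecs["rx:metformin"], vecs["rx:lisinopril"]))  # 0.0: no overlap
```

A disease can then be represented as a neighborhood of code vectors, and patients ranked by how close their records sit to it, without hand-curating a rule.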
    Finally, the researchers compared the effectiveness of the new and old systems. For nine out of ten diseases tested, they found that the new Phe2vec system was as effective as, or performed slightly better than, the gold-standard phenotyping process at correctly identifying diagnoses from electronic health records. A few examples of the diseases included dementia, multiple sclerosis, and sickle cell anemia.
    “Overall our results are encouraging and suggest that Phe2vec is a promising technique for large-scale phenotyping of diseases in electronic health record data,” Dr. Glicksberg said. “With further testing and refinement, we hope that it could be used to automate many of the initial steps of clinical informatics research, thus allowing scientists to focus their efforts on downstream analyses like predictive modeling.”
    This study was supported by the Hasso Plattner Foundation, the Alzheimer’s Drug Discovery Foundation, and a courtesy graphics processing unit donation from the NVIDIA Corporation.

  • These geckos crash-land on rainforest trees but don't fall, thanks to their tails

    A gecko’s tail is a wondrous and versatile thing.
    In more than 15 years of research on geckos, scientists at the University of California, Berkeley, and, more recently, the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, have shown that geckos use their tails to maneuver in midair when gliding between trees, to right themselves when falling, to keep from falling off a tree when they lose their grip and even to propel themselves across the surface of a pond, as if walking on water.
    Many of these techniques have been implemented in agile, gecko-like robots.
    But Robert Full, UC Berkeley professor of integrative biology, and Ardian Jusufi, faculty member at the Max Planck Research School for Intelligent Systems and former UC Berkeley doctoral student, were blown away by a recent discovery: Geckos also use their tails to help recover when they take a header into a tree.
    Those head-first crashes are probably not the geckos’ preferred landing, but Jusufi documented many such hard landings in 37 glides over several field seasons in a Singapore rainforest, using high-speed video cameras to record their trajectories and wince-inducing landings. He clocked their speed upon impact at about 6 meters per second, or 21 kilometers per hour — about 20 feet per second, or about 120 gecko body lengths per second.
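The reported figures can be checked with straightforward unit conversions; the body length below is inferred from the "120 body lengths per second" figure rather than stated in the text:

```python
# Checking the impact-speed figures explicitly. The gecko body length is
# inferred from the reported 120 body lengths per second, not stated.
speed_ms = 6.0                  # meters per second, as reported
speed_kmh = speed_ms * 3.6      # 21.6 km/h, i.e. "about 21"
speed_fps = speed_ms / 0.3048   # ~19.7 feet per second, "about 20"
body_length_m = speed_ms / 120  # 0.05 m, i.e. a roughly 5 cm gecko
print(round(speed_kmh, 1), round(speed_fps, 1), body_length_m)
```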
    “Observing the geckos from elevation in the rainforest canopy was eye-opening. Before take-off, they would move their head up-and-down, and side-to-side to view the landing target prior to jumping off, as if to estimate the travel distance,” Jusufi said.

  • Nano ‘camera’ made using molecular glue allows real-time monitoring of chemical reactions

    Researchers have made a tiny camera, held together with ‘molecular glue’ that allows them to observe chemical reactions in real time.
    The device, made by a team from the University of Cambridge, combines tiny semiconductor nanocrystals called quantum dots and gold nanoparticles using molecular glue called cucurbituril (CB). When added to water with the molecule to be studied, the components self-assemble in seconds into a stable, powerful tool that allows the real-time monitoring of chemical reactions.
    The camera harvests light within the semiconductors, inducing electron transfer processes like those that occur in photosynthesis, which can be monitored using incorporated gold nanoparticle sensors and spectroscopic techniques. The researchers were able to use the camera to observe chemical species that had previously been theorised but not directly observed.
    The platform could be used to study a wide range of molecules for a variety of potential applications, such as the improvement of photocatalysis and photovoltaics for renewable energy. The results are reported in the journal Nature Nanotechnology.
    Nature controls the assemblies of complex structures at the molecular scale through self-limiting processes. However, mimicking these processes in the lab is usually time-consuming, expensive and reliant on complex procedures.
    “In order to develop new materials with superior properties, we often combine different chemical species together to come up with a hybrid material that has the properties we want,” said Professor Oren Scherman from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research. “But making these hybrid nanostructures is difficult, and you often end up with uncontrolled growth or materials that are unstable.”
    The new method that Scherman and his colleagues from Cambridge’s Cavendish Laboratory and University College London developed uses cucurbituril — a molecular glue which interacts strongly with both semiconductor quantum dots and gold nanoparticles. The researchers used small semiconductor nanocrystals to control the assembly of larger nanoparticles through a process they coined interfacial self-limiting aggregation. The process leads to permeable and stable hybrid materials that interact with light. The camera was used to observe photocatalysis and track light-induced electron transfer.

  • Brain-inspired memory device

    Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability.
    Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as handheld devices and applications with limited power resources.
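A hypothetical sketch of what voltage-driven reconfigurability means in practice (the thresholds and gate assignments are invented, not from the paper): the same element computes a different Boolean function depending on the applied control voltage, rather than being hard-wired to one function:

```python
# Invented illustration of voltage-reconfigurable logic: one element,
# several Boolean functions, selected by a control voltage. The threshold
# values and gate assignments are assumptions for the sketch.

def reconfigurable_gate(a, b, control_voltage):
    """Compute a logic function of bits a and b, chosen by the voltage."""
    if control_voltage < 1.0:    # low bias: behave as AND
        return a & b
    elif control_voltage < 2.0:  # mid bias: behave as OR
        return a | b
    else:                        # high bias: behave as XOR
        return a ^ b

# One element, three different logic functions:
print(reconfigurable_gate(1, 0, 0.5))  # 0 (AND)
print(reconfigurable_gate(1, 0, 1.5))  # 1 (OR)
print(reconfigurable_gate(1, 1, 2.5))  # 0 (XOR)
```

In a fixed silicon circuit each of these functions would need its own hard-wired gates; here one switch plays all three roles.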
    “This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.
    The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.
    Brain-inspired technology
    “This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next generation processing chips with enhanced computational power and speed.

  • Quantum emitters: Beyond crystal clear to single-photon pure

    Photons — fundamental particles of light — are carrying these words to your eyes via the light from your computer screen or phone. Photons play a key role in next-generation quantum information technologies, such as quantum computing and communications. A quantum emitter, capable of producing a single, pure photon, is the crux of such technology but has many issues that have yet to be solved, according to KAIST researchers.
    A research team under Professor Yong-Hoon Cho has developed a technique that can isolate the desired quality emitter by reducing the noise surrounding the target with what they have dubbed a ‘nanoscale focus pinspot.’ They published their results on June 24 in ACS Nano.
    “The nanoscale focus pinspot is a structurally nondestructive technique under an extremely low dose ion beam and is generally applicable for various platforms to improve their single-photon purity while retaining the integrated photonic structures,” said lead author Yong-Hoon Cho from the Department of Physics at KAIST.
    To produce single photons from solid state materials, the researchers used wide-bandgap semiconductor quantum dots — fabricated nanoparticles with specialized potential properties, such as the ability to directly inject current into a small chip and to operate at room temperature for practical applications. By making a quantum dot in a photonic structure that propagates light, and then irradiating it with helium ions, researchers theorized that they could develop a quantum emitter that could reduce the unwanted noisy background and produce a single, pure photon on demand.
    Professor Cho explained, “Despite its high resolution and versatility, a focused ion beam typically suppresses the optical properties around the bombarded area due to the accelerated ion beam’s high momentum. We focused on the fact that, if the focused ion beam is well controlled, only the background noise can be selectively quenched with high spatial resolution without destroying the structure.”
    In other words, the researchers focused the ion beam on a mere pin prick, effectively cutting off the interactions around the quantum dot and removing the physical properties that could negatively interact with and degrade the photon purity emitted from the quantum dot.
    “It is the first developed technique that can quench the background noise without changing the optical properties of the quantum emitter and the built-in photonic structure,” Professor Cho asserted.
    Professor Cho compared it to stimulated emission depletion microscopy, a technique used to decrease the light around the area of focus, but leaving the focal point illuminated. The result is increased resolution of the desired visual target.
    “By adjusting the focused ion beam-irradiated region, we can select the target emitter with nanoscale resolution by quenching the surrounding emitter,” Professor Cho said. “This nanoscale selective-quenching technique can be applied to various material and structural platforms and further extended for applications such as optical memory and high-resolution micro displays.”
    Korea’s National Research Foundation and the Samsung Science and Technology Foundation supported this work.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST).

  • New molecular device has unprecedented reconfigurability reminiscent of brain plasticity

    In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.
    Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like nerve cells can store memories, the same device can also retain information for future retrieval and processing.
    “The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams, professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”
    Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, Scientific Affiliate at National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed, but consuming significantly reduced energy.
    Whether it is the familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis, the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, wherein the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. Also, despite extremely fast processor speeds, these units can sit idle for extended periods during information exchange.
    As an alternative to conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from being an insulator to a conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.
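A toy model of the threshold behavior described above (the numbers are assumptions; roughly 340 K is the commonly quoted transition temperature for vanadium dioxide): below the threshold the element behaves as an insulator, above it as a conductor, and that switching is what lets a memristor both compute and store:

```python
# Toy model, illustrative numbers only: a Mott-type insulator-to-conductor
# transition at a set temperature. ~340 K is often quoted for vanadium
# dioxide; the exact value here is an assumption for the sketch.

def memristor_state(temperature_k, threshold_k=340.0):
    """Return the element's phase at a given temperature in kelvin."""
    return "conductor" if temperature_k >= threshold_k else "insulator"

print(memristor_state(300.0))  # insulator
print(memristor_state(350.0))  # conductor
```

Because the element's conductance depends on its history of heating by current, the same physical device can hold state and switch logic, sidestepping the memory/processor separation behind the von Neumann bottleneck.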