More stories

  • Putting a new theory of many-particle quantum systems to the test

    New experiments using trapped one-dimensional gases — atoms cooled to the coldest temperatures in the universe and confined so that they can only move in a line — fit with the predictions of the recently developed theory of “generalized hydrodynamics.” Quantum mechanics is necessary to describe the novel properties of these gases. Achieving a better understanding of how such systems with many particles evolve in time is a frontier of quantum physics. The result could greatly simplify the study of quantum systems that have been excited out of equilibrium. Besides its fundamental importance, it could eventually inform the development of quantum-based technologies, which include quantum computers and simulators, quantum communication, and quantum sensors. A paper describing the experiments, by a team led by Penn State physicists, appears September 2, 2021, in the journal Science.
    Even within classical physics, where the additional complexities of quantum mechanics can be ignored, it is impossible to simulate the motion of all the atoms in a moving fluid. To approximate these systems of particles, physicists use hydrodynamic descriptions.
    “The basic idea behind hydrodynamics is to forget about the atoms and consider the fluid as a continuum,” said Marcos Rigol, professor of physics at Penn State and one of the leaders of the research team. “To simulate the fluid, one ends up writing coupled equations that result from imposing a few constraints, such as the conservation of mass and energy. These are the same types of equations solved, for example, to simulate how air flows when you open windows to improve ventilation in a room.”
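    To make this concrete, a standard textbook pair of such constraint equations (illustrative only, not equations quoted from the paper) is the continuity equation for mass conservation together with the Euler equation for momentum conservation:

    ```latex
    % Mass conservation for a fluid with density \rho(\mathbf{x},t)
    % and velocity field \mathbf{v}(\mathbf{x},t):
    \partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = 0
    % Momentum conservation (Euler equation), with pressure p:
    \partial_t \mathbf{v} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\tfrac{1}{\rho} \nabla p
    ```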
    Matters become more complicated if quantum mechanics is involved, as is the case when one wants to simulate quantum many-body systems that are out of equilibrium.
    “Quantum many-body systems — which are composed of many interacting particles, such as atoms — are at the heart of atomic, nuclear, and particle physics,” said David Weiss, Distinguished Professor of Physics at Penn State and one of the leaders of the research team. “It used to be that, except in extreme limits, you couldn’t do a calculation to describe out-of-equilibrium quantum many-body systems. That recently changed.”
    The change was motivated by the development of a theoretical framework known as generalized hydrodynamics.
    “The problem with those quantum many-body systems in one dimension is that they have so many constraints on their motion that regular hydrodynamics descriptions cannot be used,” said Rigol. “Generalized hydrodynamics was developed to keep track of all those constraints.”
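    Schematically, generalized hydrodynamics replaces the handful of classical conservation laws with one continuity equation for every conserved mode of the gas, labeled by a rapidity θ (this is the standard form of the theory’s core equation, not a result taken from the paper):

    ```latex
    % One conservation law per rapidity \theta: the quasiparticle density
    % \rho(\theta, x, t) is advected at an interaction-dressed
    % effective velocity v^{\mathrm{eff}}:
    \partial_t \rho(\theta, x, t)
      + \partial_x \left[ v^{\mathrm{eff}}(\theta, x, t) \, \rho(\theta, x, t) \right] = 0
    ```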
    Until now, generalized hydrodynamics had been experimentally tested only under conditions where the strength of interactions among particles was weak.
    “We set out to test the theory further, by looking at the dynamics of one-dimensional gases with a wide range of interaction strengths,” said Weiss. “The experiments are extremely well controlled, so the results can be precisely compared to the predictions of this theory.”
    The research team uses one-dimensional gases of interacting atoms that are initially confined, in equilibrium, in a very shallow trap. They then very suddenly increase the depth of the trap 100-fold, which forces the particles to collapse into the center of the trap, causing their collective properties to change. Throughout the collapse, the team precisely measures those properties, which they can then compare to the predictions of generalized hydrodynamics.
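    For a rough sense of the scale of this quench, one can assume for illustration that the deepened trap is approximately harmonic with a fixed width (an assumption, not a detail stated in the article); the oscillation frequency then grows as the square root of the trap depth:

    ```latex
    % Harmonic approximation: trap depth U_0 at fixed width sets the frequency
    U(x) \approx \tfrac{1}{2} m \omega^2 x^2, \qquad \omega \propto \sqrt{U_0}
    % A sudden 100-fold increase in depth then raises \omega by about \sqrt{100} = 10
    ```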
    “Our measurements matched the predictions of the theory across dozens of trap oscillations,” said Weiss. “There currently aren’t other ways to study out-of-equilibrium quantum systems for long periods of time with reasonable accuracy, especially with a lot of particles. Generalized hydrodynamics allows us to do this for some systems like the one we tested, but how generally applicable it is remains to be determined.”
    Story Source:
    Materials provided by Penn State. Original written by Sam Sholtis. Note: Content may be edited for style and length.

  • Scientists create a labor-saving automated method for studying electronic health records

    In an article published in the journal Patterns, scientists at the Icahn School of Medicine at Mount Sinai described the creation of a new, automated, artificial intelligence-based algorithm that can learn to read patient data from electronic health records. In a side-by-side comparison, they showed that their method, called Phe2vec (FEE-to-vek), identified patients with certain diseases as accurately as the traditional, “gold-standard” method, which requires much more manual labor to develop and perform.
    “There continues to be an explosion in the amount and types of data electronically stored in a patient’s medical record. Disentangling this complex web of data can be highly burdensome, thus slowing advancements in clinical research,” said Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai (HPIMS), and a senior author of the study. “In this study, we created a new method for mining data from electronic health records with machine learning that is faster and less labor intensive than the industry standard. We hope that this will be a valuable tool that will facilitate further, and less biased, research in clinical informatics.”
    The study was led by Jessica K. De Freitas, a graduate student in Dr. Glicksberg’s lab.
    Currently, scientists rely on a set of established computer programs, or algorithms, to mine medical records for new information. The development and storage of these algorithms is managed by a system called the Phenotype Knowledgebase (PheKB). Although the system is highly effective at correctly identifying a patient’s diagnosis, the process of developing an algorithm can be very time-consuming and inflexible. To study a disease, researchers first have to comb through reams of medical records looking for pieces of data, such as certain lab tests or prescriptions, that are uniquely associated with the disease. They then program the algorithm that guides the computer to search for patients who have those disease-specific pieces of data, which constitute a “phenotype.” In turn, the list of patients identified by the computer needs to be manually double-checked by researchers. Each time researchers want to study a new disease, they have to restart the process from scratch.
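    As a minimal sketch of what such a rule-based phenotype can look like in code (the field names, codes, and thresholds below are hypothetical, not PheKB’s actual format):

    ```python
    # Minimal sketch of a rule-based phenotype query in the spirit of
    # PheKB-style algorithms. All field names, codes, and thresholds
    # are hypothetical.

    # A phenotype is a set of disease-specific data elements to match.
    DIABETES_PHENOTYPE = {
        "icd_codes": {"E11.9", "E11.65"},        # diagnosis codes
        "lab_tests": {"hba1c"},                  # associated lab test
        "lab_threshold": 6.5,                    # HbA1c % cutoff
        "medications": {"metformin", "insulin"}, # supporting prescriptions
    }

    def matches_phenotype(record: dict, phenotype: dict) -> bool:
        """Return True if a patient's record satisfies the rule set."""
        has_code = bool(set(record.get("icd_codes", [])) & phenotype["icd_codes"])
        has_lab = any(
            lab["name"] in phenotype["lab_tests"]
            and lab["value"] >= phenotype["lab_threshold"]
            for lab in record.get("labs", [])
        )
        has_med = bool(set(record.get("medications", [])) & phenotype["medications"])
        # Rule sets typically combine evidence; here: a code plus a lab or drug.
        return has_code and (has_lab or has_med)

    # As the article notes, flagged patients still need manual chart review.
    patients = [
        {"icd_codes": ["E11.9"],
         "labs": [{"name": "hba1c", "value": 7.2}],
         "medications": ["metformin"]},
    ]
    flagged = [p for p in patients if matches_phenotype(p, DIABETES_PHENOTYPE)]
    ```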
    In this study, the researchers tried a different approach — one in which the computer learns, on its own, how to spot disease phenotypes and thus save researchers time and effort. This new, Phe2vec method was based on studies the team had already conducted.
    “Previously, we showed that unsupervised machine learning could be a highly efficient and effective strategy for mining electronic health records,” said Riccardo Miotto, PhD, a former Assistant Professor at the HPIMS and a senior author of the study. “The potential advantage of our approach is that it learns representations of diseases from the data itself. Therefore, the machine does much of the work experts would normally do to define the combination of data elements from health records that best describes a particular disease.”
    Essentially, a computer was programmed to scour through millions of electronic health records and learn how to find connections between data and diseases. This programming relied on “embedding” algorithms that had been previously developed by other researchers, such as linguists, to study word networks in various languages. One of the algorithms, called word2vec, was particularly effective. Then, the computer was programmed to use what it learned to identify the diagnoses of nearly 2 million patients whose data was stored in the Mount Sinai Health System.
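    A minimal sketch of this embedding idea, using the open-source gensim library (the records and the seed concept below are invented, and the study’s actual Phe2vec pipeline differs in detail):

    ```python
    # Sketch of embedding-based phenotyping: learn concept vectors from
    # patient timelines, then rank patients by similarity to a disease
    # concept. Data and the seed concept are invented for illustration.
    import numpy as np
    from gensim.models import Word2Vec

    # Each "sentence" is one patient's record as a sequence of concepts.
    patient_histories = [
        ["icd:E11.9", "lab:hba1c_high", "rx:metformin", "visit:endocrinology"],
        ["icd:I10", "rx:lisinopril", "lab:bp_high"],
        ["icd:E11.9", "rx:insulin", "lab:hba1c_high"],
    ]

    # word2vec learns a vector for every concept from co-occurrence.
    model = Word2Vec(patient_histories, vector_size=32, window=5, min_count=1)

    # Represent the disease by a seed concept and each patient by the
    # mean of their concept vectors; score patients by cosine similarity.
    disease_vec = model.wv["icd:E11.9"]

    def patient_score(history: list) -> float:
        vecs = [model.wv[c] for c in history if c in model.wv]
        centroid = np.mean(vecs, axis=0)
        return float(np.dot(centroid, disease_vec)
                     / (np.linalg.norm(centroid) * np.linalg.norm(disease_vec)))

    ranked = sorted(patient_histories, key=patient_score, reverse=True)
    ```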
    Finally, the researchers compared the effectiveness of the new and old systems. For nine out of ten diseases tested, they found that the new Phe2vec system was as effective as, or performed slightly better than, the gold-standard phenotyping process at correctly identifying a diagnosis from electronic health records. Examples of the diseases tested include dementia, multiple sclerosis, and sickle cell anemia.
    “Overall our results are encouraging and suggest that Phe2vec is a promising technique for large-scale phenotyping of diseases in electronic health record data,” Dr. Glicksberg said. “With further testing and refinement, we hope that it could be used to automate many of the initial steps of clinical informatics research, thus allowing scientists to focus their efforts on downstream analyses like predictive modeling.”
    This study was supported by the Hasso Plattner Foundation, the Alzheimer’s Drug Discovery Foundation, and a courtesy graphics processing unit donation from the NVIDIA Corporation.

  • These geckos crash-land on rainforest trees but don't fall, thanks to their tails

    A gecko’s tail is a wondrous and versatile thing.
    In more than 15 years of research on geckos, scientists at the University of California, Berkeley, and, more recently, the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, have shown that geckos use their tails to maneuver in midair when gliding between trees, to right themselves when falling, to keep from falling off a tree when they lose their grip and even to propel themselves across the surface of a pond, as if walking on water.
    Many of these techniques have been implemented in agile, gecko-like robots.
    But Robert Full, UC Berkeley professor of integrative biology, and Ardian Jusufi, faculty member at the Max Planck Research School for Intelligent Systems and former UC Berkeley doctoral student, were blown away by a recent discovery: Geckos also use their tails to help recover when they take a header into a tree.
    Those head-first crashes are probably not the geckos’ preferred landing, but Jusufi documented many such hard landings in 37 glides over several field seasons in a Singapore rainforest, using high-speed video cameras to record their trajectories and wince-inducing landings. He clocked their speed upon impact at about 6 meters per second, or 21 kilometers per hour — about 20 feet per second, or roughly 120 gecko body lengths per second.
    “Observing the geckos from elevation in the rainforest canopy was eye-opening. Before take-off, they would move their head up and down, and side to side, to view the landing target prior to jumping off, as if to estimate the travel distance,” Jusufi said.

  • Nano ‘camera’ made using molecular glue allows real-time monitoring of chemical reactions

    Researchers have made a tiny camera, held together with ‘molecular glue’, that allows them to observe chemical reactions in real time.
    The device, made by a team from the University of Cambridge, combines tiny semiconductor nanocrystals called quantum dots and gold nanoparticles using a molecular glue called cucurbituril (CB). When added to water with the molecule to be studied, the components self-assemble in seconds into a stable, powerful tool that allows the real-time monitoring of chemical reactions.
    The camera harvests light within the semiconductors, inducing electron transfer processes like those that occur in photosynthesis, which can be monitored using incorporated gold nanoparticle sensors and spectroscopic techniques. The researchers were able to use the camera to observe chemical species that had previously been theorised but not directly observed.
    The platform could be used to study a wide range of molecules for a variety of potential applications, such as the improvement of photocatalysis and photovoltaics for renewable energy. The results are reported in the journal Nature Nanotechnology.
    Nature controls the assembly of complex structures at the molecular scale through self-limiting processes. However, mimicking these processes in the lab is usually time-consuming, expensive and reliant on complex procedures.
    “In order to develop new materials with superior properties, we often combine different chemical species together to come up with a hybrid material that has the properties we want,” said Professor Oren Scherman from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research. “But making these hybrid nanostructures is difficult, and you often end up with uncontrolled growth or materials that are unstable.”
    The new method that Scherman and his colleagues from Cambridge’s Cavendish Laboratory and University College London developed uses cucurbituril — a molecular glue which interacts strongly with both semiconductor quantum dots and gold nanoparticles. The researchers used small semiconductor nanocrystals to control the assembly of larger nanoparticles through a process they termed interfacial self-limiting aggregation. The process leads to permeable and stable hybrid materials that interact with light. The camera was used to observe photocatalysis and track light-induced electron transfer.

  • Brain-inspired memory device

    Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability.
    Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as in handheld devices and applications with limited power resources.
    “This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.
    The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.
    Brain-inspired technology
    “This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next-generation processing chips with enhanced computational power and speed.

  • Quantum emitters: Beyond crystal clear to single-photon pure

    Photons — fundamental particles of light — are carrying these words to your eyes via the light from your computer screen or phone. Photons play a key role in next-generation quantum information technologies, such as quantum computing and communications. A quantum emitter, capable of producing a single, pure photon, is the crux of such technologies but has many issues that have yet to be solved, according to KAIST researchers.
    A research team under Professor Yong-Hoon Cho has developed a technique that can isolate an emitter of the desired quality by reducing the noise surrounding the target, using what they have dubbed a ‘nanoscale focus pinspot.’ They published their results on June 24 in ACS Nano.
    “The nanoscale focus pinspot is a structurally nondestructive technique under an extremely low dose ion beam and is generally applicable for various platforms to improve their single-photon purity while retaining the integrated photonic structures,” said lead author Yong-Hoon Cho from the Department of Physics at KAIST.
    To produce single photons from solid state materials, the researchers used wide-bandgap semiconductor quantum dots — fabricated nanoparticles with specialized potential properties, such as the ability to directly inject current into a small chip and to operate at room temperature for practical applications. By making a quantum dot in a photonic structure that propagates light, and then irradiating it with helium ions, researchers theorized that they could develop a quantum emitter that could reduce the unwanted noisy background and produce a single, pure photon on demand.
    Professor Cho explained, “Despite its high resolution and versatility, a focused ion beam typically suppresses the optical properties around the bombarded area due to the accelerated ion beam’s high momentum. We focused on the fact that, if the focused ion beam is well controlled, only the background noise can be selectively quenched with high spatial resolution without destroying the structure.”
    In other words, the researchers focused the ion beam on a mere pinprick, effectively cutting off the interactions around the quantum dot and removing the physical properties that could negatively interact with and degrade the purity of the photons emitted from the quantum dot.
    “It is the first developed technique that can quench the background noise without changing the optical properties of the quantum emitter and the built-in photonic structure,” Professor Cho asserted.
    Professor Cho compared it to stimulated emission depletion microscopy, a technique that decreases the light around the area of focus while leaving the focal point illuminated. The result is increased resolution of the desired visual target.
    “By adjusting the focused ion beam-irradiated region, we can select the target emitter with nanoscale resolution by quenching the surrounding emitter,” Professor Cho said. “This nanoscale selective-quenching technique can be applied to various material and structural platforms and further extended for applications such as optical memory and high-resolution micro displays.”
    Korea’s National Research Foundation and the Samsung Science and Technology Foundation supported this work.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  • New molecular device has unprecedented reconfigurability reminiscent of brain plasticity

    In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.
    Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks simply by changing applied voltages. Furthermore, just as nerve cells store memories, the same device can retain information for future retrieval and processing.
    “The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams, professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”
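    A loose software analogy of that reprogramming idea (not the device’s actual mechanism; the voltage-to-function mapping below is invented for illustration) might look like this:

    ```python
    # A loose software analogy, not the device's actual mechanism: a single
    # element whose logic function is selected by a control "voltage" rather
    # than by rewiring. The voltage-to-function mapping here is invented.
    from operator import and_, or_, xor

    FUNCTION_TABLE = {
        0.5: and_,   # at 0.5 V the element behaves as AND
        1.0: or_,    # at 1.0 V it behaves as OR
        1.5: xor,    # at 1.5 V it behaves as XOR
    }

    class ReconfigurableGate:
        def __init__(self):
            self.func = and_
            self.memory = []          # like the device, it can retain state

        def apply_voltage(self, volts: float):
            """Reconfigure the gate's logic function on the fly."""
            self.func = FUNCTION_TABLE[volts]

        def evaluate(self, a: int, b: int) -> int:
            out = int(self.func(bool(a), bool(b)))
            self.memory.append(out)   # retained for later retrieval
            return out

    gate = ReconfigurableGate()
    gate.apply_voltage(1.5)           # now an XOR gate
    print(gate.evaluate(1, 0))        # -> 1
    gate.apply_voltage(0.5)           # reprogrammed to AND without rewiring
    print(gate.evaluate(1, 0))        # -> 0
    ```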
    Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, Scientific Affiliate at National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed, but consuming significantly reduced energy.
    Whether it is the familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis: the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, wherein the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. Despite extremely fast processor speeds, these units can also sit idle for extended periods while information is exchanged.
    As an alternative to the conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from insulator to conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.

  • Machine learning tool detects the risk of genetic syndromes in children with diverse backgrounds

    With an average accuracy of 88%, a deep learning technology offers rapid genetic screening that could accelerate the diagnosis of genetic syndromes, recommending further investigation or referral to a specialist in seconds, according to a study published in The Lancet Digital Health. Trained with data from 2,800 pediatric patients from 28 countries, the technology also accounts for facial variability related to sex, age, and racial and ethnic background, according to the study led by Children’s National Hospital researchers.
    “We built a software device to increase access to care and a machine learning technology to identify the disease patterns not immediately obvious to the human eye or intuition, and to help physicians non-specialized in genetics,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital and senior author of the study. “This technological innovation can help children without access to specialized clinics, which are unavailable in most of the world. Ultimately, it can help reduce health inequality in under-resourced societies.”
    This machine learning technology indicates the presence of a genetic syndrome from a facial photograph captured at the point of care, such as in pediatrician offices, maternity wards and general practitioner clinics.
    “Unlike other technologies, the strength of this program is distinguishing ‘normal’ from ‘not-normal,’ which makes it an effective screening tool in the hands of community caregivers,” said Marshall L. Summar, M.D., director of the Rare Disease Institute at Children’s National. “This can substantially accelerate the time to diagnosis by providing a robust indicator for patients that need further workup. This first step is often the greatest barrier to moving towards a diagnosis. Once a patient is in the workup system, then the likelihood of diagnosis (by many means) is significantly increased.”
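    A minimal sketch of such a binary photo-based screen, built on a standard pretrained network (this is not the study’s model; the backbone, preprocessing, and threshold here are placeholders):

    ```python
    # Hedged sketch of a binary "refer / don't refer" facial screen built
    # on a standard pretrained CNN. The architecture, preprocessing, and
    # threshold are placeholders, not those of the published study.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Standard ImageNet preprocessing for a pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Replace the classifier head with a single risk logit.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    backbone.eval()

    def screen(photo_path: str, threshold: float = 0.5) -> bool:
        """Return True if the photo should be referred for further workup."""
        image = Image.open(photo_path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)
        with torch.no_grad():
            risk = torch.sigmoid(backbone(batch)).item()
        return risk >= threshold
    ```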
    Every year, millions of children are born with genetic disorders — including Down syndrome, a condition in which a child is born with an extra copy of their 21st chromosome, causing developmental delays and disabilities; Williams-Beuren syndrome, a rare multisystem condition caused by a submicroscopic deletion from a region of chromosome 7; and Noonan syndrome, a genetic disorder caused by a faulty gene that prevents normal development in various parts of the body.
    Most children with genetic syndromes live in regions with limited resources and limited access to genetic services, and genetic screening can come with a hefty price tag. There are also too few specialists to help identify genetic syndromes early in life, when preventive care can save lives, especially in low-income, under-resourced, and isolated communities.