More stories

  • Exploring the bounds of room-temperature superconductivity

In the simplest terms, superconductivity means zero wasted electricity: electric current flows through a superconducting material with no loss of energy.
Many naturally occurring elements and minerals, such as lead and mercury, have superconducting properties, and modern applications already rely on superconducting materials, including MRI machines, maglev trains, electric motors and generators. Superconductivity usually appears either in very low-temperature environments or at higher temperatures under very high pressures. The holy grail of the field today is to find or create materials that superconduct at room temperature and ambient pressure.
If room-temperature superconductors could be applied at scale to build highly efficient electric power transmission systems for industry, commerce and transportation, it would be revolutionary. Deploying room-temperature superconductors that work at atmospheric pressure would accelerate the electrification of the world and support its sustainable development, allowing more work to be done with fewer natural resources and less waste.
    There are a few superconducting material systems for electric transmission in various stages of development. In the meantime, researchers at the University of Houston are conducting experiments to look for superconductivity in a room-temperature and atmospheric pressure environment.
Paul Chu, founding director and chief scientist at the Texas Center for Superconductivity at UH, and Liangzi Deng, a research assistant professor, chose FeSe (iron(II) selenide) for their experiments because it has a simple structure and shows a large enhancement of its superconducting critical temperature (Tc) under pressure.
Chu and Deng have developed a pressure-quench process (PQP), in which they first apply pressure to their samples at room temperature to enhance superconductivity, cool them to a chosen lower temperature, and then completely release the applied pressure while still retaining the enhanced superconducting properties.
The concept of the PQP is not new, but this is the first time it has been used to retain pressure-enhanced superconductivity in a high-temperature superconductor (HTS) at atmospheric pressure. The findings are published in the Journal of Superconductivity and Novel Magnetism.
“We waste about 10% of our electricity during transmission; that’s a huge number. If we had superconductors to transmit electricity with zero energy wasted, we would basically change the world. Transportation and electricity transmission would be revolutionized,” Chu said. “If this process can be used, we can create materials that could transmit electricity from the place where you produce it all the way to places thousands of miles away without the loss of energy.”
Their process was inspired by the late Pol Duwez, a prominent materials scientist, engineer and metallurgist at the California Institute of Technology. Duwez pointed out that most alloys used in industrial applications are metastable, or chemically unstable, at atmospheric pressure and room temperature, and that these metastable phases possess desirable or enhanced properties that their stable counterparts lack, Chu and Deng noted in their study.
Examples of such metastable materials include diamond, high-temperature 3D-printing materials, black phosphorus and even beryllium copper, which is notably used to make tools for work in explosive environments such as oil rigs and grain elevators.
“The ultimate goal of this experiment was to raise the temperature to above room temperature while keeping the material’s superconducting properties,” Chu said. “If that can be achieved, cryogenics will no longer be needed to operate machines that use superconducting materials, such as MRI machines, and that’s why we’re excited about this.”
    Story Source:
    Materials provided by University of Houston. Note: Content may be edited for style and length. More

  • New insight into machine-learning error estimation

Omar Maddouri, a doctoral student in the Department of Electrical and Computer Engineering at Texas A&M University, is working with Dr. Byung-Jun Yoon, professor, and Dr. Edward Dougherty, Robert M. Kennedy ’26 Chair Professor, to evaluate machine-learning models using transfer learning principles. Dr. Francis “Frank” Alexander of Brookhaven National Laboratory and Dr. Xiaoning Qian from the Department of Electrical and Computer Engineering at Texas A&M University are also involved with the project.
In data-driven machine learning, models are built to make predictions and estimations about new observations in a given data set. One important area within machine learning is classification, in which an algorithm assesses a data set and assigns its items to classes or categories. When the available data sets are very small, it can be very challenging not only to build a classification model from the data but also to evaluate the model’s performance and ensure its accuracy. This is where transfer learning comes into play.
    “In transfer learning, we try to transfer knowledge or bring data from another domain to see whether we can enhance the task that we are doing in the domain of interest, or target domain,” Maddouri explained.
The target domain is where the models are built and their performance is evaluated. The source domain is a separate but related domain from which knowledge is transferred to make the analysis in the target domain easier.
Maddouri’s project uses a joint prior density to model the relatedness between the source and target domains and offers a Bayesian approach that applies transfer learning principles to provide an overall error estimator for the models. An error estimator delivers an estimate of how accurately these machine-learning models classify the data sets at hand.
What this means is that, before any data is observed, the team builds a model from their initial beliefs about the model parameters in the target and source domains, and then updates this model, with improved accuracy, as more evidence or information about the data sets becomes available.
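To make the Bayesian idea concrete, here is a minimal sketch under a much simpler setting than the study itself: a classifier’s unknown error rate is given a Beta prior whose pseudo-counts stand in for knowledge transferred from a related source domain, and that prior is then updated with a handful of labeled target-domain examples. All names and numbers below are illustrative assumptions, not the authors’ estimator.

```python
import numpy as np
from scipy import stats

# Minimal sketch (not the authors' estimator): Bayesian estimation of a
# classifier's error rate when only a few labeled target-domain samples exist.

# Prior pseudo-counts of (errors, correct predictions), standing in for
# knowledge transferred from a related source domain -- assumed values.
alpha_prior, beta_prior = 3.0, 17.0          # prior mean error rate = 0.15

# A small, synthetic target-domain evaluation set (illustrative only).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=12)
y_pred = np.where(rng.random(12) < 0.8, y_true, 1 - y_true)  # ~20% label flips

errors = int(np.sum(y_pred != y_true))
correct = len(y_true) - errors

# Conjugate Beta-Binomial update: posterior over the true error rate.
posterior = stats.beta(alpha_prior + errors, beta_prior + correct)

print(f"Posterior mean error estimate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

In the study itself, the relatedness between domains is encoded through a joint prior density over the source and target model parameters rather than through fixed pseudo-counts.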
Transfer learning has been used to build models in previous works; however, it had not previously been used to derive error estimators for evaluating the performance of those models. For efficient use, the devised estimator was implemented with advanced statistical methods that enable fast screening of source data sets, improving the computational efficiency of the transfer learning process by a factor of 10 to 20.
This technique can serve as a benchmark for future research within academia to build upon. In addition, it can help with identifying or classifying medical conditions that would otherwise be very difficult to diagnose. For example, Maddouri used this technique to classify patients with schizophrenia using transcriptomic data from brain tissue samples originally acquired through invasive brain biopsies. Because of the nature and location of the brain region that can be analyzed for this disorder, the available data are very limited. However, using a stringent feature selection procedure comprising differential gene expression analysis and statistical testing of the underlying assumptions, the research team identified transcriptomic profiles of three genes from an additional brain region that independent studies in the literature have reported to be highly relevant to the tissue of interest.
This knowledge allowed them to use the transfer learning technique to leverage samples collected from the second brain region (the source domain) to help with the analysis and significantly boost the accuracy of diagnosis within the original brain region (the target domain). Data gathered from the source domain can be exploited even when information from the target domain is scarce, allowing the research team to improve the quality of their conclusions.
    This research has been funded by the Department of Energy and the National Science Foundation.
    Story Source:
    Materials provided by Texas A&M University. Original written by Rachel Rose. Note: Content may be edited for style and length. More

  • Introducing organs-on-chips to the lymph system

Currently, there is little research focused on understanding the mechanisms of lymphatic vascular diseases or on discovering drugs to treat them. However, conditions such as lymphedema, a buildup of fluid in the body when the lymph system is damaged, affect more than 200,000 people every year in the United States alone.
Dr. Abhishek Jain, assistant professor in the Department of Biomedical Engineering at Texas A&M University, has taken his expertise in organ-on-chip models and applied it to a field where such models have never been used before, creating the first lymphangion-chip.
To engineer this new device, Jain’s team first developed a new technique to create microfluidic cylindrical blood or lymphatic vessels lined with endothelial cells, the cells that line blood vessels. The team could then use this technique to create a co-cultured, multicellular lymphangion, the functional unit of a lymph vessel, and successfully recreate a typical section of a lymphatic transport vessel in vitro, or outside the body.
    “We can now better understand how mechanical forces regulate lymphatic physiology and pathophysiology,” Jain said. “We can also understand what are the mechanisms that result in lymphedema, and then we can find new targets for drug discovery with this platform.”
    The project is in collaboration with Dr. David Zawieja from the Texas A&M College of Medicine. Their research was published in the Jan. 7 issue of the journal Lab on a Chip.
    “Collaborations with Dr. Zawieja and others in the department played a crucial role,” Jain said. “They introduced me to this topic and provide their longstanding expertise that has made it possible for us to create this new organ-on-chip platform and now advance it in these exciting directions using contemporary experimental models.”
Jain said the impact of this work is far-reaching because it offers new hope for patients with lymphatic diseases: researchers can now study the biology of these diseases and work toward the point where they can be treated.
    “The most exciting part of this research is that it is allowing us to now push the organ-on-chip in directions where finding cures for rare and orphan (understudied) diseases is possible with less effort and money,” Jain said. “We can help the pharma industry to invest in this platform and find a cure for lymphedema that impacts millions of people.”
    Story Source:
    Materials provided by Texas A&M University. Original written by Jennifer Reiley. Note: Content may be edited for style and length. More

  • Molecules, rare earths, and light: Innovative platform for quantum computers and communications

The ability to interact with light provides important functionalities for quantum systems, such as communicating over large distances, a key capability for future quantum computers. However, it is very difficult to find a material that can fully exploit the quantum properties of light. A research team from the CNRS and the Université de Strasbourg, with support from Chimie ParisTech-PSL [1] and in collaboration with German teams from KIT [2], has demonstrated the potential of a new rare-earth-based material as a photonic quantum system. The results, published on 9 March 2022 in Nature, show the promise of europium molecular crystals for quantum memories and computers.
While quantum technologies promise a revolution in the future, they remain complex to put in place. For example, quantum systems that can interact with light, providing processing functionalities for information and for communication over optical fibre in particular, remain rare. Such a platform [3] must ideally include an interface with light as well as information storage units, which is to say a memory. Information processing must also be possible within these units, which take the form of spins [4]. Developing materials that enable a link between spins and light at the quantum level has proven especially difficult.
A team of scientists from the CNRS and the Université de Strasbourg, with support from Chimie ParisTech-PSL and in collaboration with German teams from KIT, has successfully demonstrated the value of europium molecular crystals [5] for quantum communications and processors, thanks to their ultra-narrow optical transitions, which enable optimal interactions with light.
    These crystals are the combined product of two systems already used in quantum technology: rare earth ions (such as europium), and molecular systems. Rare-earth crystals are known for their excellent optical and spin properties, but their integration in photonic devices is complex. Molecular systems generally lack spins (a storage or computing unit), or on the contrary present optical lines that are too broad to establish a reliable link between spins and light.
    Europium molecular crystals represent a major advance, as they have ultra-narrow linewidths. This translates into long-lived quantum states, which were used to demonstrate the storage of a light pulse inside these molecular crystals. Moreover, a first building block for a quantum computer controlled by light has been obtained. This new material for quantum technologies offers previously unseen properties, and paves the way for new architectures for computers and quantum memories in which light will play a central role.
    The results also open broad prospects for research thanks to the many molecular compounds that can be synthesized.
    Notes
1 — Scientists from the following laboratories participated: the Institut de recherche de chimie Paris (CNRS/Chimie ParisTech-PSL), the Institut de physique et chimie des matériaux de Strasbourg (CNRS/Université de Strasbourg), the Institut de science et d’ingénierie supramoléculaire (CNRS/Université de Strasbourg), and the Centre européen de sciences quantiques.
2 — The Karlsruhe Institute of Technology (KIT), including the Institute of Quantum Materials and Technology (IQMT), the Institute of Nanotechnology (INT) and the Physikalisches Institut (PHI) in Germany, also participated.
    3 — A platform refers to a multifunctional quantum material.
    4 — Spin is one of the properties of particles, along with mass and electric charge, which determines their behaviour in a magnetic field.
    5 — Molecular crystals are perfectly ordered stacks of individual molecules.
    Story Source:
    Materials provided by CNRS. Note: Content may be edited for style and length. More

  • A 'zigzag' blueprint for topological electronics

A collaborative study led by the University of Wollongong confirms the switching mechanism for a proposed new generation of ultra-low-energy ‘topological electronics’.
Based on novel quantum topological materials, such devices would ‘switch’ a topological insulator from a non-conducting (conventional electrical insulator) state to a conducting (topological insulator) state, in which electrical current flows along its edge states without dissipating energy.
Such topological electronics could radically reduce the energy consumed in computing and electronics, which is estimated to account for 8% of global electricity use and to be doubling every decade.
    Led by Dr Muhammad Nadeem at the University of Wollongong (UOW), the study also brought in expertise from FLEET Centre collaborators at UNSW and Monash University.
Resolving the Switching Challenge and Introducing the TQFET
    Two-dimensional topological insulators are promising materials for topological quantum electronic devices where edge state transport can be controlled by a gate-induced electric field. More

  • Mathematical discovery could shed light on secrets of the Universe

    How can Einstein’s theory of gravity be unified with quantum mechanics? It is a challenge that could give us deep insights into phenomena such as black holes and the birth of the universe. Now, a new article in Nature Communications, written by researchers from Chalmers University of Technology, Sweden, and MIT, USA, presents results that cast new light on important challenges in understanding quantum gravity.
    A grand challenge in modern theoretical physics is to find a ‘unified theory’ that can describe all the laws of nature within a single framework — connecting Einstein’s general theory of relativity, which describes the universe on a large scale, and quantum mechanics, which describes our world at the atomic level. Such a theory of ‘quantum gravity’ would include both a macroscopic and microscopic description of nature.
“We strive to understand the laws of nature, and the language in which these are written is mathematics. When we seek answers to questions in physics, we are often led to new discoveries in mathematics too. This interaction is particularly prominent in the search for quantum gravity — where it is extremely difficult to perform experiments,” explains Daniel Persson, Professor at the Department of Mathematical Sciences at Chalmers University of Technology.
    An example of a phenomenon that requires this type of unified description is black holes. A black hole forms when a sufficiently heavy star expands and collapses under its own gravitational force, so that all its mass is concentrated in an extremely small volume. The quantum mechanical description of black holes is still in its infancy but involves spectacular advanced mathematics.
    A simplified model for quantum gravity
“The challenge is to describe how gravity arises as an ‘emergent’ phenomenon. Just as everyday phenomena — such as the flow of a liquid — emerge from the chaotic movements of individual droplets, we want to describe how gravity emerges from a quantum mechanical system at the microscopic level,” says Robert Berman, Professor at the Department of Mathematical Sciences at Chalmers University of Technology. More

  • Toward ever-more powerful microchips and supercomputers

The information age, built up over nearly 60 years, has given the world the internet, smartphones and lightning-fast computers. Making this possible has been the doubling, roughly every two years, of the number of transistors that can be packed onto a computer chip, giving rise to billions of atomic-scale transistors that now fit on a fingernail-sized chip. Such “atomic scale” lengths are so tiny that individual atoms can be seen and counted in them.
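As a rough back-of-the-envelope check of that cadence (an illustration added here, not a figure from the article), roughly 60 years of scaling at one doubling every two years gives about 30 doublings, i.e. a growth factor of around a billion:

```python
# Back-of-the-envelope check of the doubling cadence described above.
# Assumptions (illustrative, not from the article): ~60 years of scaling
# and one transistor-count doubling every 2 years.

years = 60
years_per_doubling = 2

doublings = years // years_per_doubling      # 30 doublings
growth_factor = 2 ** doublings               # 2**30 is roughly 1.07 billion

print(f"{doublings} doublings -> transistor counts grow by ~{growth_factor:,}x")
# Consistent with chips going from a handful of transistors to billions.
```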
    Physical limit
With this doubling now rapidly approaching a physical limit, the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) has joined industry efforts to extend the process and develop new ways to produce ever-more capable, efficient, and cost-effective chips. Laboratory scientists have now accurately predicted, through modeling, a key step in atomic-scale chip fabrication in the first PPPL study under a Cooperative Research and Development Agreement (CRADA) with Lam Research Corp., a worldwide supplier of chip-making equipment.
    “This would be one little piece in the whole process,” said David Graves, associate laboratory director for low-temperature plasma surface interactions, a professor in the Princeton Department of Chemical and Biological Engineering and co-author of a paper that outlines the findings in the Journal of Vacuum Science & Technology B. Insights gained through modeling, he said, “can lead to all sorts of good things, and that’s why this effort at the Lab has got some promise.”
    While the shrinkage can’t go on much longer, “it hasn’t completely reached an end,” he said. “Industry has been successful to date in using mainly empirical methods to develop innovative new processes but a deeper fundamental understanding will speed this process. Fundamental studies take time and require expertise industry does not always have,” he said. “This creates a strong incentive for laboratories to take on the work.”
The PPPL scientists modeled what is called “atomic layer etching” (ALE), an increasingly critical fabrication step that aims to remove material from a surface one atomic layer at a time. This process can be used to etch complex three-dimensional structures, with critical dimensions thousands of times thinner than a human hair, into a film on a silicon wafer.
    Basic agreement
    “The simulations basically agreed with experiments as a first step and could lead to improved understanding of the use of ALE for atomic-scale etching,” said Joseph Vella, a post-doctoral fellow at PPPL and lead author of the journal paper. Improved understanding will enable PPPL to investigate such things as the extent of surface damage and the degree of roughness developed during ALE, he said, “and this all starts with building our fundamental understanding of atomic layer etching.”
    The model simulated the sequential use of chlorine gas and argon plasma ions to control the silicon etch process on an atomic scale. Plasma, or ionized gas, is a mixture consisting of free electrons, positively charged ions and neutral molecules. The plasma used in semiconductor device processing is near room temperature, in contrast to the ultra-hot plasma used in fusion experiments.
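To illustrate the alternating sequence just described, here is a toy bookkeeping sketch of an ideal ALE cycle: a chlorine dose chemically modifies only the top silicon layer, and a low-energy argon-ion step then removes only that modified layer. It is a schematic illustration under assumed numbers, not the PPPL model described in the article.

```python
# Toy bookkeeping sketch of ideal atomic layer etching (ALE) -- not the PPPL
# simulation. Each cycle alternates a chlorine dose, which modifies only the
# topmost silicon layer, with a low-energy argon-ion step that removes only
# that modified layer. Numbers are assumed for illustration.

SI_LAYER_NM = 0.14   # assumed thickness removed per ideal cycle, in nanometers

def ideal_ale_cycle() -> float:
    """Run one idealized ALE cycle and return the depth etched (nm)."""
    # Step 1: chlorine gas dose -- chemically modifies the top layer only
    # (self-limiting: once the layer is chlorinated, further dosing does little).
    surface_modified = True
    # Step 2: argon plasma ions -- remove the modified layer and, ideally,
    # nothing beneath it. This self-limiting pair of steps is what gives ALE
    # its atomic-scale precision.
    return SI_LAYER_NM if surface_modified else 0.0

cycles = 50                                   # assumed process recipe
total_depth = sum(ideal_ale_cycle() for _ in range(cycles))
print(f"Depth etched after {cycles} ideal cycles: {total_depth:.1f} nm")
```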
    “A surprise empirical finding from Lam Research was that the ALE process became particularly effective when the ion energies were quite a bit higher than the ones we started with,” Graves said. “So that will be our next step in the simulations — to see if we can understand what’s happening when the ion energy is much higher and why it’s so good.”
    Going forward, “the semiconductor industry as a whole is contemplating a major expansion in the materials and the types of devices to be used, and this expansion will also have to be processed with atomic scale precision,” he said. “The U.S. goal is to lead the world in using science to tackle important industrial problems,” he said, “and our work is part of that.”
    This study was partially supported by the DOE Office of Science. Coauthors included David Humbird of DWH Consulting in Centennial, Colorado.
    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by John Greenwald. Note: Content may be edited for style and length. More

  • Researchers develop hybrid human-machine framework for building smarter AI

    From chatbots that answer tax questions to algorithms that drive autonomous vehicles and dish out medical diagnoses, artificial intelligence undergirds many aspects of daily life. Creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.
    “Humans and machine algorithms have complementary strengths and weaknesses. Each uses different sources of information and strategies to make predictions and decisions,” said co-author Mark Steyvers, UCI professor of cognitive sciences. “We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI — and vice versa. And this accuracy is higher than combining predictions from two individuals or two AI algorithms.”
    To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items — chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
    “In some cases, human participants were quite confident that a particular picture contained a chair, for example, while the AI algorithm was confused about the image,” said co-author Padhraic Smyth, UCI Chancellor’s Professor of computer science. “Similarly, for other images, the AI algorithm was able to confidently provide a label for the object shown, while human participants were unsure if the distorted picture contained any recognizable object.”
    When predictions and confidence scores from both were combined using the researchers’ new Bayesian framework, the hybrid model led to better performance than either human or machine predictions achieved alone.
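As a generic illustration of this kind of fusion (a minimal sketch, not the researchers’ published model), the example below combines a machine classifier’s probabilistic score with a human label whose assumed reliability depends on the stated confidence level, using a simple Bayes-style update. The confidence-to-reliability mapping and the example numbers are assumptions made for the sketch.

```python
# Minimal sketch of fusing a human judgment with a machine score for a binary
# image label ("chair" vs. "not chair"). This is a generic Bayes-style
# combination, not the UCI team's published framework; all numbers are assumed.

# Assumed probability that the human label is correct at each confidence level.
HUMAN_RELIABILITY = {"low": 0.55, "medium": 0.70, "high": 0.90}

def fuse(machine_p_chair: float, human_label: str, human_confidence: str) -> float:
    """Return the combined probability that the image contains a chair."""
    r = HUMAN_RELIABILITY[human_confidence]
    # Likelihood of observing the human's answer under each hypothesis.
    if human_label == "chair":
        like_chair, like_not = r, 1.0 - r
    else:
        like_chair, like_not = 1.0 - r, r
    # Treat the machine score as a prior and update with the human evidence.
    post_chair = machine_p_chair * like_chair
    post_not = (1.0 - machine_p_chair) * like_not
    return post_chair / (post_chair + post_not)

# Example: the machine is unsure (0.45) but the human confidently says "chair";
# the fused estimate (about 0.88) leans strongly toward "chair".
print(f"Fused P(chair) = {fuse(0.45, 'chair', 'high'):.2f}")
```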
    “While past research has demonstrated the benefits of combining machine predictions or combining human predictions — the so-called ‘wisdom of the crowds’ — this work forges a new direction in demonstrating the potential of combining human and machine predictions, pointing to new and improved approaches to human-AI collaboration,” Smyth said.
    This interdisciplinary project was facilitated by the Irvine Initiative in AI, Law, and Society. The convergence of cognitive sciences — which are focused on understanding how humans think and behave — with computer science — in which technologies are produced — will provide further insight into how humans and machines can collaborate to build more accurate artificially intelligent systems, the researchers said.
    Additional co-authors include Heliodoro Tejada, a UCI graduate student in cognitive sciences, and Gavin Kerrigan, a UCI Ph.D. student in computer science.
    Funding for this study was provided by the National Science Foundation under award numbers 1927245 and 1900644 and the HPI Research Center in Machine Learning and Data Science at UCI.
    Story Source:
    Materials provided by University of California – Irvine. Note: Content may be edited for style and length. More