More stories

  • Researchers discover a uniquely quantum effect in erasing information

    Researchers from Trinity College Dublin have discovered a uniquely quantum effect in erasing information that may have significant implications for the design of quantum computing chips. Their surprising discovery brings back to life the paradoxical “Maxwell’s demon,” which has tormented physicists for over 150 years.
    The thermodynamics of computation was brought to the fore in 1961 when Rolf Landauer, then at IBM, discovered a relationship between the dissipation of heat and logically irreversible operations. Landauer is known for the mantra “Information is Physical,” which reminds us that information is not abstract and is encoded on physical hardware.
    The “bit” is the currency of information (it can be either 0 or 1) and Landauer discovered that when a bit is erased there is a minimum amount of heat released. This is known as Landauer’s bound and is the definitive link between information theory and thermodynamics.
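    Landauer's bound is easy to state quantitatively: erasing one bit must release at least Q = k_B·T·ln 2 of heat at temperature T. A minimal Python sketch of that arithmetic (the 300 K room-temperature figure is illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum heat (in joules) dissipated when erasing one bit."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is tiny: about 2.9e-21 J per bit.
print(f"{landauer_bound(300):.3e} J per bit erased at 300 K")
```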
    Professor John Goold’s QuSys group at Trinity is analysing this topic with quantum computing in mind, asking what happens when a quantum bit (a qubit, which can be 0 and 1 at the same time) is erased.
    In newly published work in the journal Physical Review Letters, the group discovered that the quantum nature of the information to be erased can lead to large deviations in the heat dissipation that are not present in conventional bit erasure.
    Thermodynamics and Maxwell’s demon
    A century before Landauer’s discovery, scientists such as the Viennese physicist Ludwig Boltzmann and the Scottish physicist James Clerk Maxwell were formulating the kinetic theory of gases, reviving an old idea of the ancient Greeks by treating matter as made of atoms and deriving macroscopic thermodynamics from microscopic dynamics.
    Professor Goold says:
    “Statistical mechanics tells us that things like pressure and temperature, and even the laws of thermodynamics themselves, can be understood by the average behavior of the atomic constituents of matter. The second law of thermodynamics concerns something called entropy which, in a nutshell, is a measure of the disorder in a process. The second law tells us that in the absence of external intervention, all processes in the universe tend, on average, to increase their entropy and reach a state known as thermal equilibrium.
    “It tells us that, when mixed, two gases at different temperatures will reach a new state of equilibrium at the average temperature of the two. It is the ultimate law in the sense that every dynamical system is subject to it. There is no escape: all things will reach equilibrium, even you!”
    However, the founding fathers of statistical mechanics were trying to pick holes in the second law right from the beginning of the kinetic theory. Consider again the example of a gas in equilibrium: Maxwell imagined a hypothetical “neat-fingered” being with the ability to track and sort particles in a gas based on their speed.
    Maxwell’s demon, as the being became known, could quickly open and shut a trap door in a box containing a gas, letting hot particles through to one side of the box while restricting cold ones to the other. This scenario seems to contradict the second law of thermodynamics, as the overall entropy appears to decrease, and thus perhaps physics’ most famous paradox was born.
    But what about Landauer’s discovery about the heat-dissipated cost of erasing information? Well, it took another 20 years until that was fully appreciated, the paradox solved, and Maxwell’s demon finally exorcised.
    Landauer’s work inspired Charlie Bennett — also at IBM — to investigate the idea of reversible computing. In 1982 Bennett argued that the demon must have a memory, and that it is not the measurement but the erasure of the information in the demon’s memory that restores the second law in the paradox. And, as a result, the thermodynamics of computation was born.
    New findings
    Now, 40 years on, this is where the new work led by Professor Goold’s group comes to the fore, with the spotlight on quantum computation thermodynamics.
    In the recent paper, published with collaborator Harry Miller at the University of Manchester and two postdoctoral fellows in the QuSys group at Trinity, Mark Mitchison and Giacomo Guarnieri, the team carefully studied an experimentally realistic erasure process that allows for quantum superposition (the qubit can be in states 0 and 1 at the same time).
    Professor Goold explains:
    “In reality, computers function well away from Landauer’s bound for heat dissipation because they are not perfect systems. However, it is still important to think about the bound because as the miniaturisation of computing components continues, that bound becomes ever closer, and it is becoming more relevant for quantum computing machines. What is amazing is that with technology these days you can really study erasure approaching that limit.
    “We asked: ‘what difference does this distinctly quantum feature make for the erasure protocol?’ And the answer was something we did not expect. We found that even in an ideal erasure protocol — due to quantum superposition — you get very rare events which dissipate heat far greater than the Landauer limit.
    “In the paper we prove mathematically that these events exist and are a uniquely quantum feature. This is a highly unusual finding that could be really important for heat management on future quantum chips — although there is much more work to be done, in particular in analysing faster operations and the thermodynamics of other gate implementations.
    “Even in 2020, Maxwell’s demon continues to pose fundamental questions about the laws of nature.”

  • During COVID, scientists turn to computers to understand C4 photosynthesis

    When COVID closed down their lab in March, a team from the University of Essex turned to computational approaches to understand what makes some plants better adapted to transform light and carbon dioxide into yield through photosynthesis. They published their findings in the journal Frontiers in Plant Science.
    There are two kinds of photosynthesis: C3 and C4. Most food crops depend on C3 photosynthesis, in which carbon is fixed into sugar inside cells called the ‘mesophyll’, where oxygen is abundant. However, oxygen can hamper photosynthesis. C4 crops evolved specialized bundle sheath cells to concentrate carbon dioxide, which makes C4 photosynthesis as much as 60 percent more efficient.
    In this study, scientists wanted to find out how C4 crops are able to express several important enzymes inside bundle sheath cells instead of the mesophyll.
    “The ultimate goal is to be able to understand these mechanisms so that we can improve C3 photosynthesis in food crops like cowpea and cassava that smallholder farmers in sub-Saharan Africa depend on for their families’ food and income,” said Chidi Afamefule, a postdoctoral researcher working on Realizing Increased Photosynthetic Efficiency (RIPE) at Essex.
    Led by the University of Illinois at the Carl R. Woese Institute for Genomic Biology, RIPE aims to boost food production by improving photosynthesis with support from the Bill & Melinda Gates Foundation, Foundation for Food and Agriculture Research, and U.K. Foreign, Commonwealth & Development Office. The RIPE project and its sponsors are committed to ensuring Global Access and making the project’s technologies available to the farmers who need them the most.
    The team compared the DNA of four C3 grass crops (including barley and rice) and four C4 grass crops (including corn and sorghum). Their goal was to identify regions of DNA that might control the expression of four enzymes involved in photosynthesis. This study is likely the first comparison of the expression of these enzymes (SBPase, FBPase, PRK, and GAPDH) in C3 and C4 crops.
    “It would have been great to find a ‘master regulator’ that operates in all these enzymes, but we didn’t find it, and we suspect it doesn’t exist,” said Afamefule, who led the study from his apartment during the pandemic.
    Instead, they discovered C4 crops have several “activators” within their DNA that trigger expression in the bundle sheath and “repressors” that restrict expression in the mesophyll. They hope that they can use this genetic code to help less-efficient C3 crops photosynthesize better in the future.
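    The article does not describe the team's actual pipeline, but the flavor of such a cross-species comparison can be sketched: scan regulatory DNA from each group for short motifs that recur in the C4 promoters and not the C3 ones (candidate "activators"), or vice versa (candidate "repressors"). Everything below (the sequences, species labels, and the crude enrichment rule) is a hypothetical illustration, not data or code from the study:

```python
from collections import Counter

# Hypothetical promoter fragments for one photosynthesis enzyme gene,
# grouped by photosynthesis type. A real analysis would use full
# upstream regulatory sequences from each species' genome.
C3_PROMOTERS = {"rice": "TATAAGGCGTTA", "barley": "TATAAGGCTTGA"}
C4_PROMOTERS = {"corn": "TATACCGGAGTA", "sorghum": "GGTACCGGATTA"}

def motif_counts(promoters: dict, k: int = 6) -> Counter:
    """Count every k-mer across one group of promoter sequences."""
    counts = Counter()
    for seq in promoters.values():
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

c3, c4 = motif_counts(C3_PROMOTERS), motif_counts(C4_PROMOTERS)

# Candidate 'activator' motifs: k-mers recurring in C4 promoters but
# absent from C3 ones (a toy stand-in for a proper enrichment test).
activators = {m for m, n in c4.items() if n > 1 and m not in c3}
print(sorted(activators))  # ['ACCGGA', 'TACCGG'] for this toy data
```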
    “There are already efforts underway to help C3 crops operate more like C4 crops,” said principal investigator Christine Raines, a professor in the School of Life Sciences at Essex where she also serves as the Pro-Vice-Chancellor for Research. “Studies like this help us identify small pieces within an incredibly complex machine that we have to understand before we can fine-tune and reengineer it.”
    The next step is to validate these findings in the lab. The team returned to their lab benches on July 6, 2020, adhering to all recommended safety guidelines from the School of Life Sciences at Essex.
    Realizing Increased Photosynthetic Efficiency (RIPE) aims to improve photosynthesis and equip farmers worldwide with higher-yielding crops to ensure everyone has enough food to lead a healthy, productive life. RIPE is sponsored by the Bill & Melinda Gates Foundation, the U.S. Foundation for Food and Agriculture Research, and the U.K. Foreign, Commonwealth & Development Office.
    RIPE is led by the University of Illinois in partnership with The Australian National University, Chinese Academy of Sciences, Commonwealth Scientific and Industrial Research Organisation, Lancaster University, Louisiana State University, University of California, Berkeley, University of Cambridge, University of Essex, and U.S. Department of Agriculture, Agricultural Research Service.

  • All-terrain microrobot flips through a live colon

    A rectangular robot as tiny as a few human hairs can travel throughout a colon by doing back flips, Purdue University engineers have demonstrated in live animal models.
    Why the back flips? Because the goal is to use these robots to transport drugs in humans, whose colons and other organs have rough terrain. Side flips work, too.
    Why a back-flipping robot to transport drugs? Getting a drug directly to its target site could avoid side effects, such as hair loss or stomach bleeding, that the drug may otherwise cause by interacting with other organs along the way.
    The study, published in the journal Micromachines, is the first demonstration of a microrobot tumbling through a biological system in vivo. Since it is too small to carry a battery, the microrobot is powered and wirelessly controlled from the outside by a magnetic field.
    “When we apply a rotating external magnetic field to these robots, they rotate just like a car tire would to go over rough terrain,” said David Cappelleri, a Purdue associate professor of mechanical engineering. “The magnetic field also safely penetrates different types of mediums, which is important for using these robots in the human body.”
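    The control idea in that quote can be sketched in a few lines: command a field that rotates in a vertical plane, and the magnetic torque on the robot's built-in magnetic moment rolls it end over end. The field strength, rotation rate, and moment below are illustrative values, not the parameters used at Purdue:

```python
import math

def rotating_field(t, B0=0.01, freq=2.0):
    """Field (Bx, By, Bz) in tesla rotating in the x-z plane at `freq`
    revolutions per second; this drives tumbling that advances along x."""
    phase = 2 * math.pi * freq * t
    return (B0 * math.cos(phase), 0.0, B0 * math.sin(phase))

def torque(m, B):
    """Magnetic torque tau = m x B on a magnetic moment m (in A*m^2)."""
    mx, my, mz = m
    Bx, By, Bz = B
    return (my * Bz - mz * By, mz * Bx - mx * Bz, mx * By - my * Bx)

moment = (1e-9, 0.0, 0.0)  # robot magnetized along its long axis
for step in range(4):      # sample the first quarter revolution
    t = step * 0.03125     # seconds
    B = rotating_field(t)
    print(t, torque(moment, B))
```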
    The researchers chose the colon for in vivo experiments because it has an easy point of entry — and it’s very messy.
    “Moving a robot around the colon is like using the people-walker at an airport to get to a terminal faster. Not only is the floor moving, but also the people around you,” said Luis Solorio, an assistant professor in Purdue’s Weldon School of Biomedical Engineering.
    “In the colon, you have all these fluids and materials that are following along the path, but the robot is moving in the opposite direction. It’s just not an easy voyage.”
    But this magnetic microrobot can successfully tumble throughout the colon despite these rough conditions, the researchers’ experiments showed. A video explaining the work is available on YouTube at https://youtu.be/9OsYpJFWnN8.
    The team conducted the in vivo experiments in the colons of live mice under anesthesia, inserting the microrobot in a saline solution through the rectum. They used ultrasound equipment to observe in real time how well the microrobot moved around.
    The researchers found that the microrobots could also tumble in colons excised from pigs, whose guts are similar to those of humans.
    “Moving up to large animals or humans may require dozens of robots, but that also means you can target multiple sites with multiple drug payloads,” said Craig Goergen, Purdue’s Leslie A. Geddes Associate Professor of Biomedical Engineering, whose research group led work on imaging the microrobot through various kinds of tissue.
    Solorio’s lab tested the microrobot’s ability to carry and release a drug payload in a vial of saline. The researchers coated the microrobot with a fluorescent mock drug, which the microrobot successfully carried throughout the solution in a tumbling motion before the payload slowly diffused from its body an hour later.
    “We were able to get a nice, controlled release of the drug payload. This means that we could potentially steer the microrobot to a location in the body, leave it there, and then allow the drug to slowly come out. And because the microrobot has a polymer coating, the drug wouldn’t fall off before reaching a target location,” Solorio said.
    The magnetic microrobots, cheaply made of polymer and metal, are nontoxic and biocompatible, the study showed. Cappelleri’s research group designed and built each of these robots using facilities at the Birck Nanotechnology Center in Purdue’s Discovery Park.
    Commonly used roll-to-roll manufacturing machinery could potentially produce hundreds of these microrobots at once, Cappelleri said.
    The researchers believe that the microrobots could act as diagnostic tools in addition to drug delivery vehicles.
    “From a diagnostic perspective, these microrobots might prevent the need for minimally invasive colonoscopies by helping to collect tissue. Or they could deliver payloads without having to do the prep work that’s needed for traditional colonoscopies,” Goergen said.

  • Making new materials using AI

    There is an old saying: “If rubber is the material that opened the way to the ground, aluminum is the one that opened the way to the sky.” New materials have been discovered at every turning point in human history. Materials used in memory devices are likewise evolving rapidly with the emergence of new materials such as doped silicon, resistance-switching materials, and materials that spontaneously magnetize and polarize. How are these new materials made? A research team from POSTECH has used artificial intelligence to reveal the mechanism behind making materials for new memory devices.
    The research team, led by Professor Si-Young Choi of the Department of Materials Science and Engineering and Professor Daesu Lee of the Department of Physics at POSTECH, succeeded in synthesizing a novel substance that produces electricity through polarization (a phenomenon in which the positions of the positive and negative charges within the crystal are separated) at room temperature, and confirmed the variation in its crystal structure by applying deep neural network analysis. The paper was published in a recent issue of Nature Communications.
    The atomic structures of perovskite oxides are often distorted, and their properties are determined by the resulting pattern of oxygen octahedral rotation (OOR). In fact, only a few stable OOR patterns exist at equilibrium, and this inevitably limits the properties and functions of perovskite oxides.
    The joint research team focused on a perovskite oxide called CaTiO3, which remains nonpolar (paraelectric) even at the absolute temperature of 0 K. Based on ab initio calculations, however, the team found that a unique OOR pattern that does not occur naturally could produce ferroelectricity, a strong polarization, at room temperature.
    With this insight, the research team succeeded in synthesizing a novel material (heteroepitaxial CaTiO3) that possesses ferroelectricity, by applying interface engineering to control the atomic structure at the interface and, with it, the material’s physical properties.
    In addition, deep neural network analysis was applied to examine the fine OOR patterns and atomic-structure variations of a few tens of picometers; various atomic structures were simulated, and the resulting data were used in the AI analysis to identify the artificially controlled OOR patterns.
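    The paper's analysis pipeline is not spelled out in this summary, but the core idea (train a deep network on simulated atomic-resolution images so it can read out displacements of a few tens of picometers) can be illustrated with a toy regression. The image model, network size, and training settings below are schematic stand-ins, not the group's actual code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate_image(dx_pm, size=16, px_pm=10.0):
    """Render one atomic column as a 2D Gaussian; dx_pm is its
    horizontal displacement in picometers, px_pm the pixel size."""
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size),
                            indexing="ij")
    cx = (size - 1) / 2 + dx_pm / px_pm
    cy = (size - 1) / 2
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * 2.0 ** 2))

def make_batch(n=64):
    dx = (torch.rand(n) - 0.5) * 60.0  # displacements in [-30, 30] pm
    imgs = torch.stack([simulate_image(d.item()) for d in dx]).unsqueeze(1)
    imgs = imgs + 0.02 * torch.randn_like(imgs)  # detector noise
    return imgs, dx.unsqueeze(1)

model = nn.Sequential(  # tiny CNN that regresses the displacement
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 16 * 16, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    imgs, target = make_batch()
    loss = nn.functional.mse_loss(model(imgs), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

imgs, target = make_batch(8)
print(torch.cat([model(imgs), target], dim=1))  # predicted vs true (pm)
```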
    “We have confirmed that we can create new physical phenomena that do not naturally occur by obtaining the unique OOR pattern through controlling the variation in its atomic structure,” remarked Professor Daesu Lee. “It is especially significant to see that the results of the convergent research of physics and new materials engineering enable calculations for material design, synthesis of novel materials, and analysis to understand new phenomena.”
    Professor Si-Young Choi explained, “By applying deep machine learning to materials research, we have successfully identified atomic-scale variations of tens of picometers that are difficult to identify with the human eye.” He added, “It could be an advanced approach for materials analysis that can help to understand the mechanism for creating new materials with unique physical phenomena.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Scientists develop 'mini-brains' to help robots recognize pain and to self-repair

    Using a brain-inspired approach, scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a way for robots to have the artificial intelligence (AI) to recognise pain and to self-repair when damaged.
    The system has AI-enabled sensor nodes to process and respond to ‘pain’ arising from pressure exerted by a physical force. The system also allows the robot to detect and repair its own damage when minorly ‘injured’, without the need for human intervention.
    Currently, robots use a network of sensors to generate information about their immediate environment. For example, a disaster rescue robot uses camera and microphone sensors to locate a survivor under debris and then pulls the person out with guidance from touch sensors on their arms. A factory robot working on an assembly line uses vision to guide its arm to the right location and touch sensors to determine if the object is slipping when picked up.
    Today’s sensors typically do not process information but send it to a single large, powerful, central processing unit where learning occurs. As a result, existing robots are usually heavily wired, which results in delayed response times. They are also susceptible to damage that requires maintenance and repair, which can be long and costly.
    The new NTU approach embeds AI into the network of sensor nodes, connected to multiple small, less powerful processing units that act like ‘mini-brains’ distributed on the robotic skin. This means learning happens locally, and the robot’s wiring requirements and response time are reduced five to ten times compared with conventional robots, the scientists say.
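    The contrast drawn here (one central processor ingesting raw data from every sensor, versus many local 'mini-brains' that forward only meaningful events) is an architecture that can be sketched abstractly. The class, threshold, and numbers below are illustrative inventions, not NTU's design:

```python
import random

class SensorNode:
    """A 'mini-brain': processes its own pressure readings locally and
    only emits an event when it classifies the stimulus as 'pain'."""
    def __init__(self, node_id, pain_threshold=0.8):
        self.node_id = node_id
        self.pain_threshold = pain_threshold

    def sense(self, pressure):
        # Local processing: routine contact is handled on the node and
        # never crosses the network to the central controller.
        if pressure >= self.pain_threshold:
            return {"node": self.node_id, "event": "pain", "level": pressure}
        return None

skin = [SensorNode(i) for i in range(100)]  # nodes across the robot skin

random.seed(1)
readings = [(node, random.random()) for node in skin]
events = [node.sense(p) for node, p in readings]
events = [e for e in events if e is not None]

# Only the rare 'pain' events travel onward, instead of 100 raw
# pressure values every cycle: the wiring/latency saving in miniature.
print(f"{len(events)} events forwarded out of {len(readings)} readings")
```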
    Combining the system with a type of self-healing ion gel material means that the robots, when damaged, can recover their mechanical functions without human intervention.
    The breakthrough research by the NTU scientists was published in the peer-reviewed scientific journal Nature Communications in August.
    Co-lead author of the study, Associate Professor Arindam Basu from the School of Electrical & Electronic Engineering said, “For robots to work together with humans one day, one concern is how to ensure they will interact safely with us. For that reason, scientists around the world have been finding ways to bring a sense of awareness to robots, such as being able to ‘feel’ pain, to react to it, and to withstand harsh operating conditions. However, the complexity of putting together the multitude of sensors required and the resultant fragility of such a system is a major barrier for widespread adoption.”
    Assoc Prof Basu, who is a neuromorphic computing expert added, “Our work has demonstrated the feasibility of a robotic system that is capable of processing information efficiently with minimal wiring and circuits. By reducing the number of electronic components required, our system should become affordable and scalable. This will help accelerate the adoption of a new generation of robots in the marketplace.”
    Robust system enables ‘injured’ robot to self-repair
    To teach the robot how to recognise pain and learn damaging stimuli, the research team fashioned memtransistors, which are ‘brain-like’ electronic devices capable of memory and information processing, as artificial pain receptors and synapses.
    Through lab experiments, the research team demonstrated how the robot was able to learn to respond to injury in real time. They also showed that the robot continued to respond to pressure even after damage, proving the robustness of the system.
    When ‘injured’ with a cut from a sharp object, the robot quickly loses mechanical function. But the molecules in the self-healing ion gel begin to interact, causing the robot to ‘stitch’ its ‘wound’ together and to restore its function while maintaining high responsiveness.
    First author of the study, Rohit Abraham John, who is also a Research Fellow at the School of Materials Science & Engineering at NTU, said, “The self-healing properties of these novel devices help the robotic system to repeatedly stitch itself together when ‘injured’ with a cut or scratch, even at room temperature. This mimics how our biological system works, much like the way human skin heals on its own after a cut.
    “In our tests, our robot can ‘survive’ and respond to unintentional mechanical damage arising from minor injuries such as scratches and bumps, while continuing to work effectively. If such a system were used with robots in real world settings, it could contribute to savings in maintenance.”
    Associate Professor Nripan Mathews, who is co-lead author and from the School of Materials Science & Engineering at NTU, said, “Conventional robots carry out tasks in a structured programmable manner, but ours can perceive their environment, learning and adapting behaviour accordingly. Most researchers focus on making more and more sensitive sensors, but do not focus on the challenges of how they can make decisions effectively. Such research is necessary for the next generation of robots to interact effectively with humans.
    “In this work, our team has taken an approach that is off-the-beaten path, by applying new learning materials, devices and fabrication methods for robots to mimic the human neuro-biological functions. While still at a prototype stage, our findings have laid down important frameworks for the field, pointing the way forward for researchers to tackle these challenges.”
    Building on their previous body of work on neuromorphic electronics such as using light-activated devices to recognise objects, the NTU research team is now looking to collaborate with industry partners and government research labs to enhance their system for larger scale application.

  • What laser color do you like?

    Researchers at the National Institute of Standards and Technology (NIST) and the University of Maryland have developed a microchip technology that can convert invisible near-infrared laser light into any one of a panoply of visible laser colors, including red, orange, yellow and green. Their work provides a new approach to generating laser light on integrated microchips.
    The technique has applications in precision timekeeping and quantum information science, which often rely on atomic or solid-state systems that must be driven with visible laser light at precisely specified wavelengths. The approach suggests that a wide range of such wavelengths can be accessed using a single, small-scale platform, instead of requiring bulky, tabletop lasers or a series of different semiconductor materials. Constructing such lasers on microchips also provides a low-cost way to integrate lasers with miniature optical circuits needed for optical clocks and quantum communication systems.
    The study, reported in the October 20 issue of Optica, contributes to NIST on a Chip, a program that miniaturizes NIST’s state-of-the-art measurement-science technology, enabling it to be distributed directly to users in industry, medicine, defense and academia.
    Atomic systems that form the heart of the most precise and accurate experimental clocks and new tools for quantum information science typically rely on high-frequency visible (optical) laser light to operate, as opposed to the much lower frequency microwaves that are used to set official time worldwide.
    Scientists are now developing atomic optical system technologies that are compact and operate at low power so that they can be used outside the laboratory. While many different elements are required to realize such a vision, one key ingredient is access to visible-light laser systems that are small, lightweight and operate at low power.
    Although researchers have made great progress in creating compact, high-performance lasers at the near-infrared wavelengths used in telecommunications, it has been challenging to achieve equivalent performance at visible wavelengths. Some scientists have made strides by employing semiconductor materials to generate compact visible-light lasers. In contrast, Xiyuan Lu, Kartik Srinivasan and their colleagues at NIST and the University of Maryland in College Park adopted a different approach, focusing on a material called silicon nitride, which has a pronounced nonlinear response to light.
    Materials such as silicon nitride have a special property: If incoming light has high enough intensity, the color of the exiting light does not necessarily match the color of the light that entered. That is because when bound electrons in a nonlinear optical material interact with high-intensity incident light, the electrons re-radiate that light at frequencies, or colors, that differ from those of the incident light.
    (This effect stands in contrast to the everyday experience of seeing light bounce off a mirror or refract through a lens. In those cases, the color of the light always remains the same.)
    Lu and his colleagues employed a process known as third-order optical parametric oscillation (OPO), in which the nonlinear material converts incident light in the near-infrared into two different frequencies. One of the frequencies is higher than that of the incident light, placing it in the visible range, and the other is lower in frequency, extending deeper into the infrared. Although researchers have employed OPO for years to create different colors of light in large, table-top optical instruments, the new NIST-led study is the first to apply this effect to produce particular visible-light wavelengths on a microchip that has the potential for mass production.
    To miniaturize the OPO method, the researchers directed the near-infrared laser light into a microresonator, a ring-shaped device less than a millionth of a square meter in area and fabricated on a silicon chip. The light inside this microresonator circulates some 5,000 times before it dissipates, building a high enough intensity to access the nonlinear regime where it gets converted to the two different output frequencies.
    To create a multitude of visible and infrared colors, the team fabricated dozens of microresonators, each with slightly different dimensions, on each microchip. The researchers carefully chose these dimensions so that the different microresonators would produce output light of different colors. The team showed that this strategy enabled a single near-infrared laser that varied in wavelength by a relatively small amount to generate a wide range of specific visible-light and infrared colors.
    In particular, although the input laser operates over a narrow range of near-infrared wavelengths (from 780 nanometers to 790 nm), the microchip system generated visible-light colors ranging from green to red (560 nm to 760 nm) and infrared wavelengths ranging from 800 nm to 1,200 nm.
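    These numbers can be sanity-checked with energy conservation. In third-order OPO, two pump photons convert into one signal and one idler photon, so 2/λ_pump = 1/λ_signal + 1/λ_idler. A back-of-the-envelope sketch (not the paper's design calculation):

```python
def idler_wavelength(pump_nm, signal_nm):
    """Solve 2/pump = 1/signal + 1/idler for the idler wavelength (nm)."""
    return 1.0 / (2.0 / pump_nm - 1.0 / signal_nm)

# A 780 nm pump with a 560 nm green signal implies a ~1285 nm idler;
# pairing it with a 760 nm red signal implies an idler near 800 nm,
# roughly matching the infrared range reported above.
for signal in (560, 660, 760):
    print(signal, round(idler_wavelength(780, signal)))
```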
    “The benefit of our approach is that any one of these wavelengths can be accessed just by adjusting the dimensions of our microresonators,” said Srinivasan.
    “Though a first demonstration,” Lu said, “we are excited at the possibility of combining this nonlinear optics technique with well established near-infrared laser technology to create new types of on-chip light sources that can be used in a variety of applications.”

  • Assessing state of the art in AI for brain disease treatment

    Artificial intelligence is lauded for its ability to solve problems humans cannot, thanks to novel computing architectures that process large amounts of complex data quickly. As a result, AI methods, such as machine learning, computer vision, and neural networks, are applied to some of the most difficult problems in science and society.
    One tough problem is the diagnosis, surgical treatment, and monitoring of brain diseases. The range of AI technologies available for dealing with brain disease is growing fast, and exciting new methods are being applied to brain problems as computer scientists gain a deeper understanding of the capabilities of advanced algorithms.
    In a paper published this week in APL Bioengineering, by AIP Publishing, Italian researchers conducted a systematic literature review to understand the state of the art in the use of AI for brain disease. Their search yielded 2,696 results, and they narrowed their focus to the 154 most-cited papers for a closer look.
    Their qualitative review sheds light on the most interesting corners of AI development. For example, a generative adversarial network was used to synthetically create an aged brain in order to see how disease advances over time.
    “The use of artificial intelligence techniques is gradually bringing efficient theoretical solutions to a large number of real-world clinical problems related to the brain,” author Alice Segato said. “Especially in recent years, thanks to the accumulation of relevant data and the development of increasingly effective algorithms, it has been possible to significantly increase the understanding of complex brain mechanisms.”
    The authors’ analysis covers eight paradigms of brain care, examining the AI methods used to process information about the structure and connectivity of the brain, assess surgical candidacy, identify problem areas, predict disease trajectory, and provide intraoperative assistance. Image data used to study brain disease, including 3D data such as magnetic resonance imaging, diffusion tensor imaging, positron emission tomography, and computed tomography imaging, can be analyzed using computer vision AI techniques.
    But the authors urge caution, noting the importance of “explainable algorithms” with paths to solutions that are clearly delineated, not a “black box” — the term for AI that reaches an accurate solution but relies on inner workings that are little understood or invisible.
    “If humans are to accept algorithmic prescriptions or diagnosis, they need to trust them,” Segato said. “Researchers’ efforts are leading to the creation of increasingly sophisticated and interpretable algorithms, which could favor a more intensive use of ‘intelligent’ technologies in practical clinical contexts.”

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Temperature evolution of impurities in a quantum gas

    A new Monash-led theoretical study advances our understanding of the role of temperature in the quantum impurity problem.
    Quantum impurity theory studies the behaviour of deliberately introduced atoms (i.e., ‘impurities’) that behave as particularly ‘clean’ quasiparticles within a background atomic gas, allowing a controllable ‘perfect test bed’ study of quantum correlations.
    The study extends quantum impurity theory, which is of significant interest to the quantum-matter research community, into a new dimension — the thermal effect.
    “We have discovered a general relationship between two distinct experimental protocols, namely ejection and injection radio-frequency spectroscopy, where prior to our work no such relationship was known,” explains lead author Dr Weizhe Liu (Monash University School of Physics and Astronomy).
    QUANTUM IMPURITY THEORY
    Quantum impurity theory studies the effects of introducing atoms of one element (i.e., ‘impurities’) into an ultracold atomic gas of another element.
    For example, a small number of potassium atoms can be introduced into a ‘background’ quantum gas of lithium atoms.
    The introduced impurities (in this case, the potassium atoms) behave as a particularly ‘clean’ quasiparticle within the atomic gas.
    Interactions between the introduced impurity atoms and the background atomic gas can be ‘tuned’ via an external magnetic field, allowing investigation of quantum correlations.
    In recent years there has been an explosion of studies on the subject of quantum impurities immersed in different background mediums, thanks to their controllable realization in ultracold atomic gases.
    MODELLING ‘PUSH’ AND ‘PULL’ WITH RADIO-FREQUENCY PULSES
    “Our study is based on radio-frequency spectroscopy, modelling two different scenarios: ejection and injection,” says Dr Weizhe Liu, a Research Fellow with FLEET working in the group of A/Prof Meera Parish and Dr Jesper Levinsen.
    The team modelled the effect of radio-frequency pulses that would force impurity atoms from one spin state into another, unoccupied spin state.
    Under the ‘ejection’ scenario, radio-frequency pulses act on impurities in a spin state that strongly interact with the background medium, ‘pushing’ those impurities into a non-interacting spin state.
    The inverse ‘injection’ scenario ‘pulls’ impurities from a non-interacting state into an interacting state.
    These two spectroscopies are commonly used separately, to study distinctive aspects of the quantum impurity problem.
    Instead, the new Monash study shows that the ejection and injection protocols probe the same information.
    “We found that the two scenarios — ejection and injection — are related to each other by an exponential function of the free-energy difference between the interacting and non-interacting impurity states,” says Dr Liu.
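    The article does not reproduce the formula, but a relation that is exponential in the free-energy difference suggests a detailed-balance-like structure. Schematically (an illustrative guess at the form, not necessarily the paper's exact convention or prefactors):

```latex
\Gamma_{\mathrm{inj}}(\omega) \;\propto\;
  e^{\beta\,(\Delta F - \hbar\omega)}\,\Gamma_{\mathrm{ej}}(-\omega),
\qquad
\beta = \frac{1}{k_B T},
\qquad
\Delta F = F_{\mathrm{int}} - F_{\mathrm{non\text{-}int}},
```

    where Γ_inj and Γ_ej denote the injection and ejection spectral responses. Whatever the precise conventions, the practical upshot described in the article is that measuring one protocol determines the other once the free-energy difference ΔF and the temperature T are known.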