More stories

  • How to overcome noise in quantum computations

    Researchers Ludovico Lami (QuSoft, University of Amsterdam) and Mark M. Wilde (Cornell) have made significant progress in quantum computing by deriving a formula that predicts the effects of environmental noise. This is crucial for designing and building quantum computers capable of working in our imperfect world.
    The choreography of quantum computing
    Quantum computing uses the principles of quantum mechanics to perform calculations. Unlike classical computers, which use bits that can be either 0 or 1, quantum computers use quantum bits, or qubits, which can be in a superposition of 0 and 1 simultaneously.
    This allows quantum computers to perform certain types of calculations much faster than classical computers. For example, a quantum computer can factor very large numbers in a fraction of the time it would take a classical computer.
    While one could naively attribute such an advantage to the ability of a quantum computer to perform numerous calculations in parallel, the reality is more complicated. The quantum wave function of the quantum computer (which represents its physical state) possesses several branches, each with its own phase. A phase can be thought of as the position of the hand of a clock, which can point in any direction on the clockface.
    At the end of its computation, the quantum computer recombines the results of all computations it simultaneously carried out on different branches of the wave function into a single answer. “The phases associated to the different branches play a key role in determining the outcome of this recombination process, not unlike how the timing of a ballerina’s steps play a key role in determining the success of a ballet performance,” explains Lami.

    Disruptive environmental noise
    A significant obstacle to quantum computing is environmental noise. Such noise can be likened to a little demon that alters the phase of different branches of the wave function in an unpredictable way. This process of tampering with the phase of a quantum system is called dephasing, and can be detrimental to the success of a quantum computation.
    Dephasing can occur in everyday devices such as optical fibres, which are used to transfer information in the form of light. Light rays travelling through an optical fibre can take different paths; since each path is associated with a specific phase, not knowing the path taken amounts to an effective dephasing noise.
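    To make dephasing concrete, here is a minimal numerical sketch in Python (our own illustration of the simplest, single-qubit version of dephasing, not the bosonic channel analysed in the paper; the Gaussian phase noise is an assumption): random phase kicks leave the populations of a superposition untouched but wash out the off-diagonal coherence that encodes the interference between branches.
```python
import numpy as np

# A qubit in an equal superposition of |0> and |1>: the off-diagonal entries of
# its density matrix carry the relative phase between the two branches.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def dephase(rho, sigma, n_samples=20_000, rng=np.random.default_rng(0)):
    """Average over random phase kicks with standard deviation sigma (radians)."""
    out = np.zeros_like(rho, dtype=complex)
    for phi in rng.normal(0.0, sigma, n_samples):
        u = np.diag([1.0, np.exp(1j * phi)])   # phase kick on the |1> branch
        out += u @ rho @ u.conj().T
    return out / n_samples

for sigma in (0.1, 0.5, 2.0):
    noisy = dephase(rho, sigma)
    print(f"sigma = {sigma:.1f}  population of |0>: {noisy[0, 0].real:.3f}"
          f"  |coherence|: {abs(noisy[0, 1]):.3f}")

# The populations stay at 0.5, while the coherence 0.5 is suppressed by roughly
# exp(-sigma**2 / 2): this loss of phase information is dephasing.
```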
    In their new publication in Nature Photonics, Lami and Wilde analyse a model called the bosonic dephasing channel to study how noise affects the transmission of quantum information. It represents dephasing acting on a single mode of light at a definite wavelength and polarisation.
    The number quantifying the effect of the noise on quantum information is the quantum capacity, which is the number of qubits that can be safely transmitted per use of a fibre. The new publication provides a full analytical solution to the problem of calculating the quantum capacity of the bosonic dephasing channel, for all possible forms of dephasing noise.
    Longer messages overcome errors
    To overcome the effects of noise, one can incorporate redundancy in the message to ensure that the quantum information can still be retrieved at the receiving end. This is similar to saying “Alpha, Beta, Charlie” instead of “A, B, C” when speaking on the phone. Although the transmitted message is longer, the redundancy ensures that it is understood correctly.
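    The quantum codes the authors quantify go beyond a short snippet, but the redundancy principle itself can be seen in a hedged classical analogy: in the sketch below, each bit is sent three times and decoded by majority vote, so occasional flips on a noisy line are corrected at the cost of a longer message.
```python
import random

def encode(bits, r=3):
    """Repeat every bit r times (the redundancy)."""
    return [b for b in bits for _ in range(r)]

def noisy_channel(bits, p_flip=0.1, rng=random.Random(1)):
    """Flip each transmitted bit independently with probability p_flip."""
    return [b ^ (rng.random() < p_flip) for b in bits]

def decode(bits, r=3):
    """Majority vote over each block of r received bits."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

rng_msg = random.Random(0)
message = [rng_msg.randint(0, 1) for _ in range(1000)]

raw_errors = sum(a != b for a, b in zip(message, noisy_channel(message)))
coded = decode(noisy_channel(encode(message)))
coded_errors = sum(a != b for a, b in zip(message, coded))

print(f"errors without redundancy: {raw_errors} / 1000")
print(f"errors with 3x redundancy: {coded_errors} / 1000")
```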
    The new study quantifies exactly how much redundancy needs to be added to a quantum message to protect it from dephasing noise. This is significant because it enables scientists to quantify the effects of noise on quantum computing and develop methods to overcome these effects. More

  • Random matrix theory approaches the mystery of the neutrino mass

    When matter is divided into smaller and smaller pieces, what you are eventually left with, once it cannot be divided any further, is an elementary particle. Twelve elementary matter particles are currently known: six flavors of quarks and six flavors of leptons. The leptons are grouped into three generations, each containing one charged lepton and one neutral lepton, giving the electron, muon and tau together with their corresponding neutrinos (the electron, muon and tau neutrinos). In the Standard Model, the masses of the three generations of neutrinos are represented by a three-by-three matrix.
    A research team led by Professor Naoyuki Haba of the Osaka Metropolitan University Graduate School of Science analyzed the neutrino mass matrix that describes the lepton sector. Neutrinos are known to show far smaller mass differences between generations than the other elementary particles, so the team took the three generations of neutrinos to be roughly equal in mass. They analyzed the neutrino mass matrix by assigning each of its elements at random and showed theoretically, using this random mass matrix model, that the lepton flavor mixings come out large.
    “Clarifying the properties of elementary particles leads to the exploration of the universe and ultimately to the grand theme of where we came from!” Professor Haba explained. “Beyond the remaining mysteries of the Standard Model, there is a whole new world of physics.”
    After studying neutrino mass anarchy in the Dirac neutrino, seesaw, and double seesaw models, the researchers found that the anarchy approach requires the measure of the matrix to obey a Gaussian distribution. Having considered several models of light neutrino mass in which the matrix is a product of several random matrices, the team was able to show, as far as is possible at this stage, why the calculated squared differences of the neutrino masses come closest to the experimental results for the seesaw model with random Dirac and Majorana matrices.
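    A hedged sketch of the kind of numerical experiment behind the anarchy approach (our own illustration, not the team's code; the units and sample size are arbitrary): draw Gaussian random Dirac and Majorana matrices, apply the type-I seesaw formula, and look at how hierarchical the resulting masses are.
```python
import numpy as np

rng = np.random.default_rng(42)

def random_complex(n=3):
    """n x n matrix with independent Gaussian real and imaginary parts (the 'anarchy' measure)."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def seesaw_masses():
    m_dirac = random_complex()                           # random Dirac mass matrix (arbitrary units)
    m_major = random_complex()
    m_major = m_major + m_major.T                        # a Majorana mass matrix is symmetric
    m_nu = m_dirac @ np.linalg.inv(m_major) @ m_dirac.T  # type-I seesaw formula
    # The singular values of the (symmetric) light-neutrino matrix are the three masses.
    return np.sort(np.linalg.svd(m_nu, compute_uv=False))

ratios = []
for _ in range(5000):
    m1, m2, m3 = seesaw_masses()
    ratios.append((m2**2 - m1**2) / (m3**2 - m1**2))  # ratio of the two mass-squared splittings

# The seesaw with random matrices tends to give hierarchical masses, i.e. a small
# ratio of mass-squared splittings (the measured value is roughly 0.03).
print("median ratio of mass-squared splittings:", round(float(np.median(ratios)), 3))
```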
    “In this study, we showed that the neutrino mass hierarchy can be mathematically explained using random matrix theory. However, this proof is not mathematically complete and is expected to be rigorously proven as random matrix theory continues to develop,” said Professor Haba. “In the future, we will continue with our challenge of elucidating the three-generation copy structure of elementary particles, the essential nature of which is still completely unknown both theoretically and experimentally.” More

  • A new type of photonic time crystal gives light a boost

    Researchers have developed a way to create photonic time crystals and shown that these bizarre, artificial materials amplify the light that shines on them. These findings, described in a paper in Science Advances, could lead to more efficient and robust wireless communications and significantly improved lasers.
    Time crystals were first conceived by Nobel laureate Frank Wilczek in 2012. Mundane, familiar crystals have a structural pattern that repeats in space, but in a time crystal, the pattern repeats in time instead. While some physicists were initially sceptical that time crystals could exist, recent experiments have succeeded in creating them. Last year, researchers at Aalto University’s Low Temperature Laboratory created paired time crystals that could be useful for quantum devices.
    Now, another team has made photonic time crystals, which are time-based versions of optical materials. The researchers created photonic time crystals that operate at microwave frequencies, and they showed that the crystals can amplify electromagnetic waves. This ability has potential applications in various technologies, including wireless communication, integrated circuits, and lasers.
    So far, research on photonic time crystals has focused on bulk materials — that is, three-dimensional structures. This has proven enormously challenging, and the experiments haven’t gotten past model systems with no practical applications. So the team, which included researchers from Aalto University, the Karlsruhe Institute of Technology (KIT), and Stanford University, tried a new approach: building a two-dimensional photonic time crystal, known as a metasurface.
    ‘We found that reducing the dimensionality from a 3D to a 2D structure made the implementation significantly easier, which made it possible to realise photonic time crystals in reality,’ says Xuchen Wang, the study’s lead author, who was a doctoral student at Aalto and is currently at KIT.
    The new approach enabled the team to fabricate a photonic time crystal and experimentally verify the theoretical predictions about its behaviour. ‘We demonstrated for the first time that photonic time crystals can amplify incident light with high gain,’ says Wang.
    ‘In a photonic time crystal, the photons are arranged in a pattern that repeats over time. This means that the photons in the crystal are synchronized and coherent, which can lead to constructive interference and amplification of the light,’ explains Wang. The periodic arrangement of the photons means they can also interact in ways that boost the amplification.
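    A hedged toy illustration of why a medium modulated periodically in time can amplify a wave (a classical parametric oscillator, not the team's metasurface model; all parameters are made up for illustration): when the resonator's frequency is pumped at twice its natural value, the oscillation grows exponentially instead of simply ringing.
```python
import numpy as np
from scipy.integrate import solve_ivp

omega0 = 2 * np.pi          # natural frequency of the resonator (arbitrary units)
eps = 0.2                   # depth of the periodic-in-time modulation

def rhs(t, y):
    """Mathieu-type oscillator whose frequency is modulated at twice its natural value."""
    x, v = y
    omega_sq = omega0**2 * (1 + eps * np.cos(2 * omega0 * t))
    return [v, -omega_sq * x]

sol = solve_ivp(rhs, [0, 40], [1e-3, 0.0], max_step=0.01)

early = np.max(np.abs(sol.y[0][sol.t < 5]))
late = np.max(np.abs(sol.y[0][sol.t > 35]))
print(f"oscillation amplitude grew by a factor of about {late / early:.0f}")

# Pumping the system periodically in time feeds energy into the wave (parametric
# resonance), the same basic mechanism by which a photonic time crystal amplifies light.
```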
    Two-dimensional photonic time crystals have a range of potential applications. By amplifying electromagnetic waves, they could make wireless transmitters and receivers more powerful or more efficient. Wang points out that coating surfaces with 2D photonic time crystals could also help with signal decay, which is a significant problem in wireless transmission. Photonic time crystals could also simplify laser designs by removing the need for bulk mirrors that are typically used in laser cavities.
    Another application emerges from the finding that 2D photonic time crystals don’t just amplify electromagnetic waves that hit them in free space but also waves travelling along the surface. Surface waves are used for communication between electronic components in integrated circuits. ‘When a surface wave propagates, it suffers from material losses, and the signal strength is reduced. With 2D photonic time crystals integrated into the system, the surface wave can be amplified, and communication efficiency enhanced,’ says Wang. More

  • Social consequences of using AI in conversations

    Cornell University researchers have found people have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool.
    The study, published in Scientific Reports, examined how the use of AI in conversations impacts the way that people express themselves and view each other.
    “Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Malte Jung, associate professor of information science. “We do not live and work in isolation, and the systems we use impact our interactions with others.”
    However, in addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.
    “I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” said Jess Hohenstein, lead author and postdoctoral researcher. “This illustrates the persistent overall suspicion that people seem to have around AI.”
    For their first experiment, the researchers developed a smart-reply platform the group called “Moshi” (Japanese for “hello”), patterned after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, unveiled in 2016. Smart replies are generated by a large language model (LLM), which predicts plausible next responses in chat-based interactions.
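    The study's Moshi platform is not public, but the basic mechanism can be sketched with an off-the-shelf language model; the model choice, prompt format and decoding settings below are our assumptions for illustration, not the paper's setup.
```python
# pip install transformers torch
from transformers import pipeline

# A small general-purpose model, used purely for illustration; the study's own
# smart-reply system ("Moshi") and the model behind it are not public.
generator = pipeline("text-generation", model="distilgpt2")

chat_history = "A: I think the new policy goes too far.\nB:"
candidates = generator(
    chat_history,
    max_new_tokens=15,
    num_return_sequences=3,
    do_sample=True,
    temperature=0.9,
    pad_token_id=50256,          # GPT-2's end-of-text id, avoids a padding warning
)

for i, c in enumerate(candidates, start=1):
    # Keep only the newly generated text up to the first line break.
    reply = c["generated_text"][len(chat_history):].strip().split("\n")[0]
    print(f"suggested reply {i}: {reply}")
```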
    Participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.
    Researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).
    But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.
    “While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”
    Said Jung: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”
    This work was supported by the National Science Foundation. More

  • Is artificial intelligence better at assessing heart health?

    Who can assess and diagnose cardiac function best after reading an echocardiogram: artificial intelligence (AI) or a sonographer?
    According to Cedars-Sinai investigators and their research published today in the peer-reviewed journal Nature, AI proved superior in assessing and diagnosing cardiac function when compared with echocardiogram assessments made by sonographers.
    The findings are based on a first-of-its-kind, blinded, randomized clinical trial of AI in cardiology led by investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai.
    “The results have immediate implications for patients undergoing cardiac function imaging as well as broader implications for the field of cardiac imaging,” said cardiologist David Ouyang, MD, principal investigator of the clinical trial and senior author of the study. “This trial offers rigorous evidence that utilizing AI in this novel way can improve the quality and effectiveness of echocardiogram imaging for many patients.”
    Investigators are confident that this technology will be found beneficial when deployed across the clinical system at Cedars-Sinai and health systems nationwide.
    “This successful clinical trial sets a superb precedent for how novel clinical AI algorithms can be discovered and tested within health systems, increasing the likelihood of seamless deployment for improved patient care,” said Sumeet Chugh, MD, director of the Division of Artificial Intelligence in Medicine and the Pauline and Harold Price Chair in Cardiac Electrophysiology Research.

    In 2020, researchers at the Smidt Heart Institute and Stanford University developed one of the first AI technologies to assess cardiac function, specifically, left ventricular ejection fraction — the key heart measurement used in diagnosing cardiac function. Their research also was published in Nature.
    Building on those findings, the new study assessed whether AI was more accurate in evaluating 3,495 transthoracic echocardiogram studies by comparing initial assessment by AI or by a sonographer — also known as an ultrasound technician.
    Among the findings: Cardiologists more frequently agreed with the AI initial assessment, making corrections to only 16.8% of the initial assessments made by AI versus 27.2% of those made by the sonographers. The physicians were unable to tell which assessments were made by AI and which were made by sonographers, and the AI assistance saved both cardiologists and sonographers time.
    “We asked our cardiologists to guess if the preliminary interpretation was performed by AI or by a sonographer, and it turns out that they couldn’t tell the difference,” said Ouyang. “This speaks to the strong performance of the AI algorithm as well as the seamless integration into clinical software. We believe these are all good signs for future AI trial research in the field.”
    The hope, Ouyang says, is to save clinicians time and minimize the more tedious parts of the cardiac imaging workflow. The cardiologist, however, remains the final expert adjudicator of the AI model output.
    The clinical trial and subsequent published research also shed light on the opportunity for regulatory approvals.
    “This work raises the bar for artificial intelligence technologies being considered for regulatory approval, as the Food and Drug Administration has previously approved artificial intelligence tools without data from prospective clinical trials,” said Susan Cheng, MD, MPH, director of the Institute for Research on Healthy Aging in the Department of Cardiology at the Smidt Heart Institute and co-senior author of the study. “We believe this level of evidence offers clinicians extra assurance as health systems work to adopt artificial intelligence more broadly as part of efforts to increase efficiency and quality overall.” More

  • Robots predict human intention for faster builds

    Humans have a way of understanding others’ goals, desires and beliefs, a crucial skill that allows us to anticipate people’s actions. Taking bread out of the toaster? You’ll need a plate. Sweeping up leaves? I’ll grab the green trash can.
    This skill, often referred to as “theory of mind,” comes easily to us as humans, but is still challenging for robots. But, if robots are to become truly collaborative helpers in manufacturing and in everyday life, they need to learn the same abilities.
    In a new paper, a best paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), USC Viterbi computer science researchers aim to teach robots how to predict human preferences in assembly tasks, so they can one day help out on everything from building a satellite to setting a table.
    “When working with people, a robot needs to constantly guess what the person will do next,” said lead author Heramb Nemlekar, a USC computer science PhD student working under the supervision of Stefanos Nikolaidis, an assistant professor of computer science. “For example, if the robot thinks the person will need a screwdriver to assemble the next part, it can get the screwdriver ahead of time so that the person does not have to wait. This way the robot can help people finish the assembly much faster.”
    But, as anyone who has co-built furniture with a partner can attest, predicting what a person will do next is difficult: different people prefer to build the same product in different ways. While some people want to start with the most difficult parts to get them over with, others may want to start with the easiest parts to save energy.
    Making predictions
    Most of the current techniques require people to show the robot how they would like to perform the assembly, but this takes time and effort and can defeat the purpose, said Nemlekar. “Imagine having to assemble an entire airplane just to teach the robot your preferences,” he said.

    In this new study, however, the researchers found similarities in how an individual will assemble different products. For instance, if you start with the hardest part when building an Ikea sofa, you are likely to take the same tack when putting together a baby’s crib.
    So, instead of “showing” the robot their preferences in a complex task, they created a small assembly task (called a “canonical” task) that people can easily and quickly perform. In this case, putting together parts of a simple model airplane, such as the wings, tail and propeller.
    The robot “watched” the human complete the task using a camera placed directly above the assembly area, looking down. To detect the parts operated by the human, the system used AprilTags, similar to QR codes, attached to the parts.
    Then, the system used machine learning to learn a person’s preference based on their sequence of actions in the canonical task.
    “Based on how a person performs the small assembly, the robot predicts what that person will do in the larger assembly,” said Nemlekar. “For example, if the robot sees that a person likes to start the small assembly with the easiest part, it will predict that they will start with the easiest part in the large assembly as well.”
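    As a hedged illustration of the idea (a deliberate simplification, not the study's actual learning model), the sketch below infers a single preference feature, easiest-first versus hardest-first, from the observed canonical sequence and uses it to rank the parts of a larger assembly; all part names and difficulty scores are hypothetical.
```python
import numpy as np

# Difficulty scores for parts (hypothetical values for illustration).
canonical_difficulty = {"propeller": 1, "tail": 2, "wings": 3}
large_assembly_difficulty = {"seat": 1, "frame": 2, "engine": 4, "avionics": 5}

def easiest_first_score(observed_order, difficulty):
    """+1 if the person went strictly easiest-to-hardest, -1 for the reverse,
    values in between for mixed orders (rank correlation of order vs difficulty)."""
    ranks = np.argsort(np.argsort([difficulty[p] for p in observed_order]))
    positions = np.arange(len(observed_order))
    return np.corrcoef(ranks, positions)[0, 1]

def predict_order(difficulty, score):
    """Rank the larger assembly's parts according to the inferred preference."""
    return sorted(difficulty, key=difficulty.get, reverse=score < 0)

observed = ["propeller", "tail", "wings"]              # the canonical (small) task
pref = easiest_first_score(observed, canonical_difficulty)
print("inferred preference (easiest-first if > 0):", round(pref, 2))
print("predicted order for the large assembly:", predict_order(large_assembly_difficulty, pref))
```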
    Building trust

    In the researchers’ user study, their system was able to predict the actions that humans will take with around 82% accuracy.
    “We hope that our research can make it easier for people to show robots what they prefer,” said Nemlekar. “By helping each person in their preferred way, robots can reduce their work, save time and even build trust with them.”
    For instance, imagine you’re assembling a piece of furniture at home, but you’re not particularly handy and struggle with the task. A robot that has been trained to predict your preferences could provide you with the necessary tools and parts ahead of time, making the assembly process easier.
    This technology could also be useful in industrial settings where workers are tasked with assembling products on a mass scale, saving time and reducing the risk of injury or accidents. Additionally, it could help persons with disabilities or limited mobility to more easily assemble products and maintain independence.
    Quickly learning preferences
    The goal is not to replace humans on the factory floor, say the researchers. Instead, they hope this research will lead to significant improvements in the safety and productivity of assembly workers in human-robot hybrid factories. “Robots can perform the non-value-added or ergonomically challenging tasks that are currently being performed by workers,” they say.
    As for the next steps, the researchers plan to develop a method to automatically design canonical tasks for different types of assembly task. They also aim to evaluate the benefit of learning human preferences from short tasks and predicting their actions in a complex task in different contexts, for instance, personal assistance in homes.
    “While we observed that human preferences transfer from canonical to actual tasks in assembly manufacturing, I expect similar findings in other applications as well,” said Nikolaidis. “A robot that can quickly learn our preferences can help us prepare a meal, rearrange furniture or do house repairs, having a significant impact in our daily lives.” More

  • Here’s why the geometric patterns in salt flats worldwide look so similar

    From Death Valley to Chile to Iran, similarly sized polygons of salt form in playas all over the world — and subterranean fluid flows might be the key to solving the long-standing puzzle of why.

    Geometric shapes such as pentagons and hexagons spontaneously form in a wide range of geologic settings. Dried mud, ice and rock often crack into polygons, but these patterns tend to vary dramatically in size.

    So why are the salt polygons in playas around the world so persistently similar? The answer lies underground, physicist Jana Lasser and colleagues propose February 24 in Physical Review X. With sophisticated mathematical models, computer simulations and experiments performed at Owens Lake in California, the team connected what they saw on the surface with what is going on beneath.

    “Fluid flows and convection underground are uniquely able to explain why the patterns form,” says Lasser, of the Graz University of Technology in Austria.

    This 3-D approach was key to explaining the universality of salty polygons.

    Salt flats form in places where rainfall is scarce and there’s a lot of evaporation (SN: 12/5/07). Groundwater seeping up to the surface evaporates, leaving a crust of salts and other minerals that had been dissolved in the water. Most strikingly, this process results in low ridges of concentrated salt that divide the playa into polygons: mostly hexagons with a smattering of pentagons and other geometric shapes.

    The type of salt varies from one playa to another. Table salt, or sodium chloride, dominates in some playas, but others have more sulfate salts. And the salt crusts themselves range in thickness from a few millimeters to several meters. That variation seems to be why previous attempts to describe the playas’ patterns failed.

    Whether the crusts are meter- or millimeter-thick, salt pans feature polygons that are 1 to 2 meters across. Previous models based on cracking, expansion and other phenomena that describe how mud and rock fracture instead produce polygons with sizes that vary according to crust thickness.

    As groundwater evaporates from the surface, it concentrates salt in the remaining groundwater. That salty water, now denser and heavier, sinks, forcing other less dense water upward. Lasser and colleagues showed that over time, the circulation, known as convection, tends to push the descending plumes of saltier water into a network of vertical sheets. The surface above these sheets accrues more salt, so thick salt ridges grow there. Thinner crusts of salt form between, where less salty water upwells, spontaneously making the characteristic polygons shared by playas around the world.

    Computer simulations of the fluid dynamics beneath the surface of salt flats demonstrate how the sinking of high-salinity groundwater (purple plumes) forms distinctive polygons on the surface (red marks the areas with the highest downward flow). Image: J. Lasser et al/Physical Review X

    The equations the researchers used describe the relative salinity of the groundwater, the pressure within the fluid and the speed at which the water circulates. Computer simulations that embraced the full complexity of the 3-D problem started with no salt crust or polygons and produced something that looks very much like real playas.
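    As a hedged sketch of the type of model involved (the standard equations for salty groundwater convecting through a porous bed, not necessarily the exact formulation in the paper), the three ingredients the researchers describe, salinity, pressure and flow speed, are tied together roughly as follows:
```latex
% Darcy flow: the groundwater velocity responds to pressure gradients and to the
% weight of the water, which increases with its salinity S (saltier water sinks)
\mathbf{u} = -\frac{\kappa}{\mu}\,\bigl(\nabla p - \rho(S)\,\mathbf{g}\bigr),
\qquad \nabla \cdot \mathbf{u} = 0,

% the dissolved salt is carried along by the flow and spreads by diffusion
\frac{\partial S}{\partial t} + \frac{\mathbf{u}}{\phi} \cdot \nabla S = D\,\nabla^{2} S.

% Here \kappa is the permeability of the sediment, \mu the water's viscosity,
% \phi the porosity, D the salt diffusivity, and \rho(S) the salinity-dependent density.
```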

    “This fluid dynamical model makes much more sense than a model that ignores what’s happening beneath the surface,” says physicist Julyan Cartwright of the Spanish National Research Council, who is based in Granada and was not involved in the research.

    Tests at Owens Lake helped the team verify and refine the model. “Physics is so much more than just sitting in front of a computer,” Lasser says, “and I wanted to do something that involves experiments.”

    The lake dried up in the 1920s as water was diverted to Los Angeles. The deposited minerals on the remaining salt flat include large natural concentrations of arsenic, which blows away with the dust kicked up by wind — creating serious health hazards. Among other remediation efforts, brine has been pumped onto the lake bed to try to create a more stable salt crust (SN: 11/28/01). That human intervention gave the researchers the opportunity to test their ideas in a controlled way.

    “The whole area is destroyed,” Lasser says, “but for us it was the perfect research environment.” More

  • DMI allows magnon-magnon coupling in hybrid perovskites

    An international group of researchers has created a mixed magnon state in an organic hybrid perovskite material by utilizing the Dzyaloshinskii-Moriya-Interaction (DMI). The resulting material has potential for processing and storing quantum computing information. The work also expands the number of potential materials that can be used to create hybrid magnonic systems.
    In magnetic materials, quasi-particles called magnons direct the electron spin within the material. There are two types of magnons — optical and acoustic — which refer to the direction of their spin.
    “Both optical and acoustic magnons propagate spin waves in antiferromagnets,” says Dali Sun, associate professor of physics and member of the Organic and Carbon Electronics Lab (ORaCEL) at North Carolina State University. “But in order to use spin waves to process quantum information, you need a mixed spin wave state.”
    “Normally two magnon modes cannot generate a mixed spin state due to their different symmetries,” Sun says. “But by harnessing the DMI we discovered a hybrid perovskite with a mixed magnon state.” Sun is also a corresponding author of the research.
    The researchers accomplished this by adding an organic cation to the material, which created a particular interaction called the DMI. In short, the DMI breaks the symmetry of the material, allowing the spins to mix.
    The team utilized a copper based magnetic hybrid organic-inorganic perovskite, which has a unique octahedral structure. These octahedrons can tilt and deform in different ways. Adding an organic cation to the material breaks the symmetry, creating angles within the material that allow the different magnon modes to couple and the spins to mix.
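    A hedged toy picture of what letting the modes couple means (a generic two-mode hybridization model, not the paper's spin Hamiltonian; all frequencies are arbitrary): an off-diagonal term standing in for the DMI mixes the acoustic and optical magnon branches and opens an avoided crossing where they would otherwise intersect.
```python
import numpy as np

def hybridized_frequencies(w_acoustic, w_optical, g):
    """Eigenfrequencies of two coupled modes. g = 0: no mixing (symmetry intact);
    g > 0 (DMI-like symmetry breaking): the branches hybridize and repel each other."""
    freqs = []
    for wa, wo in zip(w_acoustic, w_optical):
        h = np.array([[wa, g],
                      [g, wo]])
        freqs.append(np.linalg.eigvalsh(h))   # ascending eigenfrequencies
    return np.array(freqs)

# Sweep an external magnetic field (arbitrary units) that tunes the two magnon
# branches through a crossing.
field = np.linspace(0.0, 1.0, 21)
w_acoustic = 2.0 + 3.0 * field    # acoustic branch rises with field
w_optical = 5.0 - 1.0 * field     # optical branch falls with field

for g, label in [(0.0, "no coupling"), (0.3, "DMI-like coupling")]:
    freqs = hybridized_frequencies(w_acoustic, w_optical, g)
    gap = np.min(freqs[:, 1] - freqs[:, 0])
    print(f"{label}: minimum splitting between the branches = {gap:.2f}")

# With the coupling switched on, the crossing opens into an avoided crossing of
# width 2g: the two magnon modes form the mixed state described above.
```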
    “Beyond the quantum implications, this is the first time we’ve observed broken symmetry in a hybrid organic-inorganic perovskite,” says Andrew Comstock, NC State graduate research assistant and first author of the research.
    “We found that the DMI allows magnon coupling in copper-based hybrid perovskite materials with the correct symmetry requirements,” Comstock says. “Adding different cations creates different effects. This work really opens up ways to create magnon coupling from a lot of different materials — and studying the dynamic effects of this material can teach us new physics as well.”
    The work appears in Nature Communications and was primarily supported by the U.S. Department of Energy’s Center for Hybrid Organic Inorganic Semiconductors for Energy (CHOISE). Chung-Tao Chou of the Massachusetts Institute of Technology is co-first author of the work. Luqiao Liu of MIT, and Matthew Beard and Haipeng Lu of the National Renewable Energy Laboratory are co-corresponding authors of the research. More