More stories

  • New method of measuring qubits promises ease of scalability in a microscopic package

    Chasing ever-higher qubit counts in near-term quantum computers constantly demands new feats of engineering.
    Among the most troublesome hurdles in this scaling-up race is refining how qubits are measured. Devices called parametric amplifiers are traditionally used to perform these measurements. But as the name suggests, such an amplifier boosts the weak signals picked up from the qubits to conduct the readout, which introduces unwanted noise and can lead to decoherence of the qubits unless they are protected by additional large components. More importantly, the bulk of the amplification chain becomes technically challenging to work around as qubit counts grow in size-limited refrigerators.
    Cue the Aalto University research group Quantum Computing and Devices (QCD). They have a hefty track record of showing how thermal bolometers can be used as ultrasensitive detectors, and they just demonstrated in an April 10 Nature Electronics paper that bolometer measurements can be accurate enough for single-shot qubit readout.
    A new method of measuring
    To the chagrin of many physicists, the Heisenberg uncertainty principle dictates that one cannot simultaneously know a signal’s position and momentum, or voltage and current, with arbitrary accuracy. So it goes with qubit measurements conducted with parametric voltage-current amplifiers. But bolometric energy sensing is a fundamentally different kind of measurement, serving as a means of evading Heisenberg’s infamous rule. Since a bolometer measures power, or photon number, it is not bound to add quantum noise stemming from the Heisenberg uncertainty principle in the way that parametric amplifiers are.
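    The quantum noise in question is usually expressed through the standard quantum limit for phase-preserving amplifiers. As a brief aside (this is the textbook bound due to Caves, not a result quoted from the paper), the contrast can be summarized as:

    ```latex
    % Standard quantum limit: a phase-preserving amplifier with power gain G
    % must add input-referred noise of at least
    A \;\ge\; \frac{1}{2}\left|1 - \frac{1}{G}\right|
        \xrightarrow{\;G \gg 1\;} \frac{1}{2}
    % photons. A bolometer instead measures only the power P = n\hbar\omega
    % (photon number n), leaving the conjugate phase completely undetermined,
    % so this half-photon penalty does not apply to it.
    ```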
    Unlike amplifiers, bolometers very subtly sense microwave photons emitted from the qubit via a minimally invasive detection interface. The device is roughly 100 times smaller than its amplifier counterpart, making it extremely attractive as a measurement device.
    ‘When thinking of a quantum-supreme future, it is easy to imagine high qubit counts in the thousands or even millions could be commonplace. A careful evaluation of the footprint of each component is absolutely necessary for this massive scale-up. We have shown in the Nature Electronics paper that our nanobolometers could seriously be considered as an alternative to conventional amplifiers. In our very first experiments, we found these bolometers accurate enough for single-shot readout, free of added quantum noise, and they consume 10,000 times less power than the typical amplifiers — all in a tiny bolometer, the temperature-sensitive part of which can fit inside of a single bacterium,’ says Aalto University Professor Mikko Möttönen, who heads the QCD research group.

    Single-shot fidelity is an important metric physicists use to determine how accurately a device can detect a qubit’s state in just one measurement as opposed to an average of multiple measurements. In the case of the QCD group’s experiments, they were able to obtain a single-shot fidelity of 61.8% with a readout duration of roughly 14 microseconds. When correcting for the qubit’s energy relaxation time, the fidelity jumps up to 92.7%.
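    To make the metric concrete, here is a minimal sketch, not code from the paper, of how single-shot (assignment) fidelity is commonly computed from repeated prepare-and-measure trials; the misassignment rates below are invented placeholders, chosen only to land near the reported raw figure:

    ```python
    import numpy as np

    def single_shot_fidelity(outcomes_given_g, outcomes_given_e):
        """Standard assignment fidelity F = 1 - [P(e|g) + P(g|e)] / 2.

        outcomes_given_g: boolean readout results ("measured |e>") with the
        qubit prepared in |g>; outcomes_given_e: same, prepared in |e>.
        """
        p_e_given_g = np.mean(outcomes_given_g)        # |g> misread as |e>
        p_g_given_e = 1.0 - np.mean(outcomes_given_e)  # |e> misread as |g>
        return 1.0 - 0.5 * (p_e_given_g + p_g_given_e)

    # Invented error rates, picked so F comes out near the paper's ~62%:
    rng = np.random.default_rng(0)
    g_shots = rng.random(100_000) < 0.30  # 30% of |g> preparations misread
    e_shots = rng.random(100_000) < 0.55  # 55% of |e> preparations read as |e>
    print(f"F = {single_shot_fidelity(g_shots, e_shots):.3f}")  # ~0.625
    ```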
    ‘With minor modifications, we could expect to see bolometers approaching the desired 99.9% single-shot fidelity in 200 nanoseconds. For example, we can swap the bolometer material from metal to graphene, which has a lower heat capacity and can detect very small changes in its energy quickly. And by removing other unnecessary components between the bolometer and the chip itself, we can not only make even greater improvements on the readout fidelity, but we can achieve a smaller and simpler measurement device that makes scaling-up to higher qubit counts more feasible,’ says András Gunyhó, the first author on the paper and a doctoral researcher in the QCD group.
    Prior to demonstrating the high single-shot readout fidelity of bolometers in their most recent paper, the QCD research group first showed in 2019 that bolometers can be used for ultrasensitive, real-time microwave measurements. In 2020, they published a paper in Nature showing how bolometers made of graphene can shorten readout times to well below a microsecond.
    The work was carried out in the Research Council of Finland Centre of Excellence for Quantum Technology (QTF) using OtaNano research infrastructure in collaboration with VTT Technical Research Centre of Finland and IQM Quantum Computers. It was primarily funded by the European Research Council Advanced Grant ConceptQ and the Future Makers Program of the Jane and Aatos Erkko Foundation and the Technology Industries of Finland Centennial Foundation.

  • Breakthrough for next-generation digital displays

    Researchers at Linköping University, Sweden, have developed a digital display screen where the LEDs themselves react to touch, light, fingerprints and the user’s pulse, among other things. Their results, published in Nature Electronics, could be the start of a whole new generation of displays for phones, computers and tablets.
    “We’ve now shown that our design principle works. Our results show that there is great potential for a new generation of digital displays where new advanced features can be created. From now on, it’s about improving the technology into a commercially viable product,” says Feng Gao, professor in optoelectronics at Linköping University (LiU).
    Digital displays have become a cornerstone of almost all personal electronics. However, the most modern LCD and OLED screens on the market can only display information. To become a multi-function display that detects touch, fingerprints or changing lighting conditions, a variety of sensors are required that are layered on top of or around the display.
    Researchers at Linköping University have now developed a completely new type of display where all of the sensor functions are found in the display’s LEDs, without the need for any additional sensors.
    The LEDs are made of a crystalline material called perovskite. Its excellent ability to absorb and emit light is the key that enables the newly developed screen.
    In addition to the screen reacting to touch, light, fingerprints and the user’s pulse, the device can also be charged through the screen thanks to the perovskites’ ability to also act as solar cells.
    “Here’s an example — your smartwatch screen is off most of the time. During the off-time of the screen, instead of displaying information, it can harvest light to charge your watch, significantly extending how long you can go between charges,” says Chunxiong Bao, associate professor at Nanjing University, previously a postdoc researcher at LiU and the lead author of the paper.
    For a screen to display all colours, there need to be LEDs in three colours — red, green and blue — that glow with different intensities and thus produce thousands of different colours. The researchers at Linköping University have developed screens with perovskite LEDs in all three colours, paving the way for a screen that can display all colours within the visible light spectrum.
    But there are still many challenges to be solved before the screen is in everyone’s pocket. Zhongcheng Yuan, researcher at the University of Oxford, previously postdoc at LiU and the other lead author of the paper, believes that many of the problems will be solved within ten years:
    “For instance, the service life of perovskite LEDs needs to be improved. At present, the screen only works for a few hours before the material becomes unstable, and the LEDs go out,” he says.

  • Waterproof ‘e-glove’ could help scuba divers communicate

    When scuba divers need to say “I’m okay” or “Shark!” to their dive partners, they use hand signals to communicate visually. But sometimes these movements are difficult to see. Now, researchers reporting in ACS Nano have constructed a waterproof “e-glove” that wirelessly transmits hand gestures made underwater to a computer that translates them into messages. The new technology could someday help divers communicate better with each other and with boat crews on the surface.
    E-gloves — gloves fitted with electronic sensors that translate hand motions into information — are already in development, including designs that allow the wearer to interact with virtual reality environments or help people recovering from a stroke regain fine motor skills. However, rendering the electronic sensors waterproof for use in a swimming pool or the ocean, while also keeping the glove flexible and comfortable to wear, is a challenge. So Fuxing Chen, Lijun Qu, Mingwei Tian and colleagues wanted to create an e-glove capable of sensing hand motions when submerged underwater.
    The researchers began by fabricating waterproof sensors that rely on flexible microscopic pillars inspired by the tube-like feet of a starfish. Using laser writing tools, they created an array of these micropillars on a thin film of polydimethylsiloxane (PDMS), a waterproof plastic commonly used in contact lenses. After coating the PDMS array with a conductive layer of silver, the researchers sandwiched two of the films together with the pillars facing inward to create a waterproof sensor. The sensor — roughly the size of a USB-C port — is responsive when flexed and can detect a range of pressures, from the light touch of a dollar bill up to the impact of water streaming from a garden hose. The researchers packaged 10 of these waterproof sensors within self-adhesive bandages and sewed them over the knuckles and first finger joints of their e-glove prototype.
    To create a hand-gesture vocabulary for the researchers’ demonstration, a participant wearing the e-glove made 16 gestures, including “OK” and “Exit.” The researchers recorded the specific electronic signals generated by the e-glove sensors for each corresponding gesture. They applied a machine learning technique for translating sign language into words to create a computer program that could translate the e-glove gestures into messages. When tested, the program translated hand gestures made on land and underwater with 99.8% accuracy. In the future, the team says a version of this e-glove could help scuba divers communicate with visual hand signals even when they cannot clearly see their dive partners.
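    The summary does not specify the machine-learning model used, so, as an illustration only, here is a minimal sketch of the general shape of such a pipeline: per-sensor summary features feeding an off-the-shelf classifier. The feature layout, classifier choice, and all data below are assumptions, not details from the paper:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Assumed layout: one row per recorded gesture, two summary features
    # (e.g., mean and peak signal) per sensor for the glove's 10 sensors,
    # and 16 gesture classes as in the study's demonstration vocabulary.
    rng = np.random.default_rng(42)
    n_samples, n_sensors, n_classes = 1600, 10, 16
    X = rng.normal(size=(n_samples, 2 * n_sensors))  # placeholder signals
    y = rng.integers(0, n_classes, size=n_samples)   # placeholder labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    # Real, well-separated sensor traces are what make accuracies like the
    # reported 99.8% plausible; on random placeholders this prints ~1/16.
    ```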
    The authors acknowledge funding from the Shiyanjia Lab, National Key Research and Development Program, Taishan Scholar Program of Shandong Province in China, Shandong Province Key Research and Development Plan, Shandong Provincial Universities Youth Innovation Technology Plan Team, National Natural Science Foundation of China, Natural Science Foundation of Shandong Province of China, Shandong Province Science and Technology Small and Medium sized Enterprise Innovation Ability Enhancement Project, Natural Science Foundation of Qingdao, Qingdao Key Technology Research and Industrialization Demonstration Projects, Qingdao Shinan District Science and Technology Plan Project, and Suqian Key Research and Development Plan.

  • AI-assisted breast-cancer screening may reduce unnecessary testing

    Using artificial intelligence (AI) to supplement radiologists’ evaluations of mammograms may improve breast-cancer screening by reducing false positives without missing cases of cancer, according to a study by researchers at Washington University School of Medicine in St. Louis and Whiterabbit.ai, a Silicon Valley-based technology startup.
    The researchers developed an algorithm that identified normal mammograms with very high sensitivity. They then ran a simulation on patient data to see what would have happened if all of the very low-risk mammograms had been taken off radiologists’ plates, freeing the doctors to concentrate on the more questionable scans. The simulation revealed that fewer people would have been called back for additional testing but that the same number of cancer cases would have been detected.
    “False positives are when you call a patient back for additional testing, and it turns out to be benign,” explained senior author Richard L. Wahl, MD, a professor of radiology at Washington University’s Mallinckrodt Institute of Radiology (MIR) and a professor of radiation oncology. “That causes a lot of unnecessary anxiety for patients and consumes medical resources. This simulation study showed that very low-risk mammograms can be reliably identified by AI to reduce false positives and improve workflows.”
    The study is published April 10 in the journal Radiology: Artificial Intelligence.
    Wahl previously collaborated with Whiterabbit.ai on an algorithm to help radiologists judge breast density on mammograms to identify people who could benefit from additional or alternative screening. That algorithm received clearance from the Food and Drug Administration (FDA) in 2020 and is now marketed by Whiterabbit.ai as WRDensity.
    In this study, Wahl and colleagues at Whiterabbit.ai worked together to develop a way to rule out cancer using AI to evaluate mammograms. They trained the AI model on 123,248 2D digital mammograms (6,161 of which showed cancer) that were largely collected and read by Washington University radiologists. Then, they validated and tested the AI model on three independent sets of mammograms, two from institutions in the U.S. and one from an institution in the United Kingdom.
    First, the researchers figured out what the doctors did: how many patients were called back for secondary screening and biopsies; the results of those tests; and the final determination in each case. Then, they applied AI to the datasets to see what would have been different if AI had been used to remove negative mammograms in the initial assessments and physicians had followed standard diagnostic procedures to evaluate the rest.
    For example, consider the largest dataset, which contained 11,592 mammograms. When scaled to 10,000 mammograms (to make the math simpler for the purposes of the simulation), AI identified 34.9% as negative. If those 3,485 negative mammograms had been removed from the workload, radiologists would have made 897 callbacks for diagnostic exams, a reduction of 23.7% from the 1,159 they made in reality. At the next step, 190 people would have been called in a second time for biopsies, a reduction of 6.9% from the 200 in reality. At the end of the process, both the AI rule-out and real-world standard-of-care approaches identified the same 55 cancers. In other words, this study of AI suggests that out of 10,000 people who underwent initial mammograms, 262 could have avoided diagnostic exams, and 10 could have avoided biopsies, without any cancer cases being missed.
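    The counts in that workflow can be recomputed directly from the figures quoted above; a quick arithmetic check, using only numbers stated in the study summary:

    ```python
    # Largest dataset, scaled to 10,000 mammograms as in the simulation.
    callbacks_real = 1_159  # diagnostic-exam callbacks, standard of care
    callbacks_ai   = 897    # callbacks after AI removes 3,485 negatives
    biopsies_real  = 200
    biopsies_ai    = 190
    cancers_found  = 55     # identical under both approaches

    print("diagnostic exams avoided:", callbacks_real - callbacks_ai)  # 262
    print("biopsies avoided:", biopsies_real - biopsies_ai)            # 10
    print("cancers missed:", cancers_found - cancers_found)            # 0
    ```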
    “At the end of the day, we believe in a world where the doctor is the superhero who finds cancer and helps patients navigate their journey ahead,” said co-author Jason Su, co-founder and chief technology officer at Whiterabbit.ai. “The way AI systems can help is by being in a supporting role. By accurately assessing the negatives, it can help remove the hay from the haystack so doctors can find the needle more easily. This study demonstrates that AI can potentially be highly accurate in identifying negative exams. More importantly, the results showed that automating the detection of negatives may also lead to a tremendous benefit in the reduction of false positives without changing the cancer detection rate.”

  • Can the bias in algorithms help us see our own?

    Algorithms were supposed to make our lives easier and fairer: help us find the best job applicants, help judges impartially assess the risks of bail and bond decisions, and ensure that healthcare is delivered to the patients with the greatest need. By now, though, we know that algorithms can be just as biased as the human decision-makers they inform and replace.
    What if that weren’t a bad thing?
    New research by Carey Morewedge, a Boston University Questrom School of Business professor of marketing and Everett W. Lord Distinguished Faculty Scholar, found that people recognize more of their biases in algorithms’ decisions than they do in their own — even when those decisions are the same. The research, published in the Proceedings of the National Academy of Sciences, suggests ways that this awareness might help human decision-makers recognize and correct for their biases.
    “A social problem is that algorithms learn and, at scale, roll out biases in the human decisions on which they were trained,” says Morewedge, who also chairs Questrom’s marketing department. For example: In 2015, Amazon tested (and soon scrapped) an algorithm to help its hiring managers filter through job applicants. They found that the program boosted résumés it perceived to come from male applicants, and downgraded those from female applicants, a clear case of gender bias.
    But that same year, just 39 percent of Amazon’s workforce were women. If the algorithm had been trained on Amazon’s existing hiring data, it’s no wonder it prioritized male applicants — Amazon already was. If its algorithm had a gender bias, “it’s because Amazon’s managers were biased in their hiring decisions,” Morewedge says.
    “Algorithms can codify and amplify human bias, but algorithms also reveal structural biases in our society,” he says. “Many biases cannot be observed at an individual level. It’s hard to prove bias, for instance, in a single hiring decision. But when we add up decisions within and across persons, as we do when building algorithms, it can reveal structural biases in our systems and organizations.”
    Morewedge and his collaborators — Begüm Çeliktutan and Romain Cadario, both at Erasmus University in the Netherlands — devised a series of experiments designed to tease out people’s social biases (including racism, sexism, and ageism). The team then compared research participants’ recognition of how those biases colored their own decisions versus decisions made by an algorithm. In the experiments, participants sometimes saw the decisions of real algorithms. But there was a catch: other times, the decisions attributed to algorithms were actually the participants’ choices, in disguise.

    Across the board, participants were more likely to see bias in the decisions they thought came from algorithms than in their own decisions. Participants also saw as much bias in the decisions of algorithms as they did in the decisions of other people. (People generally better recognize bias in others than in themselves, a phenomenon called the bias blind spot.) Participants were also more likely to correct for bias in those decisions after the fact, a crucial step for minimizing bias in the future.
    Algorithms Remove the Bias Blind Spot
    The researchers ran sets of participants, more than 6,000 in total, through nine experiments. In the first, participants rated a set of Airbnb listings, which included a few pieces of information about each listing: its average star rating (on a scale of 1 to 5) and the host’s name. The researchers assigned these fictional listings to hosts with names that were “distinctively African American or white,” based on previous research identifying racial bias, according to the paper. The participants rated how likely they were to rent each listing.
    In the second half of the experiment, participants were told about a research finding that explained how the host’s race might bias the ratings. Then, the researchers showed participants a set of ratings and asked them to assess (on a scale of 1 to 7) how likely it was that bias had influenced the ratings.
    Participants saw either their own rating reflected back to them, their own rating under the guise of an algorithm’s, their own rating under the guise of someone else’s, or an actual algorithm rating based on their preferences.
    The researchers repeated this setup several times, testing for race, gender, age, and attractiveness bias in the profiles of Lyft drivers and Airbnb hosts. Each time, the results were consistent. Participants who thought they saw an algorithm’s ratings or someone else’s ratings (whether or not they actually were) were more likely to perceive bias in the results.

    Morewedge attributes this to the different evidence we use to assess bias in others and bias in ourselves. Since we have insight into our own thought process, he says, we’re more likely to trace back through our thinking and decide that it wasn’t biased, attributing the decision to some other factor instead. When analyzing the decisions of other people, however, all we have to judge is the outcome.
    “Let’s say you’re organizing a panel of speakers for an event,” Morewedge says. “If all those speakers are men, you might say that the outcome wasn’t the result of gender bias because you weren’t even thinking about gender when you invited these speakers. But if you were attending this event and saw a panel of all-male speakers, you’re more likely to conclude that there was gender bias in the selection.”
    Indeed, in one of their experiments, the researchers found that participants who were more prone to this bias blind spot were also more likely to see bias in decisions attributed to algorithms or others than in their own decisions. In another experiment, they discovered that people more easily saw their own decisions influenced by factors that were fairly neutral or reasonable, such as an Airbnb host’s star rating, compared to a prejudicial bias, such as race — perhaps because admitting to preferring a five-star rental isn’t as threatening to one’s sense of self or how others might view us, Morewedge suggests.
    Algorithms as Mirrors: Seeing and Correcting Human Bias
    In the researchers’ final experiment, they gave participants a chance to correct bias in either their ratings or the ratings of an algorithm (real or not). People were more likely to correct the algorithm’s decisions, which reduced the actual bias in its ratings.
    This is the crucial step for Morewedge and his colleagues, he says. For anyone motivated to reduce bias, being able to see it is the first step. Their research presents evidence that algorithms can be used as mirrors — a way to identify bias even when people can’t see it in themselves.
    “Right now, I think the literature on algorithmic bias is bleak,” Morewedge says. “A lot of it says that we need to develop statistical methods to reduce prejudice in algorithms. But part of the problem is that prejudice comes from people. We should work to make algorithms better, but we should also work to make ourselves less biased.
    “What’s exciting about this work is that it shows that algorithms can codify or amplify human bias, but algorithms can also be tools to help people better see their own biases and correct them,” he says. “Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help better ourselves.”

  • Could new technique for ‘curving’ light be the secret to improved wireless communication?

    While cellular networks and Wi-Fi systems are more advanced than ever, they are also quickly reaching their bandwidth limits. Scientists know that in the near future they’ll need to transition to much higher communication frequencies than what current systems rely on, but before that can happen there are a number of — quite literal — obstacles standing in the way.
    Researchers from Brown University and Rice University say they’ve come one step closer to getting around these solid obstacles, like walls, furniture and even people — and they do it by curving light.
    In a new study published in Communications Engineering, the researchers describe how they are helping address one of the biggest logjams emerging in wireless communication. Current systems rely on microwave radiation to carry data, but it’s become clear that the future standard for transmitting data will make use of terahertz waves, which have as much as 100 times the data-carrying capacity of microwaves. One longstanding issue has been that, unlike microwaves, terahertz signals can be blocked by most solid objects, making a direct line of sight between transmitter and receiver a logistical requirement.
    “Most people probably use a Wi-Fi base station that fills the room with wireless signals,” said Daniel Mittleman, a professor in Brown’s School of Engineering and senior author of the study. “No matter where they move, they maintain the link. At the higher frequencies that we’re talking about here, you won’t be able to do that anymore. Instead, it’s going to be a directional beam. If you move around, that beam is going to have to follow you in order to maintain the link, and if you move outside of the beam or something blocks that link, then you’re not getting any signal.”
    The researchers circumvented this by creating a terahertz signal that follows a curved trajectory around an obstacle, instead of being blocked by it. The novel method unveiled in the study could help revolutionize wireless communication and highlights the future feasibility of wireless data networks that run on terahertz frequencies, according to the researchers.
    “We want more data per second,” Mittleman said. “If you want to do that, you need more bandwidth, and that bandwidth simply doesn’t exist using conventional frequency bands.”
    In the study, Mittleman and his colleagues introduce the concept of self-accelerating beams: special configurations of electromagnetic waves that naturally bend or curve to one side as they move through space. Such beams have been studied at optical frequencies but are now being explored for terahertz communication.
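    Self-accelerating beams are most commonly exemplified by Airy beams. While the study’s specific terahertz beam profile is not reproduced in this summary, the textbook one-dimensional Airy solution shows where the ‘curving’ comes from:

    ```latex
    % Berry-Balazs / Airy beam: with normalized transverse coordinate s = x/x_0
    % and propagation distance \xi = z/(k x_0^2), the paraxial wave equation
    %   i\,\partial_\xi \phi + \tfrac{1}{2}\,\partial_s^2 \phi = 0
    % admits the solution
    \phi(s,\xi) = \mathrm{Ai}\!\left(s - \frac{\xi^2}{4}\right)
                  \exp\!\left[i\left(\frac{s\,\xi}{2} - \frac{\xi^3}{12}\right)\right]
    % whose main intensity lobe follows the parabola s = \xi^2/4: the beam's
    % peak bends sideways as it propagates, even though every plane-wave
    % component of it still travels in a straight line.
    ```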

    The researchers used this idea as a jumping-off point. They engineered transmitters with carefully designed patterns so that the system can manipulate the strength, intensity and timing of the electromagnetic waves it produces. With this ability to manipulate the light, the researchers make the waves work together to maintain the signal when a solid object blocks a portion of the beam. Essentially, the beam adjusts to the blockage by shuffling data along the patterns engineered into the transmitter: when one pattern is blocked, the data transfers to the next one, and then the next one if that is blocked, keeping the signal link fully intact. Without this level of control, the system can’t make any adjustments when the beam is blocked, and no signal gets through.
    This effectively makes the signal bend around objects as long as the transmitter is not completely blocked. If it is completely blocked, another way of getting the data to the receiver will be needed.
    “Curving a beam doesn’t solve all possible blockage problems, but what it does is solve some of them and it solves them in a way that’s better than what others have tried,” said Hichem Guerboukha, who led the study as a postdoctoral researcher at Brown and is now an assistant professor at the University of Missouri–Kansas City.
    The researchers validated their findings through extensive simulations and experiments in which the curved beams navigated around obstacles while maintaining communication links with high reliability and integrity. The work builds on a previous study from the team that showed terahertz data links can be bounced off walls in a room without dropping too much data.
    By using these curved beams, the researchers hope to one day make wireless networks more reliable, even in crowded or obstructed environments. This could lead to faster and more stable internet connections in places like offices or cities where obstacles are common. Before getting to that point, however, there’s much more basic research to be done and plenty of challenges to overcome as terahertz communication technology is still in its infancy.
    “One of the key questions that everybody asks us is how much can you curve and how far away,” Mittleman said. “We’ve done rough estimations of these things, but we haven’t really quantified it yet, so we hope to map it out.”

  • New technique lets scientists create resistance-free electron channels

    An international research team led by Lawrence Berkeley National Laboratory (Berkeley Lab) has taken the first atomic-resolution images and demonstrated electrical control of a chiral interface state — an exotic quantum phenomenon that could help researchers advance quantum computing and energy-efficient electronics.
    The chiral interface state is a conducting channel that allows electrons to travel in only one direction, preventing them from being scattered backwards and creating energy-wasting electrical resistance. Researchers are working to better understand the properties of chiral interface states in real materials, but visualizing their spatial characteristics has proved exceptionally difficult.
    But now, for the first time, atomic-resolution images captured by a research team at Berkeley Lab and UC Berkeley have directly visualized a chiral interface state. The researchers also demonstrated on-demand creation of these resistance-free conducting channels in a 2D insulator.
    Their work, which was reported in the journal Nature Physics, is part of Berkeley Lab’s broader push to advance quantum computing and other quantum information system applications, including the design and synthesis of quantum materials to address pressing technological needs.
    “Previous experiments have demonstrated that chiral interface states exist, but no one has ever visualized them with such high resolution. Our work shows for the first time what these 1D states look like at the atomic scale, including how we can alter them — and even create them,” said first author Canxun Zhang, a former graduate student researcher in Berkeley Lab’s Materials Sciences Division and the Department of Physics at UC Berkeley. He is now a postdoctoral researcher at UC Santa Barbara.
    Chiral interface states can occur in certain types of 2D materials known as quantum anomalous Hall (QAH) insulators that are insulators in bulk but conduct electrons without resistance at one-dimensional “edges” — the physical boundaries of the material and interfaces with other materials.
    To prepare chiral interface states, the team worked at Berkeley Lab’s Molecular Foundry to fabricate a device called twisted monolayer-bilayer graphene, which is a stack of two atomically thin layers of graphene rotated precisely relative to one another, creating a moiré superlattice that exhibits the QAH effect.
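    For a sense of scale (this is a standard geometric relation, not a figure from the paper, and the device’s actual twist angle is not quoted in this summary), two identical lattices twisted by a small angle produce a moiré pattern with period:

    ```latex
    % Moire period for lattice constant a (graphene: a = 0.246 nm) and
    % small twist angle \theta (in radians):
    \lambda_{\mathrm{moir\acute{e}}} = \frac{a}{2\sin(\theta/2)} \approx \frac{a}{\theta}
    % e.g. \theta = 1.2 degrees (about 0.021 rad) gives \lambda of roughly
    % 12 nm, dozens of times the atomic spacing: long-wavelength structure
    % of the kind that scanning tunneling microscopy can resolve directly.
    ```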

    In subsequent experiments at the UC Berkeley Department of Physics, the researchers used a scanning tunneling microscope (STM) to detect different electronic states in the sample, allowing them to visualize the wavefunction of the chiral interface state. Other experiments showed that the chiral interface state can be moved across the sample by modulating the voltage on a gate electrode placed underneath the graphene layers. In a final demonstration of control, the researchers showed that a voltage pulse from the tip of an STM probe can “write” a chiral interface state into the sample, erase it, and even rewrite a new one where electrons flow in the opposite direction.
    The findings may help researchers build tunable networks of electron channels with promise for energy-efficient microelectronics and low-power magnetic memory devices in the future, and for quantum computation making use of the exotic electron behaviors in QAH insulators.
    The researchers intend to use their technique to study more exotic physics in related materials, such as anyons, a new type of quasiparticle that could enable a route to quantum computation.
    “Our results provide information that wasn’t possible before. There is still a long way to go, but this is a good first step,” Zhang said.
    The work was led by Michael Crommie, a senior faculty scientist in Berkeley Lab’s Materials Sciences Division and a physics professor at UC Berkeley.
    Tiancong Zhu, a former postdoctoral researcher in the Crommie group at Berkeley Lab and UC Berkeley, contributed as co-corresponding author and is now a physics professor at Purdue University.
    The Molecular Foundry is a DOE Office of Science user facility at Berkeley Lab.
    This work was supported by the DOE Office of Science. Additional funding was provided by the National Science Foundation.

  • Will the convergence of light and matter in Janus particles transcend performance limitations in the optical display industry?

    A research team consisting of Professor Kyoung-Duck Park and Hyeongwoo Lee, an integrated PhD student, from the Department of Physics at Pohang University of Science and Technology (POSTECH) has pioneered an innovative technique in ultra-high-resolution spectroscopy. Their breakthrough marks the world’s first instance of electrically controlling polaritons — hybridized light-matter particles — at room temperature.
    Polaritons are “half-light half-matter” hybrid particles, having both the characteristics of photons — particles of light — and those of solid matter. Their unique characteristics exhibit properties distinct from both traditional photons and solid matter, unlocking the potential for next-generation materials, particularly in surpassing performance limitations of optical displays. Until now, the inability to electrically control polaritons at room temperature on a single particle level has hindered their commercial viability.
    The research team has devised a novel method called “electric-field tip-enhanced strong coupling spectroscopy,” enabling ultra-high-resolution electrically controlled spectroscopy. This new technique empowers the active manipulation of individual polariton particles at room temperature.
    This technique introduces a novel approach to measurement, integrating super-resolution microscopy previously invented by Prof. Kyoung-Duck Park’s team with ultra-precise electrical control. The resulting instrument not only facilitates stable generation of polaritons in a distinctive physical state called strong coupling at room temperature but also allows the color and brightness of the light emitted by the polariton particles to be manipulated with an electric field.
    Using polariton particles instead of quantum dots, the key materials of QLED televisions, offers a notable advantage: a single polariton particle can emit light in all colors with significantly enhanced brightness, eliminating the need for three distinct types of quantum dots to produce red, green, and blue light separately. Moreover, this property can be electrically controlled, much like conventional electronics. In terms of academic significance, the team has successfully established and experimentally validated the quantum-confined Stark effect in the strong-coupling regime, shedding light on a longstanding mystery in polariton research.
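    The strong-coupling regime mentioned here has a standard coupled-oscillator description; as a hedged sketch (the paper’s actual coupling strengths and detunings are not quoted in this summary):

    ```latex
    % An exciton at energy E_x and a cavity photon at E_c, coupled with
    % strength g, hybridize into upper and lower polariton branches
    E_{\pm} = \frac{E_c + E_x}{2} \pm \sqrt{g^2 + \frac{\delta^2}{4}},
    \qquad \delta = E_c - E_x
    % Strong coupling means the splitting 2g (at \delta = 0) exceeds the mean
    % linewidth of the two bare modes. An applied field shifts E_x via the
    % quantum-confined Stark effect, tuning \delta and hence the energy
    % (color) of the emitted light: the electrical control knob above.
    ```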
    The team’s accomplishment holds profound significance as it marks a scientific breakthrough, paving the way for the next generation of research aimed at creating diverse optoelectronic devices and optical components based on polariton technology. The breakthrough is poised to make a substantial contribution to industrial advancement, particularly by providing key source technology for the development of groundbreaking products within the optical display industry, including ultra-bright and compact outdoor displays. Hyeongwoo Lee, the lead author of the paper, emphasized the research’s importance, stating that it represents “a significant discovery with the potential to drive advancements across numerous fields including next-generation optical sensors, optical communications, and quantum photonic devices.”
    The research utilized quantum dots fabricated by Professor Sohee Jeong’s team and Professor Jaehoon Lim’s team from Sungkyunkwan University. The theoretical model was crafted by Professor Alexander Efros of the Naval Research Laboratory while data analysis was conducted by Professor Markus Raschke’s team from the University of Colorado and Professor Matthew Pelton’s team from the University of Maryland. Yeonjeong Koo, Jinhyuk Bae, Mingu Kang, Taeyoung Moon, and Huitae Joo from POSTECH’s Physics Department carried out the measurement work.
    This research has been recently published in Physical Review Letters, an international physics journal, and was conducted with support from the Samsung Future Technology Incubation Program.