More stories

  • What laser color do you like?

    Researchers at the National Institute of Standards and Technology (NIST) and the University of Maryland have developed a microchip technology that can convert invisible near-infrared laser light into any one of a panoply of visible laser colors, including red, orange, yellow and green. Their work provides a new approach to generating laser light on integrated microchips.
    The technique has applications in precision timekeeping and quantum information science, which often rely on atomic or solid-state systems that must be driven with visible laser light at precisely specified wavelengths. The approach suggests that a wide range of such wavelengths can be accessed using a single, small-scale platform, instead of requiring bulky, tabletop lasers or a series of different semiconductor materials. Constructing such lasers on microchips also provides a low-cost way to integrate lasers with miniature optical circuits needed for optical clocks and quantum communication systems.
    The study, reported in the October 20 issue of Optica, contributes to NIST on a Chip, a program that miniaturizes NIST’s state-of-the-art measurement-science technology, enabling it to be distributed directly to users in industry, medicine, defense and academia.
    Atomic systems that form the heart of the most precise and accurate experimental clocks and new tools for quantum information science typically rely on high-frequency visible (optical) laser light to operate, as opposed to the much lower frequency microwaves that are used to set official time worldwide.
    Scientists are now developing atomic optical system technologies that are compact and operate at low power so that they can be used outside the laboratory. While many different elements are required to realize such a vision, one key ingredient is access to visible-light laser systems that are small, lightweight and operate at low power.
    Although researchers have made great progress in creating compact, high-performance lasers at the near-infrared wavelengths used in telecommunications, it has been challenging to achieve equivalent performance at visible wavelengths. Some scientists have made strides by employing semiconductor materials to generate compact visible-light lasers. In contrast, Xiyuan Lu, Kartik Srinivasan and their colleagues at NIST and the University of Maryland in College Park adopted a different approach, focusing on a material called silicon nitride, which has a pronounced nonlinear response to light.

    Materials such as silicon nitride have a special property: If incoming light has high enough intensity, the color of the exiting light does not necessarily match the color of the light that entered. That is because when bound electrons in a nonlinear optical material interact with high-intensity incident light, the electrons re-radiate that light at frequencies, or colors, that differ from those of the incident light.
    (This effect stands in contrast to the everyday experience of seeing light bounce off a mirror or refract through a lens. In those cases, the color of the light always remains the same.)
    Lu and his colleagues employed a process known as third-order optical parametric oscillation (OPO), in which the nonlinear material converts incident light in the near-infrared into two different frequencies. One of the frequencies is higher than that of the incident light, placing it in the visible range, and the other is lower in frequency, extending deeper into the infrared. Although researchers have employed OPO for years to create different colors of light in large, table-top optical instruments, the new NIST-led study is the first to apply this effect to produce particular visible-light wavelengths on a microchip that has the potential for mass production.
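    The output colors are tied together by energy conservation: in third-order OPO, two pump photons are converted into one higher-frequency (signal) photon and one lower-frequency (idler) photon. Here is a minimal sketch of that bookkeeping, using only wavelengths quoted in this article (the pairings are illustrative, not measured values):

    ```python
    # Third-order optical parametric oscillation (OPO) energy conservation:
    # two pump photons -> one signal photon + one idler photon, so, in vacuum
    # wavelengths: 2/lam_pump = 1/lam_signal + 1/lam_idler.
    # Simplified sketch; in the real device the microresonator's dispersion
    # selects which signal/idler pair actually oscillates.

    def idler_wavelength_nm(lam_pump_nm, lam_signal_nm):
        """Idler wavelength implied by energy conservation, in nm."""
        return 1.0 / (2.0 / lam_pump_nm - 1.0 / lam_signal_nm)

    # Wavelengths quoted in the article: pump 780-790 nm, visible signal 560-760 nm.
    for lam_pump, lam_signal in [(780.0, 560.0), (790.0, 760.0)]:
        print(f"pump {lam_pump} nm + signal {lam_signal} nm "
              f"-> idler {idler_wavelength_nm(lam_pump, lam_signal):.0f} nm")
    # Note: the extremes of the reported visible (560-760 nm) and infrared
    # (800-1200 nm) outputs come from different resonator geometries, so these
    # pairings are illustrative rather than measured values.
    ```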
    To miniaturize the OPO method, the researchers directed the near-infrared laser light into a microresonator, a ring-shaped device less than a millionth of a square meter in area and fabricated on a silicon chip. The light inside this microresonator circulates some 5,000 times before it dissipates, building a high enough intensity to access the nonlinear regime where it gets converted to the two different output frequencies.
    To create a multitude of visible and infrared colors, the team fabricated dozens of microresonators, each with slightly different dimensions, on each microchip. The researchers carefully chose these dimensions so that the different microresonators would produce output light of different colors. The team showed that this strategy enabled a single near-infrared laser that varied in wavelength by a relatively small amount to generate a wide range of specific visible-light and infrared colors.
    In particular, although the input laser operates over a narrow range of near-infrared wavelengths (from 780 nanometers to 790 nm), the microchip system generated visible-light colors ranging from green to red (560 nm to 760 nm) and infrared wavelengths ranging from 800 nm to 1,200 nm.
    “The benefit of our approach is that any one of these wavelengths can be accessed just by adjusting the dimensions of our microresonators,” said Srinivasan.
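    To see why the dimensions select the color, consider the basic ring-resonance condition: light resonates when a whole number of wavelengths fits around the ring. The toy model below ignores the dispersion engineering the actual devices rely on, and the effective index is an assumed value for illustration:

    ```python
    import math

    # Toy ring-resonator model: a wavelength lam resonates when an integer
    # number m of wavelengths fits in one optical round trip:
    #     m * lam = n_eff * 2 * pi * R
    # Changing the ring radius R shifts the comb of resonant wavelengths,
    # which is how different resonator dimensions select different colors.
    # n_eff = 2.0 is an assumed effective index, for illustration only.

    n_eff = 2.0
    for radius_um in [20.0, 20.1, 20.2]:              # slightly different rings
        circumference_nm = 2 * math.pi * radius_um * 1e3
        m = round(n_eff * circumference_nm / 780.0)   # mode number nearest 780 nm
        lam_res = n_eff * circumference_nm / m
        print(f"R = {radius_um} um -> resonance at {lam_res:.2f} nm (mode m = {m})")
    ```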
    “Though a first demonstration,” Lu said, “we are excited at the possibility of combining this nonlinear optics technique with well-established near-infrared laser technology to create new types of on-chip light sources that can be used in a variety of applications.”

  • Assessing state of the art in AI for brain disease treatment

    Artificial intelligence is lauded for its ability to solve problems humans cannot, thanks to novel computing architectures that process large amounts of complex data quickly. As a result, AI methods, such as machine learning, computer vision, and neural networks, are applied to some of the most difficult problems in science and society.
    One tough problem is the diagnosis, surgical treatment, and monitoring of brain diseases. The range of AI technologies available for dealing with brain disease is growing fast, and exciting new methods are being applied to brain problems as computer scientists gain a deeper understanding of the capabilities of advanced algorithms.
    In a paper published this week in APL Bioengineering, from AIP Publishing, Italian researchers conducted a systematic literature review to understand the state of the art in the use of AI for brain disease. Their search yielded 2,696 results, which they narrowed to the 154 most-cited papers for a closer look.
    Their qualitative review sheds light on the most interesting corners of AI development. For example, a generative adversarial network was used to synthetically create an aged brain in order to see how disease advances over time.
    “The use of artificial intelligence techniques is gradually bringing efficient theoretical solutions to a large number of real-world clinical problems related to the brain,” author Alice Segato said. “Especially in recent years, thanks to the accumulation of relevant data and the development of increasingly effective algorithms, it has been possible to significantly increase the understanding of complex brain mechanisms.”
    The authors’ analysis covers eight paradigms of brain care, examining AI methods used to process information about the brain’s structure and connectivity, assess surgical candidacy, identify problem areas, predict disease trajectory, and provide intraoperative assistance. Image data used to study brain disease, including 3D data such as magnetic resonance imaging, diffusion tensor imaging, positron emission tomography, and computed tomography imaging, can be analyzed using computer vision AI techniques.
    But the authors urge caution, noting the importance of “explainable algorithms” with paths to solutions that are clearly delineated, not a “black box” — the term for AI that reaches an accurate solution but relies on inner workings that are little understood or invisible.
    “If humans are to accept algorithmic prescriptions or diagnosis, they need to trust them,” Segato said. “Researchers’ efforts are leading to the creation of increasingly sophisticated and interpretable algorithms, which could favor a more intensive use of ‘intelligent’ technologies in practical clinical contexts.”

    Story Source:
    Materials provided by American Institute of Physics.

  • Temperature evolution of impurities in a quantum gas

    A new Monash-led theoretical study advances our understanding of the role of thermodynamics in the quantum impurity problem.
    Quantum impurity theory studies the behaviour of deliberately introduced atoms (i.e., ‘impurities’) that behave as particularly ‘clean’ quasiparticles within a background atomic gas, allowing a controllable ‘perfect test bed’ for studying quantum correlations.
    The study extends quantum impurity theory, which is of significant interest to the quantum-matter research community, into a new dimension — the thermal effect.
    “We have discovered a general relationship between two distinct experimental protocols, namely ejection and injection radio-frequency spectroscopy, where prior to our work no such relationship was known,” explains lead author Dr Weizhe Liu (Monash University School of Physics and Astronomy).
    QUANTUM IMPURITY THEORY
    Quantum impurity theory studies the effects of introducing atoms of one element (i.e., ‘impurities’) into an ultracold atomic gas of another element.

    For example, a small number of potassium atoms can be introduced into a ‘background’ quantum gas of lithium atoms.
    The introduced impurities (in this case, the potassium atoms) behave as a particularly ‘clean’ quasiparticle within the atomic gas.
    Interactions between the introduced impurity atoms and the background atomic gas can be ‘tuned’ via an external magnetic field, allowing investigation of quantum correlations.
    In recent years there has been an explosion of studies on the subject of quantum impurities immersed in different background mediums, thanks to their controllable realization in ultracold atomic gases.
    MODELLING ‘PUSH’ AND ‘PULL’ WITH RADIO-FREQUENCY PULSES
    “Our study is based on radio-frequency spectroscopy, modelling two different scenarios: ejection and injection,” says Dr Weizhe Liu, a FLEET Research Fellow working in the group of A/Prof Meera Parish and Dr Jesper Levinsen.

    The team modelled the effect of radio-frequency pulses that would force impurity atoms from one spin state into another, unoccupied spin state.
    Under the ‘ejection’ scenario, radio-frequency pulses act on impurities in a spin state that strongly interact with the background medium, ‘pushing’ those impurities into a non-interacting spin state.
    The inverse ‘injection’ scenario ‘pulls’ impurities from a non-interacting state into an interacting state.
    These two spectroscopies are commonly used separately to study distinct aspects of the quantum impurity problem. The new Monash study shows, however, that the ejection and injection protocols in fact probe the same information.
    “We found that the two scenarios — ejection and injection — are related to each other by an exponential function of the free-energy difference between the interacting and noninteracting impurity states,” says Dr Liu.
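    Schematically, with both spectra written as functions of a common detuning, a relation of this form can be sketched as below. This is a hedged illustration of the structure described in the quote, not the paper’s exact expression; the precise signs depend on how each detuning is defined.

    ```latex
    % Schematic ejection-injection relation: the two RF spectra differ only by
    % a Boltzmann-like factor involving the impurity free-energy difference.
    % I_ej, I_inj : ejection and injection spectra at detuning \omega
    % \Delta F    : free-energy difference, F_interacting - F_noninteracting
    % \beta       : inverse temperature, 1/(k_B T)
    % Sign conventions depend on how each detuning is defined.
    I_{\mathrm{ej}}(\omega) \;=\; e^{-\beta(\omega - \Delta F)}\, I_{\mathrm{inj}}(\omega)
    ```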

  • Bringing a power tool from math into quantum computing

    The Fourier transform is an important mathematical tool that decomposes a function or dataset into its constituent frequencies, much like one could decompose a musical chord into a combination of its notes. It is used across all fields of engineering in some form or another and, accordingly, algorithms to compute it efficiently have been developed — at least for conventional computers. But what about quantum computers?
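    As a simple illustration of the chord analogy, the conventional discrete Fourier transform (here via NumPy’s FFT) recovers the notes present in a sampled waveform. This is an illustrative sketch, not code from the study:

    ```python
    import numpy as np

    # Build a "chord": a 1-second signal containing three pure tones (an
    # approximate A-major triad: A4, C#5, E5).
    rate = 4096                       # samples per second (1 Hz resolution)
    t = np.arange(rate) / rate        # time axis, 1 second
    notes_hz = [440.0, 554.0, 659.0]
    signal = sum(np.sin(2 * np.pi * f * t) for f in notes_hz)

    # The FFT decomposes the signal into its constituent frequencies.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(rate, d=1.0 / rate)

    # The three strongest peaks recover the three notes of the chord.
    peaks = freqs[np.argsort(spectrum)[-3:]]
    print(sorted(peaks))  # -> [440.0, 554.0, 659.0]
    ```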
    Though quantum computing remains an enormous technical and intellectual challenge, it has the potential to speed up many programs and algorithms immensely provided that appropriate quantum circuits are designed. In particular, the Fourier transform already has a quantum version called the quantum Fourier transform (QFT), but its applicability is quite limited because its results cannot be used in subsequent quantum arithmetic operations.
    To address this issue, in a recent study published in Quantum Information Processing, scientists from Tokyo University of Science developed a new quantum circuit that executes the “quantum fast Fourier transform (QFFT)” and fully benefits from the peculiarities of the quantum world. The idea for the study came to Mr. Ryo Asaka, a first-year Master’s student and one of the scientists on the study, when he first learned about the QFT and its limitations. He thought it would be useful to create a better alternative based on a variant of the standard Fourier transform called the “fast Fourier transform (FFT),” an indispensable algorithm in conventional computing that greatly speeds things up if the input data meets some basic conditions.
    To design the quantum circuit for the QFFT, the scientists had to first devise quantum arithmetic circuits to perform the basic operations of the FFT, such as addition, subtraction, and digit shifting. A notable advantage of their algorithm is that no “garbage bits” are generated; the calculation process does not waste any qubits, the basic unit of quantum information. Considering that increasing the number of qubits of quantum computers has been an uphill battle over the last few years, the fact that this novel quantum circuit for the QFFT can use qubits efficiently is very promising.
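    For context, the classical FFT that the QFFT adapts is built from exactly these kinds of elementary operations, and a quantum version must implement each one reversibly so that no scratch (“garbage”) qubits are left behind. Below is a sketch of the classical algorithm, not the quantum circuit from the study:

    ```python
    import cmath

    def fft(x):
        """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])   # transform of even-indexed samples
        odd = fft(x[1::2])    # transform of odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t            # butterfly: addition
            out[k + n // 2] = even[k] - t   # butterfly: subtraction
        return out

    # Example: fft([1, 0, 0, 0]) -> [1, 1, 1, 1] (an impulse has a flat spectrum).
    ```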
    Another merit of their quantum circuit over the traditional QFT is that their implementation exploits a unique property of the quantum world to greatly increase computational speed. Associate Professor Kazumitsu Sakai, who led the study, explains: “In quantum computing, we can process a large amount of information at the same time by taking advantage of a phenomenon known as ‘superposition of states.’ This allows us to convert a lot of data, such as multiple images and sounds, into the frequency domain in one go.” Processing speed is regularly cited as the main advantage of quantum computing, and this novel QFFT circuit represents a step in the right direction.
    Moreover, the QFFT circuit is much more versatile than the QFT, as Assistant Professor Ryoko Yahagi, who also participated in the study, remarks: “One of the main advantages of the QFFT is that it is applicable to any problem that can be solved by the conventional FFT, such as the filtering of digital images in the medical field or analyzing sounds for engineering applications.” With quantum computers (hopefully) right around the corner, the outcomes of this study will make it easier to adopt quantum algorithms to solve the many engineering problems that rely on the FFT.

    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length. More

  • Scientists voice concerns, call for transparency and reproducibility in AI research

    International scientists are challenging their colleagues to make Artificial Intelligence (AI) research more transparent and reproducible to accelerate the impact of their findings for cancer patients.
    In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard School of Public Health, Massachusetts Institute of Technology, and others, challenge scientific journals to hold computational researchers to higher standards of transparency, and call for their colleagues to share their code, models and computational environments in publications.
    “Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from,” says Dr. Benjamin Haibe-Kains, Senior Scientist at Princess Margaret Cancer Centre and first author of the article. “But in computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress.”
    The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed an AI system could outperform human radiologists in both robustness and speed for breast cancer screening. The study made waves in the scientific community and created a buzz with the public, with headlines appearing in BBC News, CBC, and CNBC.
    A closer examination raised some concerns: the study lacked a sufficient description of the methods used, including their code and models. The lack of transparency prohibited researchers from learning exactly how the model works and how they could apply it to their own institutions.
    “On paper and in theory, the McKinney et al. study is beautiful,” says Dr. Haibe-Kains, “But if we can’t learn from it then it has little to no scientific value.”
    According to Dr. Haibe-Kains, who is jointly appointed as Associate Professor in Medical Biophysics at the University of Toronto and affiliate at the Vector Institute for Artificial Intelligence, this is just one example of a problematic pattern in computational research.
    “Researchers are more incentivized to publish their finding rather than spend time and resources ensuring their study can be replicated,” explains Dr. Haibe-Kains. “Journals are vulnerable to the ‘hype’ of AI and may lower the standards for accepting papers that don’t include all the materials required to make the study reproducible — often in contradiction to their own guidelines.”
    This can actually slow down the translation of AI models into clinical settings. Researchers are not able to learn how the model works and replicate it in a thoughtful way. In some cases, it could lead to unwarranted clinical trials, because a model that works on one group of patients or in one institution, may not be appropriate for another.
    In the article, titled “Transparency and reproducibility in artificial intelligence,” the authors offer numerous frameworks and platforms that allow safe and effective sharing, upholding the three pillars of open science that make AI research more transparent and reproducible: sharing data, sharing computer code and sharing predictive models.
    “We have high hopes for the utility of AI for our cancer patients,” says Dr. Haibe-Kains. “Sharing and building upon our discoveries — that’s real scientific impact.”

    Story Source:
    Materials provided by University Health Network.

  • To make mini-organs grow faster, give them a squeeze

    The closer people are physically to one another, the higher the chance for the exchange of things like ideas, information, and even infection. Now researchers at MIT and Boston Children’s Hospital have found that, even in the microscopic environment within a single cell, physical crowding increases the chance for interactions, in a way that can significantly alter a cell’s health and development.
    In a paper published today in the journal Cell Stem Cell, the researchers have shown that physically squeezing cells, and crowding their contents, can trigger cells to grow and divide faster than they normally would.
    While squeezing something to make it grow may sound counterintuitive, the team has an explanation: Squeezing acts to wring water out of a cell. With less water to swim in, proteins and other cell constituents are packed closer together. And when certain proteins are brought in close proximity, they can trigger cell signaling and activate genes within the cell.
    In their new study, the scientists found that squeezing intestinal cells triggered proteins to cluster along a specific signaling pathway, which can help cells maintain their stem-cell state, an undifferentiated state in which they can quickly grow and divide into more specialized cells. Ming Guo, associate professor of mechanical engineering at MIT, says that if cells can simply be squeezed to promote their “stemness,” they can then be directed to quickly build up miniature organs, such as artificial intestines or colons, which could then be used as platforms to understand organ function and test drug candidates for various diseases, and even as transplants for regenerative medicine.
    Guo’s co-authors are lead author Yiwei Li, Jiliang Hu, and Qirong Lin from MIT, and Maorong Chen, Ren Sheng, and Xi He of Boston Children’s Hospital.
    Packed in
    To study squeezing’s effect on cells, the researchers mixed various cell types in solutions that solidified as rubbery slabs of hydrogel. To squeeze the cells, they placed weights on the hydrogel’s surface, in the form of either a quarter or a dime.

    “We wanted to achieve a significant amount of cell size change, and those two weights can compress the cells by something like 10 to 30 percent of their total volume,” Guo explains.
    The team used a confocal microscope to measure in 3D how individual cells’ shapes changed as each sample was compressed. As they expected, the cells shrank with pressure. But did squeezing also affect the cell’s contents? To answer this, the researchers first looked to see whether a cell’s water content changed. If squeezing acts to wring water out of a cell, the researchers reasoned that the cells should be less hydrated, and stiffer as a result.
    They measured the stiffness of cells before and after weights were applied, using optical tweezers, a laser-based technique that Guo’s lab has employed for years to study interactions within cells, and found that indeed, cells stiffened with pressure. They also saw that there was less movement within cells that were squeezed, suggesting that their contents were more packed than usual.
    Next, they looked at whether there were changes in the interactions between certain proteins in the cells, in response to cells being squeezed. They focused on several proteins that are known to trigger Wnt/β-catenin signaling, which is involved in cell growth and maintenance of “stemness.”
    “In general, this pathway is known to make a cell more like a stem cell,” Guo says. “If you change this pathway’s activity, how cancer progresses and how embryos develop have been shown to be very different. So we thought we could use this pathway to demonstrate how cell crowding is important.”
    A “refreshing” path

    To see whether cell squeezing affects the Wnt pathway, and how fast a cell grows, the researchers grew small organoids: miniature organs, in this case clusters of cells collected from the intestines of mice.
    “The Wnt pathway is particularly important in the colon,” Guo says, pointing out that the cells that line the human intestine are constantly being replenished. The Wnt pathway, he says, is essential for maintaining intestinal stem cells, generating new cells, and “refreshing” the intestinal lining.
    He and his colleagues grew intestinal organoids, each measuring about half a millimeter, in several Petri dishes, then “squeezed” the organoids by infusing the dishes with polymers. This influx of polymers increased the osmotic pressure surrounding each organoid and forced water out of their cells. The team observed that as a result, specific proteins involved in activating the Wnt pathway were packed closer together, and were more likely to cluster to turn on the pathway and its growth-regulating genes.
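    For a rough sense of scale, the osmotic pressure of an ideal dilute solute follows the van ’t Hoff relation Π = cRT. Real polymer solutions deviate from this, and the concentration below is an assumption for illustration, not a value from the study:

    ```python
    # Van 't Hoff estimate of osmotic pressure from a dilute solute: Pi = c*R*T.
    # Idealized order-of-magnitude sketch; real polymer solutions deviate from
    # this, and the concentration here is an illustrative assumption.

    R = 8.314       # gas constant, J/(mol*K)
    T = 310.0       # roughly physiological temperature, K
    c_mM = 10.0     # assumed added-solute concentration, millimolar
    c = c_mM        # 1 mM = 1 mol/m^3

    pressure_kpa = c * R * T / 1e3
    print(f"{c_mM:.0f} mM of ideal solute -> ~{pressure_kpa:.0f} kPa of osmotic pressure")
    ```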
    The upshot: Those organoids that were squeezed actually grew larger and more quickly, with more stem cells on their surface than those that were not squeezed.
    “The difference was very obvious,” Guo says. “Whenever you apply pressure, the organoids grow even bigger, with a lot more stem cells.”
    He says the results demonstrate how squeezing can affect an organoid’s growth. The findings also show that a cell’s behavior can change depending on the amount of water that it contains.
    “This is very general and broad, and the potential impact is profound, that cells can simply tune how much water they have to tune their biological consequences,” Guo says.
    Going forward, he and his colleagues plan to explore cell squeezing as a way to speed up the growth of artificial organs that scientists may use to test new, personalized drugs.
    “I could take my own cells and transfect them to make stem cells that can then be developed into a lung or intestinal organoid that would mimic my own organs,” Guo says. “I could then apply different pressures to make organoids of different size, then try different drugs. I imagine there would be a lot of possibilities.”
    This research is supported, in part, by the National Cancer Institute and the Alfred P. Sloan Foundation.

  • Researchers are working on tech so machines can thermally 'breathe'

    In the era of electric cars, machine learning and ultra-efficient vehicles for space travel, computers and hardware are operating faster and more efficiently. But this increase in power comes with a trade-off: They get superhot.
    To counter this, University of Central Florida researchers are developing a way for large machines to “breathe” in and out cooling blasts of water to keep their systems from overheating.
    The findings are detailed in a recent study in the journal Physical Review Fluids.
    The process is much like how humans and some animals breathe in air to cool their bodies down, except in this case, the machines would be breathing in cool blasts of water, says Khan Rabbi, a doctoral candidate in UCF’s Department of Mechanical and Aerospace Engineering and lead author of the study.
    “Our technique used a pulsed water-jet to cool a hot titanium surface,” Rabbi says. “The more water we pumped out of the spray jet nozzles, the greater the amount of heat that transferred between the solid titanium surface and the water droplets, thus cooling the titanium. Fundamentally, an idea of optimum jet-pulsation needs to be generated to ensure maximum heat transfer performance.”
    “It is essentially like exhaling the heat from the surface,” he says.
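    As generic background (not the model from the paper), the time-averaged heat removal of a pulsed jet can be framed with Newton’s law of cooling scaled by the pulse duty cycle. All numbers below are illustrative assumptions, and the study’s point is precisely that an optimum pulsation beats this naive baseline:

    ```python
    # Naive duty-cycle framing of pulsed-jet cooling via Newton's law of cooling:
    #     q = h * A * (T_surface - T_coolant), averaged over the pulse period.
    # All values are illustrative assumptions, not parameters from the study,
    # whose key point is that an optimum pulsation improves on this baseline.

    h = 20_000.0      # assumed jet-impingement heat transfer coefficient, W/(m^2*K)
    area = 1e-4       # assumed wetted area, m^2 (1 cm^2)
    t_surface = 90.0  # surface temperature, deg C (194 F, a threshold the article cites)
    t_coolant = 25.0  # assumed water temperature, deg C
    duty = 0.5        # assumed fraction of each period the jet is on

    q_on = h * area * (t_surface - t_coolant)  # heat removal while jet is on, W
    print(f"jet on: {q_on:.0f} W; time-averaged at {duty:.0%} duty: {duty * q_on:.0f} W")
    ```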

    The water is emitted from small water-jet nozzles, about 10 times the thickness of a human hair, that douse the hot surface of a large electronic system. The water is then collected in a storage chamber, where it can be pumped out and circulated again to repeat the cooling process. The storage chamber in their study held about 10 ounces of water.
    Using high-speed, infrared thermal imaging, the researchers were able to find the optimum amount of water for maximum cooling performance.
    Rabbi says everyday applications for the system could include cooling large electronics, space vehicles, batteries in electric vehicles and gas turbines.
    Shawn Putnam, an associate professor in UCF’s Department of Mechanical and Aerospace Engineering and study co-author, says that this research is part of an effort to explore different techniques to efficiently cool hot devices and surfaces.
    “Most likely, the most versatile and efficient cooling technology will take advantage of several different cooling mechanisms, where pulsed jet cooling is expected to be one of these key contributors,” Putnam says.
    Rabbi says there are multiple ways to cool hot hardware, but water-jet cooling is a preferred method because it can be adjusted to different directions, has good heat-transfer ability, and uses minimal amounts of water or liquid coolant.
    However, it has its drawbacks, namely over- or under-watering, which results in flooding or dry hotspots. The UCF method overcomes this problem by offering a system that is tunable to hardware needs, so that water is applied only in the amount needed and in the right spot.
    The technology is needed because once device temperatures surpass a threshold value, for example, 194 degrees Fahrenheit (90 degrees Celsius), the device’s performance decreases, Rabbi says.
    “For this reason, we need better cooling technologies in place to keep the device temperature well within the maximum temperature for optimum operation,” he says. “We believe this study will provide engineers, scientists and researchers a unique understanding to develop future generation liquid cooling systems.”

    Story Source:
    Materials provided by University of Central Florida. Original written by Robert H. Wells.

  • Engineers create helical topological exciton-polaritons

    Advancing our understanding of quantum physics has involved the creation of a wide range of “quasiparticles.” These notional constructs describe emergent phenomena that appear to have the properties of multiple other particles mixed together.
    An exciton, for example, is a quasiparticle that acts like an electron bound to an electron hole, or the empty space in a semiconducting material where an electron could be. A step further, an exciton-polariton combines the properties of an exciton with that of a photon, making it behave like a combination of matter and light. Achieving and actively controlling the right mixture of these properties — such as their mass, speed, direction of motion, and capability to strongly interact with one another — is the key to applying quantum phenomena to technology, like computers.
    Now, researchers at the University of Pennsylvania’s School of Engineering and Applied Science are the first to create an even more exotic form of the exciton-polariton, one which has a defined quantum spin that is locked to its direction of motion. Depending on the direction of their spin, these helical topological exciton-polaritons move in opposite directions along the surface of an equally specialized type of topological insulator.
    In a study published in the journal Science, they have demonstrated this phenomenon at temperatures much warmer than the near-absolute-zero usually required to maintain this sort of quantum phenomenon. The ability to route these quasiparticles based on their spin in more user-friendly conditions, and an environment where they do not back-scatter, opens up the possibility of using them to transmit information or perform computations at unprecedented speeds.
    The study was led by Ritesh Agarwal, professor in the Department of Materials Science and Engineering, and Wenjing Liu, a postdoctoral researcher in his lab. They collaborated with researchers from Hunan University and George Washington University.
    The study also demonstrates a new type of topological insulator, a class of material developed at Penn by Charles Kane and Eugene Mele that has a conductive surface and an insulating core. Topological insulators are prized for their ability to propagate electrons at their surface without scattering them, and the same idea can be extended to quasiparticles such as photons or polaritons.

    “Replacing electrons with photons would make for even faster computers and other technologies, but photons are very hard to modulate, route or switch. They cannot be transported around sharp turns and leak out of the waveguide,” Agarwal says. “This is where topological exciton-polaritons can be useful, but that means we need to make new types of topological insulators that can work with polaritons. If we could make this type of quantum material, we could route exciton-polaritons along certain channels without any scattering, as well as modulate or switch them via externally applied electric fields or by slight changes in temperature.”
    Agarwal’s group has created several types of photonic topological insulators in the past. While the first “chiral” polariton topological insulator was reported by a group in Europe, it worked at extremely low temperatures while requiring strong magnetic fields. The missing piece, and the distinction between “chiral” and “helical” in this case, was the ability to control the direction of flow via the quasiparticles’ spin.
    “To create this phase, we used an atomically thin semiconductor, tungsten disulfide, which forms very tightly bound excitons, and coupled it strongly to a properly designed photonic crystal via symmetry engineering. This induced nontrivial topology to the resulting polaritons,” Agarwal says. “At the interface between photonic crystals with different topology, we demonstrated the generation of helical topological polaritons that did not scatter at sharp corners or defects, as well as spin-dependent transport.”
    Agarwal and his colleagues conducted the study at 200K, or roughly -100F, without the need for applying any magnetic fields. While that seems cold, it is considerably warmer — and easier to achieve — than similar systems that operate at 4K, or roughly -450F.
    They are confident that further research and improved fabrication techniques for their semiconductor material will easily allow their design to operate at room temperature.
    “From an academic point of view, 200K is already almost room temperature, so small advances in material purity could easily push it to working in ambient conditions,” says Agarwal. “Atomically thin, ‘2D’ materials form very strong excitons that survive room temperature and beyond, so we think we need only small modifications to how our materials are assembled.”
    Agarwal’s group is now working on studying how topological polaritons interact with one another, which would bring them a step closer to using them in practical photonic devices.