More stories

  • 'Egg carton' quantum dot array could lead to ultralow power devices

    A new path toward sending and receiving information with single photons of light has been discovered by an international team of researchers led by the University of Michigan.
    Their experiment demonstrated the possibility of using an effect known as nonlinearity to modify and detect extremely weak light signals, taking advantage of distinct changes to a quantum system to advance next generation computing.
    Today, as silicon-electronics-based information technology becomes increasingly throttled by heating and energy consumption, nonlinear optics is under intense investigation as a potential solution. The quantum egg carton captures and releases photons, supporting “excited” quantum states while it possesses the extra energy. As the energy in the system rises, it takes a bigger jump in energy to get to that next excited state — that’s the nonlinearity.
    “Researchers have wondered whether detectable nonlinear effects can be sustained at extremely low power levels — down to individual photons. This would bring us to the fundamental lower limit of power consumption in information processing,” said Hui Deng, professor of physics and senior author of the paper in Nature.
    “We demonstrated a new type of hybrid state to bring us to that regime, linking light and matter through an array of quantum dots,” she added.
    The physicists and engineers used a new kind of semiconductor to create quantum dots arranged like an egg carton. Quantum dots are essentially tiny structures that can isolate and confine individual quantum particles, such as electrons and other, stranger things. These dots are the pockets in the egg carton. In this case, they confine excitons, quasi-particles made up of an electron and a “hole.” A hole appears when an electron in a semiconductor is kicked into a higher energy band, leaving a positive charge behind in its usual spot. If the hole shadows the electron in its parallel energy band, the two are considered a single entity: an exciton.
    In conventional devices — with little to no nonlinearity — the excitons roam freely and scarcely meet with each other. These materials can contain many identical excitons at the same time without researchers noticing any change to the material properties.
    However, if the exciton is confined to a quantum dot, it becomes impossible to put in a second identical exciton in the same pocket. You’ll need an exciton with a higher energy if you want to get another one in there, which means you’ll need a higher energy photon to make it. This is known as quantum blockade, and it’s the cause of the nonlinearity.
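    In textbook terms (a standard anharmonic-ladder picture, not notation drawn from the paper itself), if each exciton costs an energy E_X and two excitons in the same pocket repel with an interaction energy U, the total energy of n excitons in one dot is

        E_n = n E_X + \frac{n(n-1)}{2} U,

    so adding the (n+1)-th exciton requires a photon of energy

        E_{n+1} - E_n = E_X + n U.

    The first photon gets in at E_X, but the second needs E_X + U: that extra step of U is the quantum blockade, and it is what makes the response nonlinear down to single photons.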
    But typical quantum dots are only a few atoms across — they aren’t on a usable scale. As a solution, Deng’s team created an array of quantum dots that contribute to the nonlinearity all at once.
    The team produced this egg carton energy landscape with two flakes of semiconductor, which are considered two-dimensional materials because they are made of a single molecular layer, just a few atoms thick. 2D semiconductors have quantum properties that are very different from those of bulk crystals. One flake was tungsten disulfide and the other was molybdenum diselenide. Laid at an angle of about 56.5 degrees between their atomic lattices, the two intertwined electronic structures created a larger electronic lattice, with pockets about 10 atoms across.
    In order for the array of quantum dots inside the 2D semiconductor to be controlled as a group with light, the team built a resonator by making one mirror at the bottom, laying the semiconductor on top of it, and then depositing a second mirror on top of the semiconductor.
    “You need to control the thickness very tightly so that the semiconductor is at the maximum of the optical field,” said Long Zhang, a postdoctoral research fellow in the Deng lab and first author on the paper.
    With the quantum egg carton embedded in the mirrored “cavity” that enabled red laser light to resonate, the team observed the formation of another quantum state, called a polariton. Polaritons are a hybrid of the excitons and the light in the cavity. This confirmed all the quantum dots interact with light in concert. In this system, Deng’s team showed that putting a few excitons into the carton led to a measurable change of the polariton’s energy — demonstrating nonlinearity and showing that quantum blockade was occurring.
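    As a rough guide (the standard two-coupled-oscillator model, not the paper's own analysis), a polariton's energy comes from hybridizing the cavity photon at energy E_C with the exciton at energy E_X through a coupling strength g:

        E_\pm = \frac{E_C + E_X}{2} \pm \sqrt{g^2 + \left(\frac{E_C - E_X}{2}\right)^2}.

    With N quantum dots acting in concert, the effective coupling grows roughly as g\sqrt{N}, and any blockade-induced change in the exciton ladder shifts E_\pm by a measurable amount — the energy change the team detected.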
    “Engineers can use that nonlinearity to discern energy deposited into the system, potentially down to that of a single photon, which makes the system promising as an ultra-low energy switch,” Deng said.
    Switches are among the devices needed to achieve ultralow power computing, and they can be built into more complex gates.
    “Professor Deng’s research describes how polariton nonlinearities can be tailored to consume less energy,” said Michael Gerhold, program manager at the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “Control of polaritons is aimed at future integrated photonics used for ultra-low energy computing and information processing that could be used for neuromorphic processing for vision systems, natural language processing or autonomous robots.”
    The quantum blockade also means a similar system could possibly be used for qubits, the building blocks for quantum information processing. One forward path is figuring out how to address each quantum dot in the array as an individual qubit. Another way would be to achieve polariton blockade, similar to the exciton blockade seen here. In this version, the array of excitons, resonating in time with the light wave, would be the qubit.
    Used in these ways, the new 2D semiconductors have the potential to bring quantum devices up to room temperature, rather than requiring the extreme cold of liquid nitrogen or liquid helium.
    “We are coming to the end of Moore’s Law,” said Steve Forrest, the Peter A. Franken Distinguished University Professor of Electrical Engineering and co-author of the paper, referring to the trend of the density of transistors on a chip doubling every two years. “Two dimensional materials have many exciting electronic and optical properties that may, in fact, lead us to that land beyond silicon.”

  • The (robotic) doctor will see you now

    In the era of social distancing, using robots for some health care interactions is a promising way to reduce in-person contact between health care workers and sick patients. However, a key question that needs to be answered is how patients will react to a robot entering the exam room.
    Researchers from MIT and Brigham and Women’s Hospital recently set out to answer that question. In a study performed in the emergency department at Brigham and Women’s, the team found that a large majority of patients reported that interacting with a health care provider via a video screen mounted on a robot was similar to an in-person interaction with a health care worker.
    “We’re actively working on robots that can help provide care to maximize the safety of both the patient and the health care workforce. The results of this study give us some confidence that people are ready and willing to engage with us on those fronts,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
    In a larger online survey conducted nationwide, the researchers also found that a majority of respondents were open to having robots not only assist with patient triage but also perform minor procedures such as taking a nose swab.
    Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital and a research affiliate in Traverso’s lab, is the lead author of the study, which appears today in JAMA Network Open.
    Triage by robot
    After the Covid-19 pandemic began early last year, Traverso and his colleagues turned their attention toward new strategies to minimize interactions between potentially sick patients and health care workers. To that end, they worked with Boston Dynamics to create a mobile robot that could interact with patients as they waited in the emergency department. The robots were equipped with sensors that allow them to measure vital signs, including skin temperature, breathing rate, pulse rate, and blood oxygen saturation. The robots also carried an iPad that allowed for remote video communication with a health care provider.
    This kind of robot could reduce health care workers’ risk of exposure to Covid-19 and help to conserve the personal protective equipment that is needed for each interaction. However, the question still remained whether patients would be receptive to this type of interaction.
    “Often as engineers, we think about different solutions, but sometimes they may not be adopted because people are not fully accepting of them,” Traverso says. “So, in this study we were trying to tease that out and understand if the population is receptive to a solution like this one.”
    The researchers first conducted a nationwide survey of about 1,000 people, working with a market research company called YouGov. They asked questions regarding the acceptability of robots in health care, including whether people would be comfortable with robots performing not only triage but also other tasks such as performing nasal swabs, inserting a catheter, or turning a patient over in bed. On average, the respondents stated that they were open to these types of interactions.
    The researchers then tested one of their robots in the emergency department at Brigham and Women’s Hospital last spring, when Covid-19 cases were surging in Massachusetts. Fifty-one patients were approached in the waiting room or a triage tent and asked if they would be willing to participate in the study, and 41 agreed. These patients were interviewed about their symptoms via video connection, using an iPad carried by a quadruped, dog-like robot developed by Boston Dynamics. More than 90 percent of the participants reported that they were satisfied with the robotic system.
    “For the purposes of gathering quick triage information, the patients found the experience to be similar to what they would have experienced talking to a person,” Chai says.
    Robotic assistants
    The numbers from the study suggest that it could be worthwhile to try to develop robots that can perform procedures that currently require a lot of human effort, such as turning a patient over in bed, the researchers say. Turning Covid-19 patients onto their stomachs, also known as “proning,” has been shown to boost their blood oxygen levels and make breathing easier. Currently, the process requires several people to perform it. Administering Covid-19 tests is another task that requires a lot of time and effort from health care workers, who could be deployed for other tasks if robots could help perform swabs.
    “Surprisingly, people were pretty accepting of the idea of having a robot do a nasal swab, which suggests that potential engineering efforts could go into thinking about building some of these systems,” Chai says.
    The MIT team is continuing to develop sensors that can obtain vital sign data from patients remotely, and they are working on integrating these systems into smaller robots that could operate in a variety of environments, such as field hospitals or ambulances.
    Other authors of the paper include Farah Dadabhoy, Hen-wei Huang, Jacqueline Chu, Annie Feng, Hien Le, Joy Collins, Marco da Silva, Marc Raibert, Chin Hur, and Edward Boyer. The research was funded by the National Institutes of Health, the Hans and Mavis Lopater Psychosocial Foundation, E Ink Corporation, the Karl Van Tassel (1925) Career Development Professorship, MIT’s Department of Mechanical Engineering, and the Brigham and Women’s Hospital Division of Gastroenterology.

  • Cutting off stealthy interlopers: a framework for secure cyber-physical systems

    In 2015, hackers infiltrated the corporate network of Ukraine’s power grid and injected malicious software, which caused a massive power outage. Such cyberattacks, along with the dangers to society that they represent, could become more common as the number of cyber-physical systems (CPS) increases.
    A CPS is any system controlled by a network involving physical elements that tangibly interact with the material world. CPSs are incredibly common in industry, especially where robotics or similar automated machinery is integrated into the production line. However, as CPSs make their way into societal infrastructures such as public transport and energy management, it becomes even more important to be able to efficiently fend off various types of cyberattacks.
    In a recent study published in IEEE Transactions on Industrial Informatics, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, have developed a framework for CPSs that is resilient against a sophisticated kind of cyberattack: the pole-dynamics attack (PDA). In a PDA, the hacker connects to a node in the network of the CPS and injects false sensor data. Without proper readings from the sensors of the physical elements of the system, the control signals sent by the control algorithm to the physical actuators are incorrect, causing them to malfunction and behave in unexpected, potentially dangerous ways.
    To address PDAs, the researchers adopted a technique known as software-defined networking (SDN), whereby the network of the CPS is made more dynamic by distributing the relaying of signals through controllable SDN switches. In addition, the proposed approach relies on a novel attack-detection algorithm embedded in the SDN switches, which can raise an alarm to the centralized network manager if false sensor data are being injected.
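    The overall detect-and-recover loop can be pictured in a few lines of code. The sketch below is a minimal Python illustration, not the DGIST implementation: the detection rule, threshold, node names, and topology are all assumptions, and the real framework runs inside SDN switches and a network controller.

        import heapq

        class NetworkManager:
            """Centralized manager: prunes compromised nodes and re-routes sensor traffic."""

            def __init__(self, links):
                # links: {node: {neighbor: link_cost}}, directed for simplicity
                self.links = {n: dict(nbrs) for n, nbrs in links.items()}

            def prune(self, node):
                # Cut the attacker off: drop the node and every link touching it.
                self.links.pop(node, None)
                for nbrs in self.links.values():
                    nbrs.pop(node, None)

            def safe_path(self, src, dst):
                # Dijkstra over the remaining topology yields the new safe route.
                dist, prev, heap = {src: 0}, {}, [(0, src)]
                while heap:
                    d, u = heapq.heappop(heap)
                    if u == dst:
                        path = [dst]
                        while path[-1] != src:
                            path.append(prev[path[-1]])
                        return path[::-1]
                    if d > dist.get(u, float("inf")):
                        continue
                    for v, w in self.links.get(u, {}).items():
                        if d + w < dist.get(v, float("inf")):
                            dist[v], prev[v] = d + w, u
                            heapq.heappush(heap, (d + w, v))
                return None

        def false_data_alarm(measured, predicted, threshold=0.5):
            # Switch-embedded detector: flag sensor readings that stray too far
            # from the model-predicted plant output.
            return abs(measured - predicted) > threshold

        manager = NetworkManager({
            "sensor": {"sw1": 1, "sw2": 2},
            "sw1": {"controller": 1},
            "sw2": {"controller": 1},
            "controller": {},
        })

        if false_data_alarm(measured=7.3, predicted=1.1):     # alarm implicating sw1
            manager.prune("sw1")
            print(manager.safe_path("sensor", "controller"))  # ['sensor', 'sw2', 'controller']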
    Once the network manager is notified, it not only cuts the cyberattacker off by pruning the compromised nodes but also establishes a new safe path for the sensor data. “Existing studies have only focused on attack detection, but they fail to consider the implications of detection and recovery in real time,” explains Professor Kyung-Joon Park, who led the study. “In our study, we simultaneously considered these factors to understand their effects on real-time performance and guarantee stable CPS operation.”
    The new framework was validated experimentally in a dedicated testbed, showing promising results. Excited about the outcomes of the study, Park remarks, “Considering CPSs are a key technology of smart cities and unmanned transport systems, we expect our research will be crucial to provide reliability and resiliency to CPSs in various application domains.” Having a system that is robust against cyberattacks means that economic losses and personal injuries can be minimized. Therefore, this study paves the way to a more secure future for both CPSs and ourselves.

    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology).

  • Advance in 'optical tweezers' to boost biomedical research

    Much like the Jedi in Star Wars use ‘the force’ to control objects from a distance, scientists can use light, or ‘optical force,’ to move very small particles.
    The inventors of this ground-breaking laser technology, known as ‘optical tweezers’, were awarded the 2018 Nobel Prize in physics.
    Optical tweezers are used in biology, medicine and materials science to assemble and manipulate nanoparticles, such as those made of gold. However, the technology relies on a difference in the refractive properties of the trapped particle and the surrounding environment.
    Now scientists have discovered a new technique that allows them to manipulate particles that have the same refractive properties as the background environment, overcoming a fundamental technical challenge.
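    A textbook Rayleigh-regime estimate (standard optics background, not taken from the Nature Nanotechnology paper) shows why matched refractive indices defeat a conventional trap. For a small particle of radius a and relative index m = n_p / n_m, the polarizability and gradient force scale as

        \alpha \propto a^3 \, \frac{m^2 - 1}{m^2 + 2}, \qquad \mathbf{F}_{\mathrm{grad}} \propto \alpha \, \nabla I,

    so when the particle's index matches its surroundings (m \to 1), the Clausius-Mossotti factor, and with it the trapping force, vanishes.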
    The study ‘Optical tweezers beyond refractive index mismatch using highly doped upconversion nanoparticles’ has just been published in Nature Nanotechnology.
    “This breakthrough has huge potential, particularly in fields such as medicine,” says leading co-author Dr Fan Wang from the University of Technology Sydney (UTS).
    “The ability to push, pull and measure the forces of microscopic objects inside cells, such as strands of DNA or intracellular enzymes, could lead to advances in understanding and treating many different diseases such as diabetes or cancer.
    “Traditional mechanical micro-probes used to manipulate cells are invasive, and the positioning resolution is low. They can only measure things like the stiffness of a cell membrane, not the force of molecular motor proteins inside a cell,” he says.
    The research team developed a unique method to control the refractive properties and luminescence of nanoparticles by doping nanocrystals with rare-earth metal ions.
    Having overcome this first fundamental challenge, the team then optimised the doping concentration of ions to trap nanoparticles at a much lower power level, with a 30-fold increase in efficiency.
    “Traditionally, you need hundreds of milliwatts of laser power to trap a 20 nanometre gold particle. With our new technology, we can trap a 20 nanometre particle using tens of milliwatts of power,” says Xuchen Shan, first co-author and UTS PhD candidate in the UTS School of Electrical and Data Engineering.
    “Our optical tweezers also achieved a record high degree of sensitivity or ‘stiffness’ for nanoparticles in a water solution. Remarkably, the heat generated by this method was negligible compared with older methods, so our optical tweezers offer a number of advantages,” he says.
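    For context, “stiffness” here is the spring constant \kappa of the harmonic trap, F = -\kappa x. A standard way to measure it (general practice in the field, not a detail from this paper) is the equipartition method: the trapped particle's thermal position fluctuations satisfy

        \tfrac{1}{2} \kappa \langle x^2 \rangle = \tfrac{1}{2} k_B T \quad \Rightarrow \quad \kappa = \frac{k_B T}{\langle x^2 \rangle},

    so a stiffer trap shows smaller position jitter at a given temperature.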
    Fellow leading co-author Dr Peter Reece, from the University of New South Wales, says this proof-of-concept research is a significant advancement in a field that is becoming increasingly sophisticated for biological researchers.
    “The prospect of developing a highly-efficient nanoscale force probe is very exciting. The hope is that the force probe can be labelled to target intracellular structures and organelles, enabling the optical manipulation of these intracellular structures,” he says.
    Distinguished Professor Dayong Jin, Director of the UTS Institute for Biomedical Materials and Devices (IBMD) and a leading co-author, says this work opens up new opportunities for super resolution functional imaging of intracellular biomechanics.
    “IBMD research is focused on the translation of advances in photonics and material technology into biomedical applications, and this type of technology development is well aligned to this vision,” says Professor Jin.
    “Once we have answered the fundamental science questions and discovered new mechanisms of photonics and material science, we then move to apply them. This new advance will allow us to use lower-power and less-invasive ways to trap nanoscopic objects, such as live cells and intracellular compartments, for high precision manipulation and nanoscale biomechanics measurement.”

  • Researchers discover that privacy-preserving tools leave private data anything but

    Machine-learning (ML) systems are becoming pervasive not only in technologies affecting our day-to-day lives, but also in those observing them, including facial expression recognition systems. Companies that make and use such widely deployed services rely on so-called privacy-preservation tools that often use generative adversarial networks (GANs), typically produced by a third party, to scrub images of individuals’ identity. But how good are they?
    Researchers at the NYU Tandon School of Engineering, who explored the machine-learning frameworks behind these tools, found that the answer is “not very.” In the paper “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” presented last month at the 35th AAAI Conference on Artificial Intelligence, a team led by Siddharth Garg, Institute Associate Professor of electrical and computer engineering at NYU Tandon, explored whether private data could still be recovered from images that had been “sanitized” by deep-learning tools such as privacy-protecting GANs (PP-GANs), even images that had passed empirical privacy tests. The team, including lead author Kang Liu, a Ph.D. candidate, and Benjamin Tan, research assistant professor of electrical and computer engineering, found that PP-GAN designs can, in fact, be subverted to pass privacy checks while still allowing secret information to be extracted from sanitized images.
    Machine-learning-based privacy tools have broad applicability, potentially in any privacy sensitive domain, including removing location-relevant information from vehicular camera data, obfuscating the identity of a person who produced a handwriting sample, or removing barcodes from images. The design and training of GAN-based tools are outsourced to vendors because of the complexity involved.
    “Many third-party tools for protecting the privacy of people who may show up on a surveillance or data-gathering camera use these PP-GANs to manipulate images,” said Garg. “Versions of these systems are designed to sanitize images of faces and other sensitive data so that only application-critical information is retained. While our adversarial PP-GAN passed all existing privacy checks, we found that it actually hid secret data pertaining to the sensitive attributes, even allowing for reconstruction of the original private image.”
    The study provides background on PP-GANs and associated empirical privacy checks, formulates an attack scenario to ask if empirical privacy checks can be subverted, and outlines an approach for circumventing empirical privacy checks.
    The team provides the first comprehensive security analysis of privacy-preserving GANs and demonstrates that existing privacy checks are inadequate to detect leakage of sensitive information.
    Using a novel steganographic approach, they adversarially modify a state-of-the-art PP-GAN to hide a secret (the user ID) in purportedly sanitized face images.
    They show that their proposed adversarial PP-GAN can successfully hide sensitive attributes in “sanitized” output images that pass privacy checks, with a 100% secret recovery rate.
    Noting that empirical metrics are dependent on discriminators’ learning capacities and training budgets, Garg and his collaborators argue that such privacy checks lack the necessary rigor for guaranteeing privacy.
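    To make the steganographic idea concrete, the toy sketch below hides and recovers a short secret in an image's least significant bits. It is illustrative only: the NYU Tandon attack adversarially trains the PP-GAN itself rather than post-processing pixels, and every name and parameter here is invented for the example.

        import numpy as np

        def hide(image, secret_bits):
            """Overwrite the least significant bit of the first pixels with the secret."""
            out = image.copy().ravel()
            out[:len(secret_bits)] = (out[:len(secret_bits)] & 0xFE) | secret_bits
            return out.reshape(image.shape)

        def recover(image, n_bits):
            """Read the secret back out of the least significant bits."""
            return image.ravel()[:n_bits] & 1

        rng = np.random.default_rng(0)
        sanitized = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # stand-in for a "sanitized" face image
        secret = np.unpackbits(np.frombuffer(b"ID:42", dtype=np.uint8))  # hypothetical user ID, as bits

        stego = hide(sanitized, secret)
        assert np.array_equal(recover(stego, len(secret)), secret)       # 100% secret recovery
        print(int(np.abs(stego.astype(int) - sanitized.astype(int)).max()))  # at most 1: visually indistinguishable

    An image altered only in its lowest bits passes any visual inspection yet carries the full secret, which is the spirit of the vulnerability the paper demonstrates.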
    “From a practical standpoint, our results sound a note of caution against the use of data sanitization tools, and specifically PP-GANs, designed by third parties,” explained Garg. “Our experimental results highlighted the insufficiency of existing DL-based privacy checks and the potential risks of using untrusted third-party PP-GAN tools.”

    Story Source:
    Materials provided by NYU Tandon School of Engineering.

  • High end of climate sensitivity in new climate models seen as less plausible

    A recent analysis of the latest generation of climate models — known as CMIP6 — provides a cautionary tale on interpreting climate simulations as scientists develop more sensitive and sophisticated projections of how the Earth will respond to increasing levels of carbon dioxide in the atmosphere.
    Researchers at Princeton University and the University of Miami reported that newer models with a high “climate sensitivity” — meaning they predict much greater global warming from the same levels of atmospheric carbon dioxide as other models — do not provide a plausible scenario of Earth’s future climate.
    Those models overstate the global cooling effect that arises from interactions between clouds and aerosols and project that clouds will moderate greenhouse gas-induced warming — particularly in the northern hemisphere — much more than climate records show actually happens, the researchers reported in the journal Geophysical Research Letters.
    Instead, the researchers found that models with lower climate sensitivity are more consistent with observed differences in temperature between the northern and southern hemispheres, and, thus, are more accurate depictions of projected climate change than the newer models. The study was supported by the Carbon Mitigation Initiative (CMI) based in Princeton’s High Meadows Environmental Institute (HMEI).
    These findings are potentially significant when it comes to climate-change policy, explained co-author Gabriel Vecchi, a Princeton professor of geosciences and the High Meadows Environmental Institute and principal investigator in CMI. Because models with higher climate sensitivity forecast greater warming from greenhouse gas emissions, they also project more dire — and imminent — consequences such as more extreme sea-level rise and heat waves.
    The high climate-sensitivity models forecast an increase in global average temperature of 2 to 6 degrees Celsius in response to a doubling of atmospheric carbon dioxide. The current scientific consensus is that the increase must be kept under 2 degrees to avoid catastrophic effects. The Paris Agreement sets a more ambitious threshold of 1.5 degrees Celsius.
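    For reference (standard background, not results from the Geophysical Research Letters paper), equilibrium climate sensitivity is the warming \Delta T_{2\times} at doubled CO2. With the commonly used logarithmic forcing approximation

        F = 5.35 \ln\!\left(\frac{C}{C_0}\right) \ \mathrm{W\,m^{-2}}, \qquad F_{2\times} = 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}},

    the sensitivity follows as \Delta T_{2\times} = F_{2\times} / \lambda, where \lambda is the climate feedback parameter in \mathrm{W\,m^{-2}\,K^{-1}}. The 2-6 degree spread among models reflects their differing feedbacks, clouds chief among them.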
    “A higher climate sensitivity would obviously necessitate much more aggressive carbon mitigation,” Vecchi said. “Society would need to reduce carbon emissions much more rapidly to meet the goals of the Paris Agreement and keep global warming below 2 degrees Celsius. Reducing the uncertainty in climate sensitivity helps us make a more reliable and accurate strategy to deal with climate change.”
    The researchers found that both the high and low climate-sensitivity models match global temperatures observed during the 20th century. The higher-sensitivity models, however, include a stronger cooling effect from aerosol-cloud interaction that offsets the greater warming due to greenhouse gases. Moreover, the models have aerosol emissions occurring primarily in the northern hemisphere, which is not consistent with observations.
    “Our results remind us that we should be cautious about a model result, even if the models accurately represent past global warming,” said first author Chenggong Wang, a Ph.D. candidate in Princeton’s Program in Atmospheric and Oceanic Sciences. “We show that the global average hides important details about the patterns of temperature change.”
    In addition to the main findings, the study helps shed light on how clouds can moderate warming both in models and the real world at large and small scales.
    “Clouds can amplify global warming and may cause warming to accelerate rapidly during the next century,” said co-author Wenchang Yang, an associate research scholar in geosciences at Princeton. “In short, improving our understanding and ability to correctly simulate clouds is really the key to more reliable predictions of the future.”
    Scientists at Princeton and other institutions have recently turned their focus to the effect that clouds have on climate change. Related research includes two papers by Amilcare Porporato, Princeton’s Thomas J. Wu ’94 Professor of Civil and Environmental Engineering and the High Meadows Environmental Institute and a member of the CMI leadership team, that reported on the future effect of heat-induced clouds on solar power and how climate models underestimate the cooling effect of the daily cloud cycle.
    “Understanding how clouds modulate climate change is at the forefront of climate research,” said co-author Brian Soden, a professor of atmospheric sciences at the University of Miami. “It is encouraging that, as this study shows, there are still many treasures we can exploit from historical climate observations that help refine the interpretations we get from global mean-temperature change.”

    Story Source:
    Materials provided by Princeton University. Original written by Morgan Kelly.

  • Filming a 3D video of a virus with instantaneous light and AI

    It is millions of trillions of times brighter than sunlight, and each pulse lasts a mere thousand-trillionth of a second, which is why it is aptly called “instantaneous light.” This is the light of the X-ray free-electron laser (XFEL), and it opens a new scientific paradigm. Combining it with AI, an international research team has succeeded in filming and restoring the 3D structure of nanoparticles that share structural similarities with viruses. With fear of a new pandemic growing around the world due to COVID-19, this discovery is attracting attention in academic circles as a way to image the structure of a virus with both high accuracy and speed.
    An international team of researchers from POSTECH, National University of Singapore (NUS), KAIST, GIST, and IBS has successfully analyzed the structural heterogeneities in the 3D structures of nanoparticles by irradiating thousands of nanoparticles per hour using the XFEL at Pohang Accelerator Laboratory (PAL) in Korea and restoring 3D multi-models through machine learning. The research team, led by Professor Changyong Song and Ph.D. candidate Do Hyung Cho of the Department of Physics at POSTECH, drove the international collaboration that realized it.
    Nanoparticles have a peculiar function that may not be available from native bulk materials, and one can control their physical and chemical properties by designing 3D structures and compositions of constituting elements.
    The commonality between nanoparticles and viruses is that they exist in the form of independent particles, rather than in regular, periodic crystalline arrangements, and, as such, their structures are not uniform at the nanometer level. To precisely understand their structures, it is necessary to statistically analyze the structures of individual particles using the whole ensemble distribution of structures from thousands to hundreds of thousands of specimens. However, electron microscopes often lack the penetration depth needed, limiting the size of the samples that can be probed, while conventional X-rays can damage the sample through the radiation itself, making it difficult to obtain sufficient resolution.
    The research team overcame the practical limitations of the conventional method by using the X-ray free electron laser and the machine learning method to observe the statistical distribution of the 3D structure of thousands of nanoparticles at the nanometer level. As a result, 3D structures of nanoparticles having a size of 300 nm were obtained with a resolution better than 20 nm.
    This achievement was particularly significant for restoring the 3D structures of thousands of nanoparticles using machine learning. Since conventional single-particle imaging techniques often assume an identical 3D structure across specimens, it was difficult to restore structures from actual experimental data in which the sample structure is not homogeneous. By introducing a multi-model approach, the researchers succeeded in restoring representative 3D structures. This enabled the classification of the nanoparticles into four major shapes, and confirmed that about 40% of them had similar structures.
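    The classification step can be pictured with a toy example. The sketch below is conceptual only, not the team's algorithm: it runs a plain k-means clustering over synthetic “density volumes” to show how heterogeneous particles can be sorted into a few representative shape classes before class-by-class reconstruction; all sizes and parameters are invented.

        import numpy as np

        def kmeans(X, k, iters=50, seed=0):
            """Minimal k-means: assign each particle to the nearest class center."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = X[labels == j].mean(axis=0)
            return labels

        # Stand-in data: 1,000 particles drawn from four underlying shapes,
        # each "structure" flattened to a 512-voxel density vector plus noise.
        rng = np.random.default_rng(1)
        shapes = rng.normal(size=(4, 512))
        members = rng.integers(0, 4, size=1000)
        particles = shapes[members] + 0.1 * rng.normal(size=(1000, 512))

        labels = kmeans(particles, k=4)
        print(np.bincount(labels) / 1000.0)  # fraction of particles per shape class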
    In addition, through quantitative analysis of the restored 3D structure, the international research collaboration team also uncovered the internal elastic strain distribution accompanied by the characteristic polyhedron structure of the nanoparticles and the inhomogeneous density distribution.
    “These findings enable the observation of 3D structure of noncrystalline viral specimens with inhomogeneously distributed internal molecules,” explained Professor Changyong Song of POSTECH. “Adding the 3D image restoration algorithm to this through machine learning shows promise to be applicable to studies of macromolecule structures or viruses in living organisms.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH).

  • Heat-free optical switch would enable optical quantum computing chips

    In a potential boost for quantum computing and communication, a European research collaboration reported a new method of controlling and manipulating single photons without generating heat. The solution makes it possible to integrate optical switches and single-photon detectors in a single chip.
    Publishing in Nature Communications, the team reported developing an optical switch that is reconfigured with microscopic mechanical movement rather than heat, making the switch compatible with heat-sensitive single-photon detectors.
    Optical switches in use today work by locally heating light guides inside a semiconductor chip. “This approach does not work for quantum optics,” says co-author Samuel Gyger, a PhD student at KTH Royal Institute of Technology in Stockholm.
    “Because we want to detect every single photon, we use quantum detectors that work by measuring the heat a single photon generates when absorbed by a superconducting material,” Gyger says. “If we use traditional switches, our detectors will be flooded by heat, and thus not work at all.”
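    The energy scales make the problem plain. A single telecom-band photon (1,550 nm is an assumed wavelength for illustration; the article does not specify one) carries

        E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV \cdot nm}}{1550\ \mathrm{nm}} \approx 0.8\ \mathrm{eV} \approx 1.3 \times 10^{-19}\ \mathrm{J},

    while a thermo-optic switch dissipates on the order of milliwatts continuously, many orders of magnitude more heat than the detector is built to sense.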
    The new method enables control of single photons without the disadvantage of heating up a semiconductor chip and thereby rendering single-photon detectors useless, says Carlos Errando Herranz, who conceived the research idea and led the work at KTH as part of the European Quantum Flagship project, S2QUIP.
    Using microelectromechanical systems (MEMS) actuation, the solution enables optical switching and photon detection on a single semiconductor chip while maintaining the cold temperatures required by single-photon detectors.
    “Our technology will help to connect all building blocks required for integrated optical circuits for quantum technologies,” Errando Herranz says.
    “Quantum technologies will enable secure message encryption and methods of computation that solve problems today’s computers cannot,” he says. “And they will provide simulation tools that enable us to understand fundamental laws of nature, which can lead to new materials and medicines.”
    The group will further develop the technology to make it compatible with typical electronics, which will involve reducing the voltages used in the experimental setup.
    Errando Herranz says that the group aims to integrate the fabrication process in semiconductor foundries that already fabricate on-chip optics — a necessary step in order to make quantum optic circuits large enough to fulfill some of the promises of quantum technologies.

    Story Source:
    Materials provided by KTH, Royal Institute of Technology.