More stories

• High end of climate sensitivity in new climate models seen as less plausible

A recent analysis of the latest generation of climate models — known as CMIP6 — provides a cautionary tale on interpreting climate simulations as scientists develop more sensitive and sophisticated projections of how the Earth will respond to increasing levels of carbon dioxide in the atmosphere.
    Researchers at Princeton University and the University of Miami reported that newer models with a high “climate sensitivity” — meaning they predict much greater global warming from the same levels of atmospheric carbon dioxide as other models — do not provide a plausible scenario of Earth’s future climate.
    Those models overstate the global cooling effect that arises from interactions between clouds and aerosols and project that clouds will moderate greenhouse gas-induced warming — particularly in the northern hemisphere — much more than climate records show actually happens, the researchers reported in the journal Geophysical Research Letters.
    Instead, the researchers found that models with lower climate sensitivity are more consistent with observed differences in temperature between the northern and southern hemispheres, and, thus, are more accurate depictions of projected climate change than the newer models. The study was supported by the Carbon Mitigation Initiative (CMI) based in Princeton’s High Meadows Environmental Institute (HMEI).
    These findings are potentially significant when it comes to climate-change policy, explained co-author Gabriel Vecchi, a Princeton professor of geosciences and the High Meadows Environmental Institute and principal investigator in CMI. Because models with higher climate sensitivity forecast greater warming from greenhouse gas emissions, they also project more dire — and imminent — consequences such as more extreme sea-level rise and heat waves.
The high climate-sensitivity models forecast an increase in global average temperature of 2 to 6 degrees Celsius for a doubling of atmospheric carbon dioxide. The current scientific consensus is that warming must be kept under 2 degrees Celsius to avoid catastrophic effects; the 2016 Paris Agreement sets an even stricter target of 1.5 degrees Celsius.

    “A higher climate sensitivity would obviously necessitate much more aggressive carbon mitigation,” Vecchi said. “Society would need to reduce carbon emissions much more rapidly to meet the goals of the Paris Agreement and keep global warming below 2 degrees Celsius. Reducing the uncertainty in climate sensitivity helps us make a more reliable and accurate strategy to deal with climate change.”
    The researchers found that both the high and low climate-sensitivity models match global temperatures observed during the 20th century. The higher-sensitivity models, however, include a stronger cooling effect from aerosol-cloud interaction that offsets the greater warming due to greenhouse gases. Moreover, the models have aerosol emissions occurring primarily in the northern hemisphere, which is not consistent with observations.
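The compensation described above can be illustrated with a toy energy-balance calculation (a minimal sketch with made-up numbers, not the models used in the study): a high-sensitivity parameter set with strong aerosol cooling and a lower-sensitivity set with weak aerosol cooling reproduce the same historical warming, but diverge sharply once aerosol emissions decline and greenhouse forcing grows.

```python
# Toy zero-dimensional energy-balance sketch (illustrative numbers, not CMIP6 output).
# Warming is approximated as dT = sensitivity * (ghg_forcing + aerosol_forcing).

def warming(sensitivity, ghg_forcing, aerosol_forcing):
    """Equilibrium warming (deg C) per unit net radiative forcing (W/m^2)."""
    return sensitivity * (ghg_forcing + aerosol_forcing)

# Historical period: ~2.0 W/m^2 of greenhouse forcing, partly offset by aerosols.
high_sens = dict(sensitivity=1.0, aerosol=-1.0)   # strong aerosol cooling
low_sens  = dict(sensitivity=0.5, aerosol=0.0)    # weak aerosol cooling

for name, p in [("high-sensitivity", high_sens), ("low-sensitivity", low_sens)]:
    past   = warming(p["sensitivity"], 2.0, p["aerosol"])        # both give ~1 deg C
    future = warming(p["sensitivity"], 6.0, 0.5 * p["aerosol"])  # aerosols decline, GHGs grow
    print(f"{name}: past {past:.1f} C, future {future:.1f} C")
```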
    “Our results remind us that we should be cautious about a model result, even if the models accurately represent past global warming,” said first author Chenggong Wang, a Ph.D. candidate in Princeton’s Program in Atmospheric and Oceanic Sciences. “We show that the global average hides important details about the patterns of temperature change.”
    In addition to the main findings, the study helps shed light on how clouds can moderate warming both in models and the real world at large and small scales.
    “Clouds can amplify global warming and may cause warming to accelerate rapidly during the next century,” said co-author Wenchang Yang, an associate research scholar in geosciences at Princeton. “In short, improving our understanding and ability to correctly simulate clouds is really the key to more reliable predictions of the future.”
    Scientists at Princeton and other institutions have recently turned their focus to the effect that clouds have on climate change. Related research includes two papers by Amilcare Porporato, Princeton’s Thomas J. Wu ’94 Professor of Civil and Environmental Engineering and the High Meadows Environmental Institute and a member of the CMI leadership team, that reported on the future effect of heat-induced clouds on solar power and how climate models underestimate the cooling effect of the daily cloud cycle.
    “Understanding how clouds modulate climate change is at the forefront of climate research,” said co-author Brian Soden, a professor of atmospheric sciences at the University of Miami. “It is encouraging that, as this study shows, there are still many treasures we can exploit from historical climate observations that help refine the interpretations we get from global mean-temperature change.”

    Story Source:
Materials provided by Princeton University. Original written by Morgan Kelly. Note: Content may be edited for style and length.

• Filming a 3D video of a virus with instantaneous light and AI

It is millions of trillions of times brighter than sunlight and lasts a mere one-thousand-trillionth of a second, which is why it is aptly called “instantaneous light.” This is X-ray Free Electron Laser (XFEL) light, which is opening a new scientific paradigm. Combining it with AI, an international research team has succeeded in filming and restoring the 3D structure of nanoparticles that share structural similarities with viruses. With COVID-19 stoking fears of a new pandemic around the world, the discovery is attracting attention in academic circles as a way to image virus structures with both high accuracy and speed.
An international team of researchers from POSTECH, the National University of Singapore (NUS), KAIST, GIST, and IBS has successfully analyzed the structural heterogeneity of 3D nanoparticle structures by irradiating thousands of nanoparticles per hour with the XFEL at the Pohang Accelerator Laboratory (PAL) in Korea and restoring 3D multi-models through machine learning. The international collaboration was led by Professor Changyong Song and Ph.D. candidate Do Hyung Cho of the Department of Physics at POSTECH.
Nanoparticles have distinctive properties that are not available in native bulk materials, and their physical and chemical properties can be controlled by designing the 3D structure and composition of their constituent elements.
What nanoparticles and viruses have in common is that they exist as independent particles, rather than in regular, periodic crystalline arrangements, and as such their structures are not uniform at the nanometer level. To understand these structures precisely, the structures of individual particles must be analyzed statistically across the whole ensemble, from thousands to hundreds of thousands of specimens. However, electron microscopes often lack the penetration needed to do this, which limits the size of the samples that can be probed, while conventional X-rays can damage the sample through the radiation itself, making it difficult to obtain sufficient resolution.
    The research team overcame the practical limitations of the conventional method by using the X-ray free electron laser and the machine learning method to observe the statistical distribution of the 3D structure of thousands of nanoparticles at the nanometer level. As a result, 3D structures of nanoparticles having a size of 300 nm were obtained with a resolution better than 20 nm.
A particularly significant part of the achievement was restoring the 3D structures of thousands of nanoparticles using machine learning. Conventional single-particle imaging techniques often assume that every specimen shares an identical 3D structure, which makes reconstruction difficult for real experimental data in which the samples are not homogeneous. By introducing a multi-model approach, the researchers succeeded in restoring representative 3D structures, classifying the nanoparticles into four major shapes and confirming that about 40% of them had similar structures.
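The team's multi-model reconstruction is more sophisticated than a news summary can convey, but the classification step it enables, grouping per-particle 3D reconstructions into a handful of representative shapes, can be pictured with ordinary clustering. The sketch below uses placeholder random arrays and scikit-learn, purely as a conceptual illustration rather than the team's actual pipeline.

```python
# Conceptual sketch: group reconstructed 3D volumes into representative shape classes.
# 'volumes' stands in for an array of flattened per-particle density maps.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volumes = rng.random((500, 16 * 16 * 16))   # placeholder for real reconstructions

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(volumes)
labels = kmeans.labels_

# Fraction of particles falling into each of the four shape classes.
for k in range(4):
    frac = np.mean(labels == k)
    print(f"shape class {k}: {frac:.1%} of particles")
```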
In addition, through quantitative analysis of the restored 3D structures, the international research team also uncovered the internal elastic strain distribution accompanying the nanoparticles’ characteristic polyhedral structure, as well as their inhomogeneous density distribution.
    “These findings enable the observation of 3D structure of noncrystalline viral specimens with inhomogeneously distributed internal molecules,” explained Professor Changyong Song of POSTECH. “Adding the 3D image restoration algorithm to this through machine learning shows promise to be applicable to studies of macromolecule structures or viruses in living organisms.”

    Story Source:
Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

• Heat-free optical switch would enable optical quantum computing chips

    In a potential boost for quantum computing and communication, a European research collaboration reported a new method of controlling and manipulating single photons without generating heat. The solution makes it possible to integrate optical switches and single-photon detectors in a single chip.
Publishing in Nature Communications, the team reported that it has developed an optical switch that is reconfigured with microscopic mechanical movement rather than heat, making the switch compatible with heat-sensitive single-photon detectors.
    Optical switches in use today work by locally heating light guides inside a semiconductor chip. “This approach does not work for quantum optics,” says co-author Samuel Gyger, a PhD student at KTH Royal Institute of Technology in Stockholm.
    “Because we want to detect every single photon, we use quantum detectors that work by measuring the heat a single photon generates when absorbed by a superconducting material,” Gyger says. “If we use traditional switches, our detectors will be flooded by heat, and thus not work at all.”
    The new method enables control of single photons without the disadvantage of heating up a semiconductor chip and thereby rendering single-photon detectors useless, says Carlos Errando Herranz, who conceived the research idea and led the work at KTH as part of the European Quantum Flagship project, S2QUIP.
Using microelectromechanical systems (MEMS) actuation, the solution enables optical switching and photon detection on a single semiconductor chip while maintaining the cold temperatures required by single-photon detectors.
    “Our technology will help to connect all building blocks required for integrated optical circuits for quantum technologies,” Errando Herranz says.
    “Quantum technologies will enable secure message encryption and methods of computation that solve problems today’s computers cannot,” he says. “And they will provide simulation tools that enable us to understand fundamental laws of nature, which can lead to new materials and medicines.”
    The group will further develop the technology to make it compatible with typical electronics, which will involve reducing the voltages used in the experimental setup.
    Errando Herranz says that the group aims to integrate the fabrication process in semiconductor foundries that already fabricate on-chip optics — a necessary step in order to make quantum optic circuits large enough to fulfill some of the promises of quantum technologies.

    Story Source:
Materials provided by KTH, Royal Institute of Technology. Note: Content may be edited for style and length.

• New search engine for single cell atlases

A new software tool allows researchers to quickly query datasets generated from single-cell sequencing. Users can identify the cell types in which any combination of genes is active. Published in Nature Methods on 1st March, the open-access ‘scfind’ software enables swift analysis of multiple datasets containing millions of cells by a wide range of users, on a standard computer.
    Processing times for such datasets are just a few seconds, saving time and computing costs. The tool, developed by researchers at the Wellcome Sanger Institute, can be used much like a search engine, as users can input free text as well as gene names.
    Techniques to sequence the genetic material from an individual cell have advanced rapidly over the last 10 years. Single-cell RNA sequencing (scRNAseq), used to assess which genes are active in individual cells, can be used on millions of cells at once and generates vast amounts of data (2.2 GB for the Human Kidney Atlas). Projects including the Human Cell Atlas and the Malaria Cell Atlas are using such techniques to uncover and characterise all of the cell types present in an organism or population. Data must be easy to access and query, by a wide range of researchers, to get the most value from them.
    To allow for fast and efficient access, a new software tool called scfind uses a two-step strategy to compress data ~100-fold. Efficient decompression makes it possible to quickly query the data. Developed by researchers at the Wellcome Sanger Institute, scfind can perform large scale analysis of datasets involving millions of cells on a standard computer without special hardware. Queries that used to take days to return a result, now take seconds.
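The scfind compression scheme itself is more elaborate, but the core idea behind such a query engine, an inverted index from genes to the cells that express them, intersected for multi-gene queries and then tallied by cell type, can be sketched in a few lines. The gene names, cell IDs, and functions below are toy examples, not the actual scfind API.

```python
# Toy inverted index: gene -> set of cell IDs in which the gene is detected.
# Multi-gene queries intersect the sets, then tally the cell types of the hits.
from collections import Counter

index = {
    "INS": {"c1", "c2", "c7"},
    "GCG": {"c3", "c7"},
    "SST": {"c4", "c7", "c9"},
}
cell_type = {"c1": "beta", "c2": "beta", "c3": "alpha",
             "c4": "delta", "c7": "beta", "c9": "delta"}

def cells_expressing(genes):
    """Cells in which every queried gene is detected (empty query -> no cells)."""
    sets = [index[g] for g in genes if g in index]
    return set.intersection(*sets) if sets else set()

hits = cells_expressing(["INS", "GCG"])
print(Counter(cell_type[c] for c in hits))   # Counter({'beta': 1}) for this toy data
```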
    The new tool can also be used for analyses of multi-omics data, for example by combining single-cell ATAC-seq data, which measures epigenetic activity, with scRNAseq data.
    Dr Jimmy Lee, Postdoctoral Fellow at the Wellcome Sanger Institute, and lead author of the research, said: “The advances of multiomics methods have opened up an unprecedented opportunity to appreciate the landscape and dynamics of gene regulatory networks. Scfind will help us identify the genomic regions that regulate gene activity — even if those regions are distant from their targets.”
    Scfind can also be used to identify new genetic markers that are associated with, or define, a cell type. The researchers show that scfind is a more accurate and precise method to do this, compared with manually curated databases or other computational methods available.
    To make scfind more user friendly, it incorporates techniques from natural language processing to allow for arbitrary queries.
    Dr Martin Hemberg, former Group Leader at the Wellcome Sanger Institute, now at Harvard Medical School and Brigham and Women’s Hospital, said: “Analysis of single-cell datasets usually requires basic programming skills and expertise in genetics and genomics. To ensure that large single-cell datasets can be accessed by a wide range of users, we developed a tool that can function like a search engine — allowing users to input any query and find relevant cell types.”
    Dr Jonah Cool, Science Program Officer at the Chan Zuckerberg Initiative, said: “New, faster analysis methods are crucial for finding promising insights in single-cell data, including in the Human Cell Atlas. User-friendly tools like scfind are accelerating the pace of science and the ability of researchers to build off of each other’s work, and the Chan Zuckerberg Initiative is proud to support the team that developed this technology.”

    Story Source:
Materials provided by Wellcome Trust Sanger Institute. Note: Content may be edited for style and length.

• Helping soft robots turn rigid on demand

    Imagine a robot.
    Perhaps you’ve just conjured a machine with a rigid, metallic exterior. While robots armored with hard exoskeletons are common, they’re not always ideal. Soft-bodied robots, inspired by fish or other squishy creatures, might better adapt to changing environments and work more safely with people.
    Roboticists generally have to decide whether to design a hard- or soft-bodied robot for a particular task. But that tradeoff may no longer be necessary.
    Working with computer simulations, MIT researchers have developed a concept for a soft-bodied robot that can turn rigid on demand. The approach could enable a new generation of robots that combine the strength and precision of rigid robots with the fluidity and safety of soft ones.
    “This is the first step in trying to see if we can get the best of both worlds,” says James Bern, the paper’s lead author and a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
    Bern will present the research at the IEEE International Conference on Soft Robotics next month. Bern’s advisor, Daniela Rus, who is the CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the paper’s other author.

    Roboticists have experimented with myriad mechanisms to operate soft robots, including inflating balloon-like chambers in a robot’s arm or grabbing objects with vacuum-sealed coffee grounds. However, a key unsolved challenge for soft robotics is control — how to drive the robot’s actuators in order to achieve a given goal.
Until recently, most soft robots were controlled manually, but in 2017 Bern and his colleagues proposed that an algorithm could take the reins. Using a simulation to help control a cable-driven soft robot, they picked a target position for the robot and had a computer figure out how much to pull on each of the cables in order to get there. A similar sequence happens in our bodies each time we reach for something: A target position for our hand is translated into contractions of the muscles in our arm.
    Now, Bern and his colleagues are using similar techniques to ask a question that goes beyond the robot’s movement: “If I pull the cables in just the right way, can I get the robot to act stiff?” Bern says he can — at least in a computer simulation — thanks to inspiration from the human arm. While contracting the biceps alone can bend your elbow to a certain degree, contracting the biceps and triceps simultaneously can lock your arm rigidly in that position. Put simply, “you can get stiffness by pulling on both sides of something,” says Bern. So, he applied the same principle to his robots.
    The researchers’ paper lays out a way to simultaneously control the position and stiffness of a cable-driven soft robot. The method takes advantage of the robots’ multiple cables — using some to twist and turn the body, while using others to counterbalance each other to tweak the robot’s rigidity. Bern emphasizes that the advance isn’t a revolution in mechanical engineering, but rather a new twist on controlling cable-driven soft robots.
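The co-contraction principle described above, where the difference between cable tensions sets the pose and their sum sets the stiffness, can be written down for a toy one-joint, two-cable robot. This is a minimal sketch under simple assumptions (stiffening, spring-like cables with illustrative constants), not the authors' simulation code.

```python
import math

# Toy 1-DOF joint driven by two antagonistic cables with moment arm R.
# Each cable behaves as a stiffening spring, T = C * x**2, so its local stiffness
# dT/dx = 2*sqrt(C*T) grows with tension: pulling both cables harder stiffens the
# joint without moving it.

R = 0.02      # moment arm (m), illustrative
C = 4.0e5     # cable stiffening coefficient (N/m^2), illustrative

def net_torque(t_left, t_right):
    """Net joint torque (N*m) from the two cable tensions (N)."""
    return R * (t_left - t_right)

def joint_stiffness(t_left, t_right):
    """Rotational stiffness (N*m/rad): both cables resist a small rotation."""
    k_left = 2.0 * math.sqrt(C * t_left)
    k_right = 2.0 * math.sqrt(C * t_right)
    return R**2 * (k_left + k_right)

# Same pose (zero net torque), very different stiffness:
print(net_torque(5.0, 5.0), joint_stiffness(5.0, 5.0))      # relaxed
print(net_torque(50.0, 50.0), joint_stiffness(50.0, 50.0))  # co-contracted: ~3x stiffer
```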
    “This is an intuitive way of expanding how you can control a soft robot,” he says. “It’s just encoding that idea [of on-demand rigidity] into something a computer can work with.” Bern hopes his roadmap will one day allow users to control a robot’s rigidity as easily as its motion.
    On the computer, Bern used his roadmap to simulate movement and rigidity adjustment in robots of various shapes. He tested how well the robots, when stiffened, could resist displacement when pushed. Generally, the robots remained rigid as intended, though they were not equally resistant from all angles.
    Bern is building a prototype robot to test out his rigidity-on-demand control system. But he hopes to one day take the technology out of the lab. “Interacting with humans is definitely a vision for soft robotics,” he says. Bern points to potential applications in caring for human patients, where a robot’s softness could enhance safety, while its ability to become rigid could allow for lifting when necessary.
“The core message is to make it easy to control robots’ stiffness,” says Bern. “Let’s start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform.”

• New generation of tiny, agile drones introduced

    If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.
    Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.
    Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.
    Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” says Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”
    According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen says, for insect-like robots “you need to look for alternatives.”
    The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.
    Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat — fast.
    Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumble bee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.
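As a rough sense of scale, the electrostatic squeeze that this class of soft actuator relies on can be estimated from the Maxwell stress on a dielectric elastomer layer. The material values and drive voltage below are generic assumptions for illustration, not Chen's design parameters.

```python
# Back-of-the-envelope estimate of electrostatic (Maxwell-stress) strain in a
# dielectric elastomer actuator. All values are illustrative assumptions.

EPS0 = 8.854e-12      # vacuum permittivity (F/m)

def maxwell_strain(voltage, thickness, eps_r=3.0, youngs_modulus=1.0e6):
    """Approximate thickness strain of a dielectric elastomer layer (small-strain limit)."""
    e_field = voltage / thickness                 # electric field (V/m)
    pressure = EPS0 * eps_r * e_field**2          # Maxwell stress (Pa)
    return pressure / youngs_modulus              # strain ~ stress / modulus

# A 20-micrometer rubber layer driven at 1 kV squeezes by roughly 6-7 percent.
print(f"strain ~ {maxwell_strain(1000.0, 20e-6):.1%}")
```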
    Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he says. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.
    Chen says his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”
    Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” says Chen. Sometimes, bigger isn’t better.

    Story Source:
Materials provided by Massachusetts Institute of Technology. Original written by Daniel Ackerman. Note: Content may be edited for style and length.

• Environmental impact of computation and the future of green computing

    When you think about your carbon footprint, what comes to mind? Driving and flying, probably. Perhaps home energy consumption or those daily Amazon deliveries. But what about watching Netflix or having Zoom meetings? Ever thought about the carbon footprint of the silicon chips inside your phone, smartwatch or the countless other devices inside your home?
Every aspect of modern computing, from the smallest chip to the largest data center, comes with a carbon price tag. For the better part of a century, the tech industry and the field of computation as a whole have focused on building smaller, faster, more powerful devices — but few have considered their overall environmental impact.
    Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are trying to change that.
    “Over the next decade, the demand, number and types of devices is only going to grow,” said Udit Gupta, a PhD candidate in Computer Science at SEAS. “We want to know what impact that will have on the environment and how we, as a field, should be thinking about how we adopt more sustainable practices.”
    Gupta, along with Gu-Yeon Wei, the Robert and Suzanne Case Professor of Electrical Engineering and Computer Science, and David Brooks, the Haley Family Professor of Computer Science, will present a paper on the environmental footprint of computing at the IEEE International Symposium on High-Performance Computer Architecture on March 3rd, 2021.
    The SEAS research is part of a collaboration with Facebook, where Gupta is an intern, and Arizona State University.

    The team not only explored every aspect of computing, from chip architecture to data center design, but also mapped the entire lifetime of a device, from manufacturing to recycling, to identify the stages where the most emissions occur.
    The team found that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure.
    “A lot of the focus has been on how we reduce the amount of energy used by computers, but we found that it’s also really important to think about the emissions from just building these processors,” said Brooks. “If manufacturing is really important to emissions, can we design better processors? Can we reduce the complexity of our devices so that manufacturing emissions are lower?”
    Take chip design, for example.
    Today’s chips are optimized for size, performance and battery life. The typical chip is about 100 square millimeters of silicon and houses billions of transistors. But at any given time, only a portion of that silicon is being used. In fact, if all the transistors were fired up at the same time, the device would exhaust its battery life and overheat. This so-called dark silicon improves a device’s performance and battery life but it’s wildly inefficient if you consider the carbon footprint that goes into manufacturing the chip.

    “You have to ask yourself, what is the carbon impact of that added performance,” said Wei. “Dark silicon offers a boost in energy efficiency but what’s the cost in terms of manufacturing? Is there a way to design a smaller and smarter chip that uses all of the silicon available? That is a really intricate, interesting, and exciting problem.”
    The same issues face data centers. Today, data centers, some of which span many millions of square feet, account for 1 percent of global energy consumption, a number that is expected to grow.
    As cloud computing continues to grow, decisions about where to run applications — on a device or in a data center — are being made based on performance and battery life, not carbon footprint.
    “We need to be asking what’s greener, running applications on the device or in a data center,” said Gupta. “These decisions must optimize for global carbon emissions by taking into account application characteristics, efficiency of each hardware device, and varying power grids over the day.”
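The kind of trade-off Gupta describes can be made concrete with a toy comparison. The energy figures, grid carbon intensities, and embodied-carbon amortization below are assumptions chosen purely for illustration: operational emissions depend on where the work runs and how clean that grid is at that hour, and a share of manufacturing emissions has to be counted on both sides.

```python
# Toy comparison of running one workload on-device vs. in a data center.
# All numbers are illustrative assumptions, not measured values.

def carbon_grams(energy_kwh, grid_g_per_kwh, embodied_g_per_run):
    """Total emissions for one run: operational + amortized manufacturing share."""
    return energy_kwh * grid_g_per_kwh + embodied_g_per_run

on_device = carbon_grams(energy_kwh=0.002, grid_g_per_kwh=400, embodied_g_per_run=0.5)
in_dc = (
    carbon_grams(energy_kwh=0.001, grid_g_per_kwh=200, embodied_g_per_run=0.3)
    + 0.0005 * 400   # network transfer energy, charged at the device-side grid
)

print(f"on-device: {on_device:.2f} gCO2e, data center: {in_dc:.2f} gCO2e")
```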
    The researchers are also challenging industry to look at the chemicals used in manufacturing.
    Adding environmental impact to the parameters of computational design requires a massive cultural shift in every level of the field, from undergraduate CS students to CEOs.
    To that end, Brooks has partnered with Embedded EthiCS, a Harvard program that embeds philosophers directly into computer science courses to teach students how to think through the ethical and social implications of their work. Brooks is including an Embedded EthiCS module on computational sustainability in COMPSCI 146: Computer Architecture this spring.
    The researchers also hope to partner with faculty from Environmental Science and Engineering at SEAS and the Harvard University Center for the Environment to explore how to enact change at the policy level.
    “The goal of this paper is to raise awareness of the carbon footprint associated with computing and to challenge the field to add carbon footprint to the list of metrics we consider when designing new processes, new computing systems, new hardware, and new ways to use devices. We need this to be a primary objective in the development of computing overall,” said Wei.
The paper was co-authored by Sylvia Lee, Jordan Tse, Hsien-Hsin S. Lee and Carole-Jean Wu from Facebook and Young Geun Kim from Arizona State University.

• A quantum internet is closer to reality, thanks to this switch

    When quantum computers become more powerful and widespread, they will need a robust quantum internet to communicate.
    Purdue University engineers have addressed an issue barring the development of quantum networks that are big enough to reliably support more than a handful of users.
    The method, demonstrated in a paper published in Optica, could help lay the groundwork for when a large number of quantum computers, quantum sensors and other quantum technology are ready to go online and communicate with each other.
    The team deployed a programmable switch to adjust how much data goes to each user by selecting and redirecting wavelengths of light carrying the different data channels, making it possible to increase the number of users without adding to photon loss as the network gets bigger.
    If photons are lost, quantum information is lost — a problem that tends to happen the farther photons have to travel through fiber optic networks.
    “We show a way to do wavelength routing with just one piece of equipment — a wavelength-selective switch — to, in principle, build a network of 12 to 20 users, maybe even more,” said Andrew Weiner, Purdue’s Scifres Family Distinguished Professor of Electrical and Computer Engineering. “Previous approaches have required physically interchanging dozens of fixed optical filters tuned to individual wavelengths, which made the ability to adjust connections between users not practically viable and photon loss more likely.”
    Instead of needing to add these filters each time that a new user joins the network, engineers could just program the wavelength-selective switch to direct data-carrying wavelengths over to each new user — reducing operational and maintenance costs as well as making a quantum internet more efficient.
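One way to picture the wavelength-selective switch is as a programmable routing table over the channels of a broadband entangled-photon source: energy-matched channel pairs (labeled +k and -k here) are steered to the two users who should share entanglement, and giving a user pair more channel pairs gives them more bandwidth. The sketch below is a conceptual illustration with made-up channel labels and user names, not the Purdue team's control software.

```python
# Conceptual routing table for a wavelength-selective switch distributing
# entanglement. Channels +k and -k of a broadband entangled-photon source are
# correlated, so sending +k to one user and -k to another entangles that pair.

routing = {}          # wavelength channel -> user port

def connect(user_a, user_b, channels):
    """Assign a set of correlated channel pairs (+k/-k) to one user pair."""
    for k in channels:
        routing[+k] = user_a
        routing[-k] = user_b

connect("Alice", "Bob", channels=[1, 2, 3])   # higher-bandwidth link: 3 channel pairs
connect("Alice", "Carol", channels=[4])       # lighter link: 1 channel pair

# Reconfiguring the network is just reprogramming the table; no filters are swapped.
for channel, user in sorted(routing.items()):
    print(f"channel {channel:+d} -> {user}")
```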

    The wavelength-selective switch also can be programmed to adjust bandwidth according to a user’s needs, which has not been possible with fixed optical filters. Some users may be using applications that require more bandwidth than others, similarly to how watching shows through a web-based streaming service uses more bandwidth than sending an email.
For a quantum internet, forming connections between users and adjusting bandwidth means distributing entanglement, the ability of photons to maintain a fixed quantum mechanical relationship with one another no matter how far apart they may be, in order to connect users in a network. Entanglement plays a key role in quantum computing and quantum information processing.
    “When people talk about a quantum internet, it’s this idea of generating entanglement remotely between two different stations, such as between quantum computers,” said Navin Lingaraju, a Purdue Ph.D. student in electrical and computer engineering. “Our method changes the rate at which entangled photons are shared between different users. These entangled photons might be used as a resource to entangle quantum computers or quantum sensors at the two different stations.”
    Purdue researchers performed the study in collaboration with Joseph Lukens, a research scientist at Oak Ridge National Laboratory. The wavelength-selective switch that the team deployed is based on similar technology used for adjusting bandwidth for today’s classical communication.
    The switch also is capable of using a “flex grid,” like classical lightwave communications now uses, to partition bandwidth to users at a variety of wavelengths and locations rather than being restricted to a series of fixed wavelengths, each of which would have a fixed bandwidth or information carrying capacity at fixed locations.
    “For the first time, we are trying to take something sort of inspired by these classical communications concepts using comparable equipment to point out the potential advantages it has for quantum networks,” Weiner said.
    The team is working on building larger networks using the wavelength-selective switch. The work was funded by the U.S. Department of Energy, the National Science Foundation and Oak Ridge National Laboratory.

    Story Source:
Materials provided by Purdue University. Original written by Kayla Wiles. Note: Content may be edited for style and length.