More stories

  • Internet access must become a human right or we risk ever-widening inequality

    People around the globe are so dependent on the internet to exercise socio-economic human rights such as education, healthcare, work, and housing that online access must now be considered a basic human right, a new study reveals.
    Particularly in developing countries, internet access can make the difference between people receiving an education, staying healthy, finding a home, and securing employment — or not.
    Even if people have offline opportunities, such as accessing social security schemes or finding housing, they are at a comparative disadvantage to those with internet access.
    Publishing his findings today in Politics, Philosophy & Economics, Dr Merten Reglitz, Lecturer in Global Ethics at the University of Birmingham, calls for a standalone human right to internet access — based on it being a practical necessity for a range of socio-economic human rights.
    He calls for public authorities to provide internet access free of charge for those unable to afford it, as well as providing training in basic digital skills for all citizens and protecting online access from arbitrary interference by states and private companies.
    Dr Reglitz commented: “The internet has unique and fundamental value for the realisation of many of our socio-economic human rights — allowing users to submit job applications, send medical information to healthcare professionals, manage their finances and business, make social security claims, and submit educational assessments.
    “The internet’s structure enables a mutual exchange of information that has the potential to contribute to the progress of humankind as a whole — potential that should be protected and deployed by declaring access to the Internet a human right.”
    The study outlines several areas in developed countries where internet access is essential to exercise socio-economic human rights:
    • Education — students in internet-free households are disadvantaged in obtaining a good school education, with essential learning aids and study materials online.
    • Health — providing in-person healthcare to remote communities can be challenging, particularly in the US and Canada. Online healthcare can help to plug this gap.
    • Housing — in many developed countries, significant parts of the rental housing market have moved online.
    • Social security — accessing these public services today is often unreasonably difficult without internet access.
    • Work — jobs are increasingly advertised in real time online, and people must be able to access relevant websites to make effective use of their right to work.
    Dr Reglitz’s research also highlights similar problems for people without internet access in developing countries — for example, 20 per cent of children aged 6 to 11 are out of school in sub-Saharan Africa. Many children face long walks to school, where classes are routinely very large, buildings are crumbling and unsanitary, and teachers are in short supply.
    However, online education tools can make a significant difference — allowing children living remotely from schools to complete their education. More students can be taught more effectively if teaching materials are available digitally and pupils do not have to share books.
    For people in developing countries, internet access can also make the difference between receiving an adequate level of healthcare or receiving none. Digital health tools can help diagnose illnesses — for example, in Kenya, a smartphone-based Portable Eye Examination Kit (Peek) has been used to test people’s eyesight and identify people who need treatment, especially in remote areas underserved by medical practitioners.
    In developing countries, people are often confronted with a lack of brick-and-mortar banks, and internet access makes financial inclusion possible. Small businesses can also raise money through online crowdfunding platforms — the World Bank expects such sums raised in Africa to rise from $32 million in 2015 to $2.5 billion in 2025.

  • Baseball’s home run boom is due, in part, to climate change

    Baseball is the best sport in the world for numberphiles. There are so many stats collected that the analysis of them even has its own name: sabermetrics. Like in Moneyball, team managers, coaches and players use these statistics in game strategy, but the mountain of available data can also be put to other uses.

    Researchers have now mined baseball’s number hoard to show that climate change caused more than 500 home runs since 2010, with higher air temperatures contributing to the sport’s ongoing home run heyday. The results appear April 7 in the Bulletin of the American Meteorological Society.

    Many factors have led to players hitting it out of the park more often in the last 40 years, from steroid use to the height of the stitches on the ball. Blog posts and news stories have also speculated about whether climate change could be increasing the number of home runs, says Christopher Callahan of Dartmouth College (SN: 3/10/22). “But nobody had quantitatively investigated it.”

    A climate change researcher and baseball fan, Callahan decided to dig into the sport’s mound of data in his free time to answer the question. After he gave a brief presentation at Dartmouth on the topic, two researchers from different fields joined the project.

    That collaboration produced an analysis that is methodologically sound and “does what it says,” says Madeleine Orr, a researcher of the impacts of climate change on sports at Loughborough University London, who was not involved with the study.

    The theorized relationship between global warming and home runs stems from fundamental physics — the ideal gas law says that as temperature goes up, air density goes down, reducing air resistance. To see whether warming was adding home runs, Callahan and colleagues took several approaches.

    First, the team looked for an effect at the game level. Across more than 100,000 MLB games, the researchers found that a 1-degree Celsius increase in the daily high temperature increased the number of home runs in a game by nearly 2 percent. For example, a game like the one on June 10, 2019, where the Arizona Diamondbacks and Philadelphia Phillies set the record for most home runs in a game, would be expected to have 14 home runs instead of 13 if it were 4 degrees C warmer.
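    As a rough check on those figures, here is a minimal sketch (illustrative, not the study’s code) that computes dry-air density from the ideal gas law and applies the roughly 2-percent-per-degree scaling to that record game. The sea-level pressure and dry-air gas constant are standard reference values, not numbers from the paper.

    ```python
    # Rough check of the two claims above: warmer air is less dense (ideal
    # gas law), and each 1 degree C of warming adds roughly 2% more home runs.
    R_DRY_AIR = 287.05     # J/(kg*K), specific gas constant for dry air
    P_SEA_LEVEL = 101_325  # Pa, standard sea-level pressure

    def air_density(temp_c: float, pressure_pa: float = P_SEA_LEVEL) -> float:
        """Dry-air density from the ideal gas law: rho = P / (R * T)."""
        return pressure_pa / (R_DRY_AIR * (temp_c + 273.15))

    def expected_home_runs(base_hr: float, warming_c: float) -> float:
        """Scale a game's home run count by ~2% per degree C of warming."""
        return base_hr * 1.02 ** warming_c

    print(f"{air_density(25):.4f} kg/m^3 at 25 C")  # ~1.184
    print(f"{air_density(29):.4f} kg/m^3 at 29 C")  # ~1.168, thinner air

    # The record 13-home-run game, replayed 4 degrees C warmer:
    print(f"{expected_home_runs(13, 4):.1f} expected home runs")  # ~14.1
    ```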

    The researchers then ran game-day temperatures through a climate model that controls for greenhouse gas emissions and found that human-caused warming led to an average of 58 more home runs each season from 2010 to 2019. The analysis also showed that the overall trend of more home runs in higher temperatures goes back to the 1960s.

    The team followed that analysis with a look at more than 220,000 individual batted balls, made possible by the Statcast system, in which high-speed cameras have tracked the trajectory and speed of every ball hit during a game since 2015. The researchers compared balls hit in almost exactly the same way on days with different temperatures, while controlling for other factors like wind speed and humidity. That analysis showed a similar increase in home runs per degree Celsius as the game-level analysis, leaving lower air density at higher temperatures as the only remaining explanation for the additional home runs.

    While climate change has “not been the dominant effect” causing more home runs, “if we continue to emit greenhouse gases strongly, we could see much more rapid increases in home runs” moving forward, Callahan says.

    Some fans feel that the prevalence of home runs has made baseball duller, and it’s at least part of the reason that the MLB unveiled several new rule changes for the 2023 season, Callahan says.

    Teams can adapt to rising temperatures by shifting day games to night games and adding domes to stadiums — the researchers found no effect of temperature on home runs for games played under a dome. But according to Orr, climate change may soon cause even more dramatic changes to America’s pastime, even with those adaptations.

    Because the sport is susceptible to snow, storms, wildfires, flooding and heat at various points during the season, Orr says, “I don’t think, without substantial change, baseball exists in the current model” within 30 years.

    Callahan agrees. “This sport, and all sports, are going to see major changes in ways that we cannot anticipate.”

  • ‘Jet packs’ and ultrasounds could reveal secrets of pregnant whale sharks

    How do you know if the world’s largest living fish is expecting babies? Not by her bulging belly, it turns out.

    Scientists thought that an enlarged area on the undersides of female whale sharks was a sign of pregnancy. But ultrasound scans, performed for the first time on free-swimming animals, showed only skin and muscle beneath the bumps. These humps might instead be a secondary sex characteristic of mature females, like breasts on humans, researchers report in the March 23 Endangered Species Research.

    The ultrasound is part of a suite of new methods including underwater “jet packs” and blood tests that scientists hope could unlock secrets about this creature’s reproduction.

    Whale sharks (Rhincodon typus) are classified as globally endangered by the International Union for Conservation of Nature. There are only an estimated 100,000 to 238,000 individuals left worldwide, which is more than a 50 percent decline in the last 75 years.

    In part because whale sharks are relatively rare, their reproductive biology is mostly a mystery (SN: 8/1/22). Nearly everything biologists think they know is based on the examination of one pregnant female caught by a commercial fishing boat in 1995.

    “Protecting organisms without knowing about their biology is like trying to catch a fly with our eyes closed,” says Rui Matsumoto, a fisheries biologist with the Okinawa Churashima Foundation in Japan. The organization researches subtropical animals and plants to maintain or improve natural resources in national parks.

    To learn more about these gentle giants, Matsumoto and shark biologist Kiyomi Murakumo of Japan’s Okinawa Churaumi Aquarium had to figure out how to keep up with them. Like superheroes in a comic book, the biologists used underwater jet packs — propellers attached to their scuba tanks — to swim alongside the fish, which average 12 meters in length and move about five kilometers per hour.

    Then the researchers had to maneuver a 17-kilogram briefcase containing a waterproof ultrasound wand along the undersides of 22 females swimming near the Galápagos Islands, and draw blood from their fins with syringes. Until this study, the ultrasound wand had never been used outside of an aquarium on free-swimming wildlife.

    Image: Fisheries biologist Rui Matsumoto uses a propeller mounted on his scuba tank to keep pace with a female whale shark to take an ultrasound of her belly. (Credit: S. Pierce)

    Performing these two tests on whale sharks is especially challenging, says study coauthor Simon Pierce, a whale shark ecologist with the Marine Megafauna Foundation, a nonprofit organization that uses research to drive marine conservation.  The fish “have some of the thickest skin of any animal — up to about 30 centimeters thick.”

    Another challenge is the seawater itself, which can contaminate blood samples. The researchers developed a two-syringe system, where the first syringe creates a vacuum and allows the second syringe to draw only blood. 

    Back in the lab, the blood plasma from six of the females showed hormone levels similar to levels obtained from a captive immature female in an aquarium, indicating those wild females were not old enough to reproduce.

    Ultrasound imagery showed egg follicles in two of the 22 female sharks, meaning those females were mature enough to reproduce but not pregnant. The biologists did not locate a pregnant whale shark.

    Pioneering these noninvasive techniques on whale sharks has opened the door to possibly learning more about other endangered marine animals, too. Waterproof ultrasound wands mounted on a pole, Pierce says, are now being used on tiger sharks in places where the predators are drawn in by bait.

    Rachel Graham agrees developing these underwater sampling techniques is an “astounding feat.” But the marine conservation scientist and founder of MarAlliance, a marine wildlife conservation nonprofit, doubts whether most free-ranging wild marine animals, particularly faster-swimming sharks or marine mammals, would tolerate similar tests.

    “What makes whale sharks fairly unique … is that they move relatively slowly at times, have the ability to remain stationary, and they tolerate the presence of other animals — such as us — nearby,” says Graham, who has studied shark species around the world and was not involved in the new study.

    Coupled with satellite tracking, the new methods could eventually show us where whale sharks give birth, Pierce says. Little is known about whale shark pups, including whether they are born in shallow or deep water, and whether pups are born one at a time or mothers gather to give birth together. “Assuming they do have some sort of breeding or pelagic nursery area we can identify … then that obviously goes quite a long way towards conserving the population.”

  • New atomic-scale understanding of catalysis could unlock massive energy savings

    In an advance they consider a breakthrough in computational chemistry research, University of Wisconsin-Madison chemical engineers have developed a model of how catalytic reactions work at the atomic scale. This understanding could allow engineers and chemists to develop more efficient catalysts and tune industrial processes — potentially with enormous energy savings, given that 90% of the products we encounter in our lives are produced, at least partially, via catalysis.
    Catalyst materials accelerate chemical reactions without undergoing changes themselves. They are critical for refining petroleum products and for manufacturing pharmaceuticals, plastics, food additives, fertilizers, green fuels, industrial chemicals and much more.
    Scientists and engineers have spent decades fine-tuning catalytic reactions — yet because it’s currently impossible to directly observe those reactions at the extreme temperatures and pressures often involved in industrial-scale catalysis, they haven’t known exactly what is taking place on the nano and atomic scales. This new research helps unravel that mystery with potentially major ramifications for industry.
    In fact, just three catalytic reactions — steam-methane reforming to produce hydrogen, ammonia synthesis to produce fertilizer, and methanol synthesis — use close to 10% of the world’s energy.
    “If you decrease the temperatures at which you have to run these reactions by only a few degrees, there will be an enormous decrease in the energy demand that we face as humanity today,” says Manos Mavrikakis, a professor of chemical and biological engineering at UW-Madison who led the research. “By decreasing the energy needs to run all these processes, you are also decreasing their environmental footprint.”
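    To see why a few degrees matter, recall that the Arrhenius equation makes reaction rates exponentially sensitive to temperature. The sketch below is a generic illustration: the 100 kJ/mol activation energy and the roughly 800 °C operating temperature are placeholder values chosen for plausibility, not numbers from this study.

    ```python
    import math

    R = 8.314  # J/(mol*K), universal gas constant

    def arrhenius_rate_ratio(e_a: float, t1: float, t2: float) -> float:
        """Ratio k(t2)/k(t1) from the Arrhenius law k = A * exp(-Ea/(R*T))."""
        return math.exp(-e_a / R * (1 / t2 - 1 / t1))

    # Placeholder values: ~100 kJ/mol barrier, reforming-like temperatures
    # near 800 C (1073 K). Neither number comes from the study itself.
    ratio = arrhenius_rate_ratio(100e3, t1=1073, t2=1083)
    print(f"A 10 K change shifts the rate by about {100 * (ratio - 1):.0f}%")
    ```

    Because the dependence is exponential, a catalyst that lets a plant run the same reaction a few degrees cooler saves a disproportionate amount of heating energy at industrial scale.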
    Mavrikakis and postdoctoral researchers Lang Xu and Konstantinos G. Papanikolaou along with graduate student Lisa Je published news of their advance in the April 7, 2023 issue of the journal Science.

    In their research, the UW-Madison engineers develop and use powerful modeling techniques to simulate catalytic reactions at the atomic scale. For this study, they looked at reactions involving transition metal catalysts in nanoparticle form, which include elements like platinum, palladium, rhodium, copper, nickel, and others important in industry and green energy.
    According to the current rigid-surface model of catalysis, the tightly packed atoms of transition metal catalysts provide a 2D surface to which chemical reactants adhere and on which reactions take place. When enough pressure and heat or electricity is applied, the bonds between atoms in the chemical reactants break, allowing the fragments to recombine into new chemical products.
    “The prevailing assumption is that these metal atoms are strongly bonded to each other and simply provide ‘landing spots’ for reactants. What everybody has assumed is that metal-metal bonds remain intact during the reactions they catalyze,” says Mavrikakis. “So here, for the first time, we asked the question, ‘Could the energy to break bonds in reactants be of similar amounts to the energy needed to disrupt bonds within the catalyst?'”
    According to Mavrikakis’s modeling, the answer is yes. The energy provided for many catalytic processes to take place is enough to break bonds and allow single metal atoms (known as adatoms) to pop loose and start traveling on the surface of the catalyst. These adatoms combine into clusters, which serve as sites on the catalyst where chemical reactions can take place much more easily than on the original rigid surface.
    Using a set of special calculations, the team looked at industrially important interactions of eight transition metal catalysts and 18 reactants, identifying energy levels and temperatures likely to form such small metal clusters, as well as the number of atoms in each cluster, which can also dramatically affect reaction rates.

    Their experimental collaborators at the University of California, Berkeley, used atomically resolved scanning tunneling microscopy to look at carbon monoxide adsorption on nickel (111), a stable, crystalline form of nickel useful in catalysis. Their experiments confirmed models that showed various defects in the structure of the catalyst can also influence how single metal atoms pop loose, as well as how reaction sites form.
    Mavrikakis says the new framework is challenging the foundation of how researchers understand catalysis and how it takes place. It may apply to other non-metal catalysts as well, which he will investigate in future work. It is also relevant to understanding other important phenomena, including corrosion and tribology, or the interaction of surfaces in motion.
    “We’re revisiting some very well-established assumptions in understanding how catalysts work and, more generally, how molecules interact with solids,” Mavrikakis says.
    Manos Mavrikakis is Ernest Micek Distinguished Chair, James A. Dumesic Professor, and Vilas Distinguished Achievement Professor in Chemical and Biological Engineering at the University of Wisconsin-Madison.
    Other authors include Barbara A.J. Lechner of the Technical University of Munich, and Gabor A. Somorjai and Miquel Salmeron of Lawrence Berkeley National Laboratory and the University of California, Berkeley.
    The authors acknowledge support from the U.S. Department of Energy, Basic Energy Sciences, Division of Chemical Sciences, Catalysis Science Program, Grant DE-FG02-05ER15731; the Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, of the U.S. Department of Energy under contract no. DE-AC02-05CH11231, through the Structure and Dynamics of Materials Interfaces program (FWP KC31SM).
    Mavrikakis acknowledges financial support from the Miller Institute at UC Berkeley through a Visiting Miller Professorship with the Department of Chemistry.
    The team also used the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, using NERSC award BES-ERCAP0022773.
    Part of the computational work was carried out using supercomputing resources at the Center for Nanoscale Materials, a DOE Office of Science User Facility located at Argonne National Laboratory, supported by DOE contract DE-AC02-06CH11357.

  • Fully recyclable printed electronics ditch toxic chemicals for water

    Engineers at Duke University have produced the world’s first fully recyclable printed electronics that replace the use of chemicals with water in the fabrication process. By bypassing the need for hazardous chemicals, the demonstration points down a path industry could follow to reduce its environmental footprint and human health risks.
    The research appeared online Feb. 28 in the journal Nano Letters.
    One of the dominant challenges facing any electronics manufacturer is successfully securing several layers of components on top of each other, which is crucial to making complex devices. Getting these layers to stick together can be a frustrating process, particularly for printed electronics.
    “If you’re making a peanut butter and jelly sandwich, one layer on either slice of bread is easy,” explained Aaron Franklin, the Addy Professor of Electrical and Computer Engineering at Duke, who led the study. “But if you put the jelly down first and then try to spread peanut butter on top of it, forget it, the jelly won’t stay put and will intermix with the peanut butter. Putting layers on top of each other is not as easy as putting them down on their own — but that’s what you have to do if you want to build electronic devices with printing.”
    In previous work, Franklin and his group demonstrated the first fully recyclable printed electronics. The devices used three carbon-based inks: semiconducting carbon nanotubes, conductive graphene and insulating nanocellulose. In trying to adapt the original process to only use water, the carbon nanotubes presented the largest challenge.
    To make a water-based ink in which the carbon nanotubes don’t clump together and spread evenly on a surface, a surfactant similar to detergent is added. The resulting ink, however, does not create a layer of carbon nanotubes dense enough for a high current of electrons to travel across.

    “You want the carbon nanotubes to look like al dente spaghetti strewn down on a flat surface,” said Franklin. “But with a water-based ink, they look more like they’ve been taken one-by-one and tossed on a wall to check for doneness. If we were using chemicals, we could just print multiple passes again and again until there were enough nanotubes. But water doesn’t work that way. We could do it 100 times and there’d still be the same density as the first time.”
    This is because the surfactant used to keep the carbon nanotubes from clumping also prevents additional layers from adhering to the first. In a traditional manufacturing process, these surfactants would be removed using either very high temperatures, which takes a lot of energy, or harsh chemicals, which can pose human and environmental health risks. Franklin and his group wanted to avoid both.
    In the paper, Franklin and his group develop a cyclical process in which the device is rinsed with water, dried at relatively low heat and printed on again. When the amount of surfactant used in the ink is also tuned down, the researchers show that their inks and processes can create fully functional, fully recyclable, fully water-based transistors.
    Compared to a resistor or capacitor, a transistor is a relatively complex computer component used in power control, logic circuits and sensors. Franklin explains that, by demonstrating a transistor first, he hopes to signal to the rest of the field that there is a viable path toward making some electronics manufacturing processes much more environmentally friendly.
    Franklin has already proven that nearly 100% of the carbon nanotubes and graphene used in printing can be recovered and reused in the same process, losing very little of the substances or their performance viability. Because nanocellulose is made from wood, it can simply be recycled or biodegraded like paper. And while the process does use a lot of water, it’s not nearly as much as what is required to deal with the toxic chemicals used in traditional fabrication methods.
    According to a United Nations estimate, less than a quarter of the millions of pounds of electronics thrown away each year is recycled. And the problem is only going to get worse as the world eventually upgrades to 6G devices and the Internet of Things (IoT) continues to expand. So any dent that could be made in this growing mountain of electronic trash is important to pursue.
    While more work needs to be done, Franklin says the approach could be used in the manufacturing of other electronic components like the screens and displays that are now ubiquitous in society. Every electronic display has a backplane of thin-film transistors similar to what is demonstrated in the paper. The current fabrication technology is high-energy and relies on hazardous chemicals as well as toxic gases. The entire industry has been flagged for immediate attention by the US Environmental Protection Agency. [https://www.epa.gov/climateleadership/sector-spotlight-electronics]
    “The performance of our thin-film transistors doesn’t match the best currently being manufactured, but they’re competitive enough to show the research community that we should all be doing more work to make these processes more environmentally friendly,” Franklin said.
    This work was supported by the National Institutes of Health (1R01HL146849), the Air Force Office of Scientific Research (FA9550-22-1-0466), and the National Science Foundation (ECCS-1542015, Graduate Research Fellowship 2139754).

  • AI-equipped eyeglasses read silent speech

    Cornell University researchers have developed a silent-speech recognition interface that uses acoustic-sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.
    The low-power, wearable interface — called EchoSpeech — requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone.
    Ruidong Zhang, a doctoral student in information science, is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.
    “For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.
    In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and a mouse.
    Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm then analyzes these echo profiles in real time, with about 95% accuracy.
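    A heavily simplified sketch of the sensing idea appears below: emit a known ultrasonic sweep, record its reflections off the face, and cross-correlate the two to obtain an "echo profile" that a model can classify. The sample rate, sweep band, and pipeline details here are illustrative guesses, not specifics from the EchoSpeech paper.

    ```python
    import numpy as np

    FS = 48_000  # sample rate in Hz, typical for commodity audio hardware
    # A 10 ms ultrasonic sweep from 16 kHz to 20 kHz (illustrative band).
    SWEEP = np.sin(2 * np.pi * np.cumsum(np.linspace(16_000, 20_000, 480)) / FS)

    def echo_profile(recorded: np.ndarray, emitted: np.ndarray = SWEEP) -> np.ndarray:
        """Cross-correlate the microphone signal with the emitted sweep.
        Peaks mark reflections at different path lengths; changing mouth
        shapes change those paths, which is what a classifier picks up."""
        profile = np.correlate(recorded, emitted, mode="valid")
        return profile / (np.linalg.norm(profile) + 1e-9)

    # A real system would stream profiles like this into a small neural
    # network trained on a few minutes of a user's silent commands.
    fake_recording = np.random.default_rng(0).standard_normal(FS // 10)
    print(echo_profile(fake_recording).shape)  # one profile per frame
    ```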
    “We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.
    “We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”
    Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible, Cheng Zhang said. There also are major privacy concerns involving wearable cameras — for both the user and those with whom the user interacts, he said.
    Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time, said François Guimbretière, professor in information science.
    “And because the data is processed locally on your smartphone instead of uploaded to the cloud,” he said, “privacy-sensitive information never leaves your control.”

  • Technology advance paves way to more realistic 3D holograms for virtual reality and more

    Researchers have developed a new way to create dynamic ultrahigh-density 3D holographic projections. By packing more details into a 3D image, this type of hologram could enable realistic representations of the world around us for use in virtual reality and other applications.
    “A 3D hologram can present real 3D scenes with continuous and fine features,” said Lei Gong, who led a research team from the University of Science and Technology of China. “For virtual reality, our method could be used with headset-based holographic displays to greatly improve the viewing angles, which would enhance the 3D viewing experience. It could also provide better 3D visuals without requiring a headset.”
    Producing a realistic-looking holographic display of 3D objects requires projecting images with a high pixel resolution onto a large number of successive planes, or layers, that are spaced closely together. This achieves high depth resolution, which is important for providing the depth cues that make the hologram look three dimensional.
    In Optica, Optica Publishing Group’s journal for high-impact research, Gong’s team and Chengwei Qiu’s research team at the National University of Singapore describe their new approach, called three-dimensional scattering-assisted dynamic holography (3D-SDH). They show that it can achieve a depth resolution more than three orders of magnitude greater than state-of-the-art methods for multiplane holographic projection.
    “Our new method overcomes two long-existing bottlenecks in current digital holographic techniques — low axial resolution and high interplane crosstalk — that prevent fine depth control of the hologram and thus limit the quality of the 3D display,” said Gong. “Our approach could also improve holography-based optical encryption by allowing more data to be encrypted in the hologram.”
    Producing more detailed holograms
    Creating a dynamic holographic projection typically involves using a spatial light modulator (SLM) to modulate the intensity and/or phase of a light beam. However, today’s holograms are limited in terms of quality because current SLM technology allows only a few low-resolution images to be projected onto separate planes with low depth resolution.
    To overcome this problem, the researchers combined an SLM with a diffuser that enables multiple image planes to be separated by a much smaller amount without being constrained by the properties of the SLM. By also suppressing crosstalk between the planes and exploiting scattering of light and wavefront shaping, this setup enables ultrahigh-density 3D holographic projection.
    To test the new method, the researchers first used simulations to show that it could produce 3D reconstructions with a much smaller depth interval between each plane. For example, they were able to project a 3D rocket model with 125 successive image planes at a depth interval of 0.96 mm in a single 1000×1000-pixel hologram, compared to 32 image planes with a depth interval of 3.75 mm using another recently developed approach known as random vector-based computer-generated holography.
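    A quick arithmetic check of those figures (treating the covered depth as the number of planes times the interval) shows both methods span roughly the same 120 mm of depth, with 3D-SDH slicing it about four times more finely:

    ```python
    # Numbers taken directly from the comparison reported above.
    sdh_planes, sdh_interval = 125, 0.96  # 3D-SDH: planes, mm between planes
    rv_planes, rv_interval = 32, 3.75     # random vector-based CGH

    print(f"3D-SDH depth span: {sdh_planes * sdh_interval:.0f} mm")  # 120 mm
    print(f"RV-CGH depth span: {rv_planes * rv_interval:.0f} mm")    # 120 mm
    print(f"Plane density ratio: {sdh_planes / rv_planes:.1f}x")     # ~3.9x
    ```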
    To validate the concept experimentally, they built a prototype 3D-SDH projector to create dynamic 3D projections and compared this to a conventional state-of-the-art setup for 3D Fresnel computer-generated holography. They showed that 3D-SDH achieved an improvement in axial resolution of more than three orders of magnitude over the conventional counterpart.
    The 3D holograms the researchers demonstrated are all point-cloud 3D images, meaning they cannot present the solid body of a 3D object. Ultimately, the researchers would like to be able to project a collection of 3D objects with a hologram, which would require a higher pixel-count hologram and new algorithms.

  • How to overcome noise in quantum computations

    Researchers Ludovico Lami (QuSoft, University of Amsterdam) and Mark M. Wilde (Cornell) have made significant progress in quantum computing by deriving a formula that predicts the effects of environmental noise. This is crucial for designing and building quantum computers capable of working in our imperfect world.
    The choreography of quantum computing
    Quantum computing uses the principles of quantum mechanics to perform calculations. Unlike classical computers, which use bits that can be either 0 or 1, quantum computers use quantum bits, or qubits, which can be in a superposition of 0 and 1 simultaneously.
    This allows quantum computers to perform certain types of calculations much faster than classical computers. For example, a quantum computer can factor very large numbers in a fraction of the time it would take a classical computer.
    While one could naively attribute such an advantage to the ability of a quantum computer to perform numerous calculations in parallel, the reality is more complicated. The quantum wave function of the quantum computer (which represents its physical state) possesses several branches, each with its own phase. A phase can be thought of as the position of the hand of a clock, which can point in any direction on the clockface.
    At the end of its computation, the quantum computer recombines the results of all computations it simultaneously carried out on different branches of the wave function into a single answer. “The phases associated to the different branches play a key role in determining the outcome of this recombination process, not unlike how the timing of a ballerina’s steps play a key role in determining the success of a ballet performance,” explains Lami.

    Disruptive environmental noise
    A significant obstacle to quantum computing is environmental noise. Such noise can be likened to a little demon that alters the phase of different branches of the wave function in an unpredictable way. This process of tampering with the phase of a quantum system is called dephasing, and can be detrimental to the success of a quantum computation.
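    A standard single-qubit dephasing channel (a textbook toy model, not the bosonic channel analysed in the paper) makes the effect concrete: dephasing shrinks the off-diagonal entries of the density matrix, which are exactly the terms that carry the branch phases described above.

    ```python
    import numpy as np

    Z = np.diag([1.0, -1.0])  # Pauli-Z, the phase-flip operator

    def dephase(rho: np.ndarray, p: float) -> np.ndarray:
        """Dephasing channel: with probability p the environment applies Z."""
        return (1 - p) * rho + p * (Z @ rho @ Z)

    # Equal superposition |+> = (|0> + |1>)/sqrt(2) has maximal coherence.
    plus = np.full((2, 2), 0.5)
    for p in (0.0, 0.1, 0.25, 0.5):
        rho = dephase(plus, p)
        print(f"p={p:.2f}  off-diagonal coherence = {rho[0, 1]:.2f}")
    # At p = 0.5 the off-diagonal terms vanish: the phase information, and
    # with it the quantum advantage, is completely lost.
    ```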
    Dephasing can occur in everyday devices such as optical fibres, which are used to transfer information in the form of light. Light rays travelling through an optical fibre can take different paths; since each path is associated with a specific phase, not knowing the path taken amounts to an effective dephasing noise.
    In their new publication in Nature Photonics, Lami and Wilde analyse a model, called the bosonic dephasing channel, to study how noise affects the transmission of quantum information. It represents the dephasing acting on a single mode of light at a definite wavelength and polarisation.
    The number quantifying the effect of the noise on quantum information is the quantum capacity, which is the number of qubits that can be safely transmitted per use of a fibre. The new publication provides a full analytical solution to the problem of calculating the quantum capacity of the bosonic dephasing channel, for all possible forms of dephasing noise.
    Longer messages overcome errors
    To overcome the effects of noise, one can incorporate redundancy in the message to ensure that the quantum information can still be retrieved at the receiving end. This is similar to saying “Alpha, Beta, Charlie” instead of “A, B, C” when speaking on the phone. Although the transmitted message is longer, the redundancy ensures that it is understood correctly.
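    The quantum analogue of saying "Alpha, Beta, Charlie" is a repetition code. The sketch below uses the standard three-qubit phase-flip repetition code, a textbook construction rather than the specific redundancy scheme derived in the paper: if each qubit independently suffers a phase flip with probability p, majority voting fails only when at least two of the three are hit.

    ```python
    def logical_error(p: float) -> float:
        """Failure rate of a 3-qubit repetition code under independent
        phase flips: two or three flips defeat the majority vote."""
        return 3 * p**2 * (1 - p) + p**3

    for p in (0.1, 0.01, 0.001):
        print(f"physical error {p}: logical error {logical_error(p):.2e}")
    # Redundancy pays off whenever p < 1/2: tripling the message length
    # turns a 1% physical error rate into roughly 0.03%.
    ```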
    The new study quantifies exactly how much redundancy needs to be added to a quantum message to protect it from dephasing noise. This is significant because it enables scientists to quantify the effects of noise on quantum computing and develop methods to overcome these effects.