More stories

  • Study finds record-breaking rates of sea-level rise along the U.S. Southeast and Gulf coasts

    Sea levels along the U.S. Southeast and Gulf coasts have been rising at a rapidly accelerating pace, reaching record-breaking rates over the past 12 years, according to a new study led by scientists at Tulane University.
    In the study, published in Nature Communications, researchers said they had detected rates of sea-level rise of about half an inch per year since 2010. They attribute the acceleration to the compounding effects of man-made climate change and natural climate variability.
    “These rapid rates are unprecedented over at least the 20th century and they have been three times higher than the global average over the same period,” says Sönke Dangendorf, lead author and the David and Jane Flowerree Assistant Professor in the Department of River-Coastal Science and Engineering at Tulane.
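    For a rough sense of scale, the figures above can be converted to metric units; the short Python sketch below is illustrative arithmetic based only on the numbers quoted in this article (half an inch per year, roughly three times the global average), not on the study’s data.
      # Back-of-the-envelope conversion of the reported rates (illustrative only).
      INCH_TO_MM = 25.4

      regional_rate_in_per_yr = 0.5                                   # ~0.5 in/yr reported for the Southeast and Gulf coasts
      regional_rate_mm_per_yr = regional_rate_in_per_yr * INCH_TO_MM

      # Implied global-average rate if the regional rate is about three times the global one
      implied_global_mm_per_yr = regional_rate_mm_per_yr / 3

      print(f"Regional rate: {regional_rate_mm_per_yr:.1f} mm/yr")            # ~12.7 mm/yr
      print(f"Implied global average: {implied_global_mm_per_yr:.1f} mm/yr")  # ~4.2 mm/yr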
    The authors studied a combination of field and satellite measurements since 1900, pinpointing the individual contributors to the acceleration.
    “We systematically investigated the different causes, such as vertical land motion, ice-mass loss, and air pressure, but none of them could sufficiently explain the recent rate,” said Noah Hendricks, co-author and undergraduate student in Dangendorf’s team at his former institution, Old Dominion University in Norfolk, Virginia.
    “Instead, we found that the acceleration is a widespread signal that extends from the coasts of the Gulf of Mexico up to Cape Hatteras in North Carolina and into the North Atlantic Ocean and Caribbean Sea, which is indicative of changes in the ocean’s density and circulation.”
    Over the past 12 years this entire area, known as the Subtropical Gyre, has been expanding primarily due to changing wind patterns and continued warming. Warmer water masses need more space and thus lead to a rise in sea level.
    The scientists suggest that the recent acceleration was an unfortunate superposition of man-made climate change signals and a peak in weather-related variability that persisted for several years. They conclude that the rates will likely return to more moderate values in the coming decades, as predicted by climate models.
    “However, this is no reason to give the all clear,” said Torbjörn Törnqvist, co-author and the Vokes Geology Professor in the Department of Earth and Environmental Sciences at Tulane. “These high rates of sea-level rise have put even more stress on these vulnerable coastlines, particularly in Louisiana and Texas where the land is also sinking rapidly.”
    Dangendorf said the “results, once again, demonstrate the urgency of the climate crisis for the Gulf region. We need interdisciplinary and collaborative efforts to sustainably face these challenges.”
    Also collaborating on the study were Qiang Sun from Tulane, John Klinck and Tal Ezer from Old Dominion University, Thomas Frederikse from the Jet Propulsion Laboratory in Pasadena, California, Francisco M. Calafat from the National Oceanography Centre in Liverpool, UK, and Thomas Wahl from the University of Central Florida in Orlando.

  • Internet access must become human right or we risk ever-widening inequality

    People around the globe are so dependent on the internet to exercise socio-economic human rights such as education, healthcare, work, and housing that online access must now be considered a basic human right, a new study reveals.
    Particularly in developing countries, internet access can make the difference between people receiving an education, staying healthy, finding a home, and securing employment — or not.
    Even if people have offline opportunities, such as accessing social security schemes or finding housing, they are at a comparative disadvantage to those with internet access.
    Publishing his findings today in Politics, Philosophy & Economics, Dr Merten Reglitz, Lecturer in Global Ethics at the University of Birmingham, calls for a standalone human right to internet access — based on it being a practical necessity for a range of socio-economic human rights.
    He calls for public authorities to provide internet access free of charge for those unable to afford it, as well as providing training in basic digital skills for all citizens and protecting online access from arbitrary interference by states and private companies.
    Dr Reglitz commented: “The internet has unique and fundamental value for the realisation of many of our socio-economic human rights — allowing users to submit job applications, send medical information to healthcare professionals, manage their finances and business, make social security claims, and submit educational assessments.
    “The internet’s structure enables a mutual exchange of information that has the potential to contribute to the progress of humankind as a whole — potential that should be protected and deployed by declaring access to the Internet a human right.”
    The study outlines several areas in developed countries where internet access is essential for exercising socio-economic human rights:
    • Education — students in internet-free households are at a disadvantage in obtaining a good school education, with essential learning aids and study materials online.
    • Health — providing in-person healthcare to remote communities can be challenging, particularly in the US and Canada. Online healthcare can help to plug this gap.
    • Housing — in many developed countries, significant parts of the rental housing market have moved online.
    • Social Security — accessing these public services today is often unreasonably difficult without internet access.
    • Work — jobs are increasingly advertised online in real time, and people must be able to access the relevant websites to make effective use of their right to work.
    Dr Reglitz’s research also highlights similar problems for people without internet access in developing countries — for example, 20 per cent of children aged 6 to 11 are out of school in sub-Saharan Africa. Many children face long walks to school, and classes are routinely very large in crumbling, unsanitary buildings with too few teachers.
    However, online education tools can make a significant difference — allowing children living remotely from schools to complete their education. More students can be taught more effectively if teaching materials are available digitally and pupils do not have to share books.
    For people in developing countries, internet access can also make the difference between receiving an adequate level of healthcare or receiving none. Digital health tools can help diagnose illnesses — for example, in Kenya, a smartphone-based Portable Eye Examination Kit (Peek) has been used to test people’s eyesight and identify people who need treatment, especially in remote areas underserved by medical practitioners.
    In developing countries, people are often confronted with a lack of brick-and-mortar banks, and internet access makes financial inclusion possible. Small businesses can also raise money through online crowdfunding platforms — the World Bank expects such sums raised in Africa to rise from $32 million in 2015 to $2.5 billion in 2025.

  • New atomic-scale understanding of catalysis could unlock massive energy savings

    In an advance they consider a breakthrough in computational chemistry research, University of Wisconsin-Madison chemical engineers have developed a model of how catalytic reactions work at the atomic scale. This understanding could allow engineers and chemists to develop more efficient catalysts and tune industrial processes — potentially with enormous energy savings, given that 90% of the products we encounter in our lives are produced, at least partially, via catalysis.
    Catalyst materials accelerate chemical reactions without undergoing changes themselves. They are critical for refining petroleum products and for manufacturing pharmaceuticals, plastics, food additives, fertilizers, green fuels, industrial chemicals and much more.
    Scientists and engineers have spent decades fine-tuning catalytic reactions — yet because it’s currently impossible to directly observe those reactions at the extreme temperatures and pressures often involved in industrial-scale catalysis, they haven’t known exactly what is taking place on the nano and atomic scales. This new research helps unravel that mystery with potentially major ramifications for industry.
    In fact, just three catalytic reactions — steam-methane reforming to produce hydrogen, ammonia synthesis to produce fertilizer, and methanol synthesis — use close to 10% of the world’s energy.
    “If you decrease the temperatures at which you have to run these reactions by only a few degrees, there will be an enormous decrease in the energy demand that we face as humanity today,” says Manos Mavrikakis, a professor of chemical and biological engineering at UW-Madison who led the research. “By decreasing the energy needs to run all these processes, you are also decreasing their environmental footprint.”
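    To see why a better catalyst translates into lower operating temperatures, the Arrhenius relation k = A·exp(−Ea/RT) is the standard back-of-the-envelope tool. The sketch below uses assumed, illustrative values for the prefactor and activation barriers (they are not taken from the study) to show how a modestly lower barrier lets a reaction reach the same rate at a noticeably lower temperature.
      # Illustrative Arrhenius estimate (assumed values, not from the study):
      # a catalyst that lowers the activation barrier lets the same reaction
      # rate be reached at a lower temperature.
      import math

      R = 8.314  # gas constant, J/(mol*K)

      def temperature_for_rate(k_target, prefactor, ea_j_per_mol):
          """Invert k = A * exp(-Ea / (R*T)) to find the temperature giving a target rate constant."""
          return ea_j_per_mol / (R * math.log(prefactor / k_target))

      A = 1e13        # assumed pre-exponential factor, 1/s
      k_target = 1.0  # arbitrary target rate constant, 1/s

      T_high = temperature_for_rate(k_target, A, ea_j_per_mol=150e3)  # 150 kJ/mol barrier
      T_low = temperature_for_rate(k_target, A, ea_j_per_mol=140e3)   # barrier lowered by 10 kJ/mol

      print(f"Temperature needed at 150 kJ/mol: {T_high:.0f} K")  # ~600 K
      print(f"Temperature needed at 140 kJ/mol: {T_low:.0f} K")   # ~560 K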
    Mavrikakis and postdoctoral researchers Lang Xu and Konstantinos G. Papanikolaou, along with graduate student Lisa Je, published their findings in the April 7, 2023, issue of the journal Science.

    In their research, the UW-Madison engineers developed and used powerful modeling techniques to simulate catalytic reactions at the atomic scale. For this study, they looked at reactions involving transition metal catalysts in nanoparticle form, which include elements like platinum, palladium, rhodium, copper, nickel, and others important in industry and green energy.
    According to the current rigid-surface model of catalysis, the tightly packed atoms of transition metal catalysts provide a 2D surface to which chemical reactants adhere and on which they react. When enough pressure and heat or electricity is applied, the bonds between atoms in the chemical reactants break, allowing the fragments to recombine into new chemical products.
    “The prevailing assumption is that these metal atoms are strongly bonded to each other and simply provide ‘landing spots’ for reactants. What everybody has assumed is that metal-metal bonds remain intact during the reactions they catalyze,” says Mavrikakis. “So here, for the first time, we asked the question, ‘Could the energy to break bonds in reactants be of similar amounts to the energy needed to disrupt bonds within the catalyst?’”
    According to Mavrikakis’s modeling, the answer is yes. The energy provided for many catalytic processes to take place is enough to break bonds and allow single metal atoms (known as adatoms) to pop loose and start traveling on the surface of the catalyst. These adatoms combine into clusters, which serve as sites on the catalyst where chemical reactions can take place much more easily than on its original rigid surface.
    Using a set of special calculations, the team looked at industrially important interactions of eight transition metal catalysts and 18 reactants, identifying the energies and temperatures at which such small metal clusters are likely to form, as well as the number of atoms in each cluster, which can also dramatically affect reaction rates.
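    A rough Boltzmann-factor estimate gives a feel for why temperature matters so much for adatom formation; the formation energy below is an assumed, illustrative value, whereas the study’s actual energetics come from first-principles calculations.
      # Rough Boltzmann estimate with an assumed adatom formation energy
      # (illustrative only; the paper's numbers come from quantum-chemical modeling).
      import math

      K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

      def adatom_fraction(formation_energy_ev, temperature_k):
          """Equilibrium fraction of surface atoms promoted to adatoms at a given temperature."""
          return math.exp(-formation_energy_ev / (K_B_EV * temperature_k))

      for T in (300, 700):  # room temperature vs. a typical elevated reaction temperature
          print(f"T = {T} K: adatom fraction ~ {adatom_fraction(0.7, T):.1e}")  # assumes a 0.7 eV formation energy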

    Their experimental collaborators at the University of California, Berkeley, used atomically resolved scanning tunneling microscopy to look at carbon monoxide adsorption on nickel (111), a stable, crystalline form of nickel useful in catalysis. Their experiments confirmed models showing that various defects in the structure of the catalyst can also influence how single metal atoms pop loose, as well as how reaction sites form.
    Mavrikakis says the new framework is challenging the foundation of how researchers understand catalysis and how it takes place. It may apply to other non-metal catalysts as well, which he will investigate in future work. It is also relevant to understanding other important phenomena, including corrosion and tribology, or the interaction of surfaces in motion.
    “We’re revisiting some very well-established assumptions in understanding how catalysts work and, more generally, how molecules interact with solids,” Mavrikakis says.
    Manos Mavrikakis is Ernest Micek Distinguished Chair, James A. Dumesic Professor, and Vilas Distinguished Achievement Professor in Chemical and Biological Engineering at the University of Wisconsin-Madison.
    Other authors include Barbara A.J. Lechner of the Technical University of Munich, and Gabor A. Somorjai and Miquel Salmeron of Lawrence Berkeley National Laboratory and the University of California, Berkeley.
    The authors acknowledge support from the U.S. Department of Energy, Basic Energy Sciences, Division of Chemical Sciences, Catalysis Science Program, Grant DE-FG02-05ER15731; the Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, of the U.S. Department of Energy under contract no. DE-AC02-05CH11231, through the Structure and Dynamics of Materials Interfaces program (FWP KC31SM).
    Mavrikakis acknowledges financial support from the Miller Institute at UC Berkeley through a Visiting Miller Professorship with the Department of Chemistry.
    The team also used the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0022773.
    Part of the computational work was carried out using supercomputing resources at the Center for Nanoscale Materials, a DOE Office of Science User Facility located at Argonne National Laboratory, supported by DOE contract DE-AC02-06CH11357.

  • Fully recyclable printed electronics ditch toxic chemicals for water

    Engineers at Duke University have produced the world’s first fully recyclable printed electronics that replace the use of chemicals with water in the fabrication process. By bypassing the need for hazardous chemicals, the demonstration points down a path industry could follow to reduce its environmental footprint and human health risks.
    The research appeared online Feb. 28 in the journal Nano Letters.
    One of the dominant challenges facing any electronics manufacturer is successfully securing several layers of components on top of each other, which is crucial to making complex devices. Getting these layers to stick together can be a frustrating process, particularly for printed electronics.
    “If you’re making a peanut butter and jelly sandwich, one layer on either slice of bread is easy,” explained Aaron Franklin, the Addy Professor of Electrical and Computer Engineering at Duke, who led the study. “But if you put the jelly down first and then try to spread peanut butter on top of it, forget it, the jelly won’t stay put and will intermix with the peanut butter. Putting layers on top of each other is not as easy as putting them down on their own — but that’s what you have to do if you want to build electronic devices with printing.”
    In previous work, Franklin and his group demonstrated the first fully recyclable printed electronics. The devices used three carbon-based inks: semiconducting carbon nanotubes, conductive graphene and insulating nanocellulose. In trying to adapt the original process to only use water, the carbon nanotubes presented the largest challenge.
    To make a water-based ink in which the carbon nanotubes don’t clump together and spread evenly on a surface, a surfactant similar to detergent is added. The resulting ink, however, does not create a layer of carbon nanotubes dense enough for a high current of electrons to travel across.

    “You want the carbon nanotubes to look like al dente spaghetti strewn down on a flat surface,” said Franklin. “But with a water-based ink, they look more like they’ve been taken one-by-one and tossed on a wall to check for doneness. If we were using chemicals, we could just print multiple passes again and again until there were enough nanotubes. But water doesn’t work that way. We could do it 100 times and there’d still be the same density as the first time.”
    This is because the surfactant used to keep the carbon nanotubes from clumping also prevents additional layers from adhering to the first. In a traditional manufacturing process, these surfactants would be removed using either very high temperatures, which takes a lot of energy, or harsh chemicals, which can pose human and environmental health risks. Franklin and his group wanted to avoid both.
    In the paper, Franklin and his group develop a cyclical process in which the device is rinsed with water, dried in relatively low heat and printed on again. When the amount of surfactant used in the ink is also tuned down, the researchers show that their inks and processes can create fully functional, fully recyclable, fully water-based transistors.
    Compared to a resistor or capacitor, a transistor is a relatively complex computer component used in devices such as power control or logic circuits and sensors. Franklin explains that, by demonstrating a transistor first, he hopes to signal to the rest of the field that there is a viable path toward making some electronics manufacturing processes much more environmentally friendly.
    Franklin has already proven that nearly 100% of the carbon nanotubes and graphene used in printing can be recovered and reused in the same process, losing very little of the substances or their performance viability. Because nanocellulose is made from wood, it can simply be recycled or biodegraded like paper. And while the process does use a lot of water, it’s not nearly as much as what is required to deal with the toxic chemicals used in traditional fabrication methods.
    According to a United Nations estimate, less than a quarter of the millions of pounds of electronics thrown away each year is recycled. And the problem is only going to get worse as the world eventually upgrades to 6G devices and the Internet of Things (IoT) continues to expand. So any dent that could be made in this growing mountain of electronic trash is important to pursue.
    While more work needs to be done, Franklin says the approach could be used in the manufacturing of other electronic components like the screens and displays that are now ubiquitous in society. Every electronic display has a backplane of thin-film transistors similar to what is demonstrated in the paper. The current fabrication technology is high-energy and relies on hazardous chemicals as well as toxic gases. The entire industry has been flagged for immediate attention by the US Environmental Protection Agency. [https://www.epa.gov/climateleadership/sector-spotlight-electronics]
    “The performance of our thin-film transistors doesn’t match the best currently being manufactured, but they’re competitive enough to show the research community that we should all be doing more work to make these processes more environmentally friendly,” Franklin said.
    This work was supported by the National Institutes of Health (1R01HL146849), the Air Force Office of Scientific Research (FA9550-22-1-0466), and the National Science Foundation (ECCS-1542015, Graduate Research Fellowship 2139754).

  • AI-equipped eyeglasses read silent speech

    Cornell University researchers have developed a silent-speech recognition interface that uses acoustic-sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.
    The low-power, wearable interface — called EchoSpeech — requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone.
    Ruidong Zhang, doctoral student of information science, is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.
    “For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.
    In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and a mouse.
    Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm then analyzes these echo profiles in real time, with about 95% accuracy.
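    The article describes the pipeline only at a high level: transmit and receive sound across the face, then classify the resulting echo profiles with a deep learning model. The sketch below is a generic, hypothetical stand-in for that kind of classifier (a small 1D convolutional network mapping a fixed-length echo profile to one of 31 commands); it is not the EchoSpeech architecture, and the profile length is an assumption.
      # Hypothetical echo-profile classifier in the spirit of the description above
      # (NOT the EchoSpeech model): a small 1D CNN mapping an acoustic echo profile
      # to one of 31 silent commands.
      import torch
      import torch.nn as nn

      NUM_COMMANDS = 31          # the article reports up to 31 unvocalized commands
      PROFILE_LENGTH = 1024      # assumed number of samples per echo profile

      class EchoCommandClassifier(nn.Module):
          def __init__(self, num_classes=NUM_COMMANDS):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3),
                  nn.ReLU(),
                  nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
                  nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1),   # collapse the time axis
              )
              self.classifier = nn.Linear(32, num_classes)

          def forward(self, x):              # x: (batch, 1, PROFILE_LENGTH)
              h = self.features(x).squeeze(-1)
              return self.classifier(h)      # unnormalized class scores

      model = EchoCommandClassifier()
      dummy_profile = torch.randn(1, 1, PROFILE_LENGTH)   # stand-in for a real echo profile
      print(model(dummy_profile).shape)                   # torch.Size([1, 31])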
    “We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.
    “We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”
    Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible, Cheng Zhang said. There also are major privacy concerns involving wearable cameras — for both the user and those with whom the user interacts, he said.
    Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time, said François Guimbretière, professor in information science.
    “And because the data is processed locally on your smartphone instead of uploaded to the cloud,” he said, “privacy-sensitive information never leaves your control.”

  • Technology advance paves way to more realistic 3D holograms for virtual reality and more

    Researchers have developed a new way to create dynamic ultrahigh-density 3D holographic projections. By packing more details into a 3D image, this type of hologram could enable realistic representations of the world around us for use in virtual reality and other applications.
    “A 3D hologram can present real 3D scenes with continuous and fine features,” said Lei Gong, who led a research team from the University of Science and Technology of China. “For virtual reality, our method could be used with headset-based holographic displays to greatly improve the viewing angles, which would enhance the 3D viewing experience. It could also provide better 3D visuals without requiring a headset.”
    Producing a realistic-looking holographic display of 3D objects requires projecting images with a high pixel resolution onto a large number of successive planes, or layers, that are spaced closely together. This achieves high depth resolution, which is important for providing the depth cues that make the hologram look three dimensional.
    In Optica, Optica Publishing Group’s journal for high-impact research, Gong’s team and Chengwei Qiu’s research team at the National University of Singapore describe their new approach, called three-dimensional scattering-assisted dynamic holography (3D-SDH). They show that it can achieve a depth resolution more than three orders of magnitude greater than state-of-the-art methods for multiplane holographic projection.
    “Our new method overcomes two long-existing bottlenecks in current digital holographic techniques — low axial resolution and high interplane crosstalk — that prevent fine depth control of the hologram and thus limit the quality of the 3D display,” said Gong. “Our approach could also improve holography-based optical encryption by allowing more data to be encrypted in the hologram.”
    Producing more detailed holograms
    Creating a dynamic holographic projection typically involves using a spatial light modulator (SLM) to modulate the intensity and/or phase of a light beam. However, today’s holograms are limited in terms of quality because current SLM technology allows only a few low-resolution images to be projected onto separate planes with low depth resolution.
    To overcome this problem, the researchers combined an SLM with a diffuser that enables multiple image planes to be separated by a much smaller amount without being constrained by the properties of the SLM. By also suppressing crosstalk between the planes and exploiting scattering of light and wavefront shaping, this setup enables ultrahigh-density 3D holographic projection.
    To test the new method, the researchers first used simulations to show that it could produce 3D reconstructions with a much smaller depth interval between each plane. For example, they were able to project a 3D rocket model with 125 successive image planes at a depth interval of 0.96 mm in a single 1000×1000-pixel hologram, compared to 32 image planes with a depth interval of 3.75 mm using another recently developed approach known as random vector-based computer-generated holography.
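    A quick check using only the figures quoted above shows that both configurations span roughly the same total depth; the gain is in how finely the planes are packed.
      # Plane-packing arithmetic using only the numbers quoted above.
      sdh_planes, sdh_interval_mm = 125, 0.96            # 3D-SDH result reported in the article
      baseline_planes, baseline_interval_mm = 32, 3.75   # random vector-based CGH comparison

      print(f"3D-SDH depth span:   {sdh_planes * sdh_interval_mm:.0f} mm over {sdh_planes} planes")              # ~120 mm
      print(f"Baseline depth span: {baseline_planes * baseline_interval_mm:.0f} mm over {baseline_planes} planes")  # ~120 mm
      print(f"Depth interval improvement: {baseline_interval_mm / sdh_interval_mm:.1f}x finer")                  # ~3.9x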
    To validate the concept experimentally, they built a prototype 3D-SDH projector to create dynamic 3D projections and compared this to a conventional state-of-the-art setup for 3D Fresnel computer-generated holography. They showed that 3D-SDH achieved an improvement in axial resolution of more than three orders of magnitude over the conventional counterpart.
    The 3D holograms the researchers demonstrated are all point-cloud 3D images, meaning they cannot present the solid body of a 3D object. Ultimately, the researchers would like to be able to project a collection of 3D objects with a hologram, which would require a higher pixel-count hologram and new algorithms.

  • How to overcome noise in quantum computations

    Researchers Ludovico Lami (QuSoft, University of Amsterdam) and Mark M. Wilde (Cornell) have made significant progress in quantum computing by deriving a formula that predicts the effects of environmental noise. This is crucial for designing and building quantum computers capable of working in our imperfect world.
    The choreography of quantum computing
    Quantum computing uses the principles of quantum mechanics to perform calculations. Unlike classical computers, which use bits that can be either 0 or 1, quantum computers use quantum bits, or qubits, which can be in a superposition of 0 and 1 simultaneously.
    This allows quantum computers to perform certain types of calculations much faster than classical computers. For example, a quantum computer can factor very large numbers in a fraction of the time it would take a classical computer.
    While one could naively attribute such an advantage to the ability of a quantum computer to perform numerous calculations in parallel, the reality is more complicated. The quantum wave function of the quantum computer (which represents its physical state) possesses several branches, each with its own phase. A phase can be thought of as the position of the hand of a clock, which can point in any direction on the clockface.
    At the end of its computation, the quantum computer recombines the results of all computations it simultaneously carried out on different branches of the wave function into a single answer. “The phases associated with the different branches play a key role in determining the outcome of this recombination process, not unlike how the timing of a ballerina’s steps plays a key role in determining the success of a ballet performance,” explains Lami.
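    As a toy numerical illustration of this point (not taken from the paper), consider two equal-weight branches that are recombined after one of them picks up a relative phase; the probability of the final outcome swings between one and zero depending on that phase.
      # Toy interference example: the recombined outcome depends entirely on the
      # relative phase between two equal-weight branches (illustrative only).
      import numpy as np

      for relative_phase in (0.0, np.pi / 2, np.pi):
          outcome_amplitude = (1 + np.exp(1j * relative_phase)) / 2   # recombine the two branches
          probability = abs(outcome_amplitude) ** 2                   # equals cos^2(phase / 2)
          print(f"relative phase = {relative_phase:.2f} rad -> outcome probability {probability:.2f}")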

    Disruptive environmental noise
    A significant obstacle to quantum computing is environmental noise. Such noise can be likened to a little demon that alters the phase of different branches of the wave function in an unpredictable way. This process of tampering with the phase of a quantum system is called dephasing, and can be detrimental to the success of a quantum computation.
    Dephasing can occur in everyday devices such as optical fibres, which are used to transfer information in the form of light. Light rays travelling through an optical fibre can take different paths; since each path is associated with a specific phase, not knowing the path taken amounts to an effective dephasing noise.
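    The paper analyses the bosonic dephasing channel, a continuous-variable model; as a simpler, hedged illustration of what dephasing does, the sketch below applies the textbook single-qubit dephasing (phase-flip) channel and shows how it erodes the coherence of a superposition while leaving the populations untouched.
      # Textbook single-qubit dephasing channel (a simplified stand-in for the
      # bosonic dephasing channel studied in the paper).
      import numpy as np

      def dephase(rho, p):
          """With probability p, flip the relative phase of the |1> branch."""
          Z = np.diag([1.0, -1.0])
          return (1 - p) * rho + p * Z @ rho @ Z

      plus = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition (|0> + |1>)/sqrt(2)
      rho = np.outer(plus, plus)                 # its density matrix

      for p in (0.0, 0.25, 0.5):
          noisy = dephase(rho, p)
          print(f"p = {p:.2f}: populations = {noisy[0, 0]:.2f}, {noisy[1, 1]:.2f}; coherence = {abs(noisy[0, 1]):.2f}")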
    In their new publication in Nature Photonics, Lami and Wilde analyse a model, called the bosonic dephasing channel, to study how noise affects the transmission of quantum information. It represents the dephasing acting on a single mode of light at a definite wavelength and polarisation.
    The number quantifying the effect of the noise on quantum information is the quantum capacity, which is the number of qubits that can be safely transmitted per use of a fibre. The new publication provides a full analytical solution to the problem of calculating the quantum capacity of the bosonic dephasing channel, for all possible forms of dephasing noise.
    Longer messages overcome errors
    To overcome the effects of noise, one can incorporate redundancy in the message to ensure that the quantum information can still be retrieved at the receiving end. This is similar to saying “Alpha, Beta, Charlie” instead of “A, B, C” when speaking on the phone. Although the transmitted message is longer, the redundancy ensures that it is understood correctly.
    The new study quantifies exactly how much redundancy needs to be added to a quantum message to protect it from dephasing noise. This is significant because it enables scientists to quantify the effects of noise on quantum computing and develop methods to overcome these effects.

  • Random matrix theory approaches the mystery of the neutrino mass

    When any matter is divided into smaller and smaller pieces, eventually all you are left with — when it cannot be divided any further — is an elementary particle. Currently, there are 12 known elementary matter particles: six quarks and six leptons, each family coming in six different flavors. The lepton flavors are grouped into three generations, each containing one charged lepton and one neutral lepton; the neutral ones are the electron, muon, and tau neutrinos. In the Standard Model, the masses of the three generations of neutrinos are represented by a three-by-three matrix.
    A research team led by Professor Naoyuki Haba from the Osaka Metropolitan University Graduate School of Science analyzed the leptons that make up the neutrino mass matrix. Neutrinos are known to show less difference in mass between generations than the other elementary particles, so the research team assumed that the neutrino masses are roughly equal across generations. They analyzed the neutrino mass matrix by randomly assigning values to each of its elements, and showed theoretically, using this random mass matrix model, that the lepton flavor mixings are large.
    “Clarifying the properties of elementary particles leads to the exploration of the universe and ultimately to the grand theme of where we came from!” Professor Haba explained. “Beyond the remaining mysteries of the Standard Model, there is a whole new world of physics.”
    After studying neutrino mass anarchy in the Dirac neutrino, seesaw, and double seesaw models, the researchers found that the anarchy approach requires the measure of the matrix to obey a Gaussian distribution. Having considered several models of light neutrino mass in which the matrix is composed of the product of several random matrices, the research team was able to show, as best they could at this stage, why the calculated squared differences of the neutrino masses agree most closely with the experimental results in the case of the seesaw model with random Dirac and Majorana matrices.
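    As a hedged illustration of the anarchy approach described above (not the authors’ own calculation), one can draw random Gaussian Dirac and Majorana matrices, build the type-I seesaw mass matrix, and inspect the resulting mass spectrum and mixing magnitudes; all mass scales below are arbitrary.
      # Illustrative "neutrino mass anarchy" sketch (not the authors' code): draw random
      # Dirac and Majorana matrices, form the seesaw matrix m_nu ~ -m_D M_R^{-1} m_D^T,
      # and look at the resulting masses and mixing. Units are arbitrary.
      import numpy as np

      rng = np.random.default_rng(0)

      def random_complex(shape):
          """Complex matrix with independent standard-normal real and imaginary parts."""
          return rng.normal(size=shape) + 1j * rng.normal(size=shape)

      m_D = random_complex((3, 3))          # random Dirac mass matrix
      M_R = random_complex((3, 3))
      M_R = (M_R + M_R.T) / 2               # heavy Majorana matrix must be symmetric

      m_nu = -m_D @ np.linalg.inv(M_R) @ m_D.T    # type-I seesaw formula

      # Light neutrino masses are the singular values of the symmetric matrix m_nu;
      # the accompanying unitary matrix plays the role of the lepton mixing matrix.
      masses_sq, U = np.linalg.eigh(m_nu.conj().T @ m_nu)
      masses = np.sqrt(np.clip(masses_sq, 0, None))

      print("light neutrino masses (arbitrary units):", np.round(masses, 3))
      print("mixing-matrix magnitudes |U|:")
      print(np.round(np.abs(U), 2))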
    “In this study, we showed that the neutrino mass hierarchy can be mathematically explained using random matrix theory. However, this proof is not mathematically complete and is expected to be rigorously proven as random matrix theory continues to develop,” said Professor Haba. “In the future, we will continue with our challenge of elucidating the three-generation copy structure of elementary particles, the essential nature of which is still completely unknown both theoretically and experimentally.”