More stories

  • New atomic-scale understanding of catalysis could unlock massive energy savings

    In an advance they consider a breakthrough in computational chemistry research, University of Wisconsin-Madison chemical engineers have developed a model of how catalytic reactions work at the atomic scale. This understanding could allow engineers and chemists to develop more efficient catalysts and tune industrial processes — potentially with enormous energy savings, given that 90% of the products we encounter in our lives are produced, at least partially, via catalysis.
    Catalyst materials accelerate chemical reactions without undergoing changes themselves. They are critical for refining petroleum products and for manufacturing pharmaceuticals, plastics, food additives, fertilizers, green fuels, industrial chemicals and much more.
    Scientists and engineers have spent decades fine-tuning catalytic reactions — yet because it’s currently impossible to directly observe those reactions at the extreme temperatures and pressures often involved in industrial-scale catalysis, they haven’t known exactly what is taking place on the nano and atomic scales. This new research helps unravel that mystery with potentially major ramifications for industry.
    In fact, just three catalytic reactions — steam-methane reforming to produce hydrogen, ammonia synthesis to produce fertilizer, and methanol synthesis — use close to 10% of the world’s energy.
    “If you decrease the temperatures at which you have to run these reactions by only a few degrees, there will be an enormous decrease in the energy demand that we face as humanity today,” says Manos Mavrikakis, a professor of chemical and biological engineering at UW-Madison who led the research. “By decreasing the energy needs to run all these processes, you are also decreasing their environmental footprint.”
    Mavrikakis and postdoctoral researchers Lang Xu and Konstantinos G. Papanikolaou along with graduate student Lisa Je published news of their advance in the April 7, 2023 issue of the journal Science.

    In their research, the UW-Madison engineers develop and use powerful modeling techniques to simulate catalytic reactions at the atomic scale. For this study, they looked at reactions involving transition metal catalysts in nanoparticle form, which include elements like platinum, palladium, rhodium, copper, nickel, and others important in industry and green energy.
    According to the current rigid-surface model of catalysis, the tightly packed atoms of transition metal catalysts provide a 2D surface to which chemical reactants adhere and on which they react. When enough pressure and heat or electricity is applied, the bonds between atoms in the chemical reactants break, allowing the fragments to recombine into new chemical products.
    “The prevailing assumption is that these metal atoms are strongly bonded to each other and simply provide ‘landing spots’ for reactants. What everybody has assumed is that metal-metal bonds remain intact during the reactions they catalyze,” says Mavrikakis. “So here, for the first time, we asked the question, ‘Could the energy to break bonds in reactants be of similar amounts to the energy needed to disrupt bonds within the catalyst?'”
    According to Mavrikakis’s modeling, the answer is yes. The energy provided for many catalytic processes to take place is enough to break bonds and allow single metal atoms (known as adatoms) to pop loose and start traveling on the surface of the catalyst. These adatoms combine into clusters, which serve as sites on the catalyst where chemical reactions can take place much more easily than on the catalyst’s original rigid surface.
    Using a set of special calculations, the team looked at industrially important interactions of eight transition metal catalysts and 18 reactants, identifying energy levels and temperatures likely to form such small metal clusters, as well as the number of atoms in each cluster, which can also dramatically affect reaction rates.
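    As a rough illustration of why operating temperature matters for adatom formation, the Boltzmann-style sketch below estimates how the equilibrium fraction of surface sites hosting an adatom changes with temperature for a hypothetical formation energy. The 1.0 eV value and the function names are assumptions for illustration only, not numbers from the study, which obtained its energetics from first-principles calculations.

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    def adatom_fraction(e_form_ev: float, temperature_k: float) -> float:
        """Boltzmann estimate of the fraction of surface sites hosting an adatom."""
        return math.exp(-e_form_ev / (K_B * temperature_k))

    # Hypothetical adatom formation energy of 1.0 eV: compare a mild temperature
    # with typical industrial operating temperatures.
    for T in (300, 500, 700):
        print(f"T = {T} K: adatom fraction ~ {adatom_fraction(1.0, T):.2e}")
    ```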

    Their experimental collaborators at the University of California, Berkeley, used atomically-resolved scanning tunneling microscopy to look at carbon monoxide adsorption on nickel (111), a stable, crystalline form of nickel useful in catalysis. Their experiments confirmed models that showed various defects in the structure of the catalyst can also influence how single metal atoms pop loose, as well as how reaction sites form.
    Mavrikakis says the new framework is challenging the foundation of how researchers understand catalysis and how it takes place. It may apply to other non-metal catalysts as well, which he will investigate in future work. It is also relevant to understanding other important phenomena, including corrosion and tribology, or the interaction of surfaces in motion.
    “We’re revisiting some very well-established assumptions in understanding how catalysts work and, more generally, how molecules interact with solids,” Mavrikakis says.
    Manos Mavrikakis is Ernest Micek Distinguished Chair, James A. Dumesic Professor, and Vilas Distinguished Achievement Professor in Chemical and Biological Engineering at the University of Wisconsin-Madison.
    Other authors include Barbara A.J. Lechner of the Technical University of Munich, and Gabor A. Somorjai and Miquel Salmeron of Lawrence Berkeley National Laboratory and the University of California, Berkeley.
    The authors acknowledge support from the U.S. Department of Energy, Basic Energy Sciences, Division of Chemical Sciences, Catalysis Science Program, Grant DE-FG02-05ER15731; the Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, of the U.S. Department of Energy under contract no. DE-AC02-05CH11231, through the Structure and Dynamics of Materials Interfaces program (FWP KC31SM).
    Mavrikakis acknowledges financial support from the Miller Institute at UC Berkeley through a Visiting Miller Professorship with the Department of Chemistry.
    The team also used the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award BES- ERCAP0022773.
    Part of the computational work was carried out using supercomputing resources at the Center for Nanoscale Materials, a DOE Office of Science User Facility located at Argonne National Laboratory, supported by DOE contract DE-AC02-06CH11357.

  • Fully recyclable printed electronics ditch toxic chemicals for water

    Engineers at Duke University have produced the world’s first fully recyclable printed electronics that replace the use of chemicals with water in the fabrication process. By bypassing the need for hazardous chemicals, the demonstration points down a path industry could follow to reduce its environmental footprint and human health risks.
    The research appeared online Feb. 28 in the journal Nano Letters.
    One of the dominant challenges facing any electronics manufacturer is successfully securing several layers of components on top of each other, which is crucial to making complex devices. Getting these layers to stick together can be a frustrating process, particularly for printed electronics.
    “If you’re making a peanut butter and jelly sandwich, one layer on either slice of bread is easy,” explained Aaron Franklin, the Addy Professor of Electrical and Computer Engineering at Duke, who led the study. “But if you put the jelly down first and then try to spread peanut butter on top of it, forget it, the jelly won’t stay put and will intermix with the peanut butter. Putting layers on top of each other is not as easy as putting them down on their own — but that’s what you have to do if you want to build electronic devices with printing.”
    In previous work, Franklin and his group demonstrated the first fully recyclable printed electronics. The devices used three carbon-based inks: semiconducting carbon nanotubes, conductive graphene and insulating nanocellulose. In trying to adapt the original process to only use water, the carbon nanotubes presented the largest challenge.
    To make a water-based ink in which the carbon nanotubes don’t clump together and spread evenly on a surface, a surfactant similar to detergent is added. The resulting ink, however, does not create a layer of carbon nanotubes dense enough for a high current of electrons to travel across.

    “You want the carbon nanotubes to look like al dente spaghetti strewn down on a flat surface,” said Franklin. “But with a water-based ink, they look more like they’ve been taken one-by-one and tossed on a wall to check for doneness. If we were using chemicals, we could just print multiple passes again and again until there were enough nanotubes. But water doesn’t work that way. We could do it 100 times and there’d still be the same density as the first time.”
    This is because the surfactant used to keep the carbon nanotubes from clumping also prevents additional layers from adhering to the first. In a traditional manufacturing process, these surfactants would be removed using either very high temperatures, which takes a lot of energy, or harsh chemicals, which can pose human and environmental health risks. Franklin and his group wanted to avoid both.
    In the paper, Franklin and his group develop a cyclical process in which the device is rinsed with water, dried at relatively low heat and printed on again. When the amount of surfactant used in the ink is also tuned down, the researchers show that their inks and processes can create fully functional, fully recyclable, fully water-based transistors.
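    A minimal sketch of that print-rinse-dry cycle is shown below, assuming hypothetical helper functions and thresholds (print_cnt_layer, rinse_and_dry, and the density numbers are all invented); it only illustrates the loop structure described above, not the actual fabrication control code.

    ```python
    import random

    def print_cnt_layer(density: float) -> float:
        # each pass of dilute, water-based carbon nanotube ink adds a small amount of material
        return density + random.uniform(0.04, 0.06)

    def rinse_and_dry() -> None:
        # rinsing removes the surfactant so the next layer can adhere;
        # drying happens at relatively low heat between passes
        pass

    def build_channel(target_density: float = 1.0, max_passes: int = 30):
        density, passes = 0.0, 0
        while density < target_density and passes < max_passes:
            density = print_cnt_layer(density)
            rinse_and_dry()
            passes += 1
        return density, passes

    print(build_channel())  # e.g. (1.02, 21): target density reached after ~20 passes
    ```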
    Compared to a resistor or capacitor, a transistor is a relatively complex computer component used in devices such as power control or logic circuits and sensors. Franklin explains that, by demonstrating a transistor first, he hopes to signal to the rest of the field that there is a viable path toward making some electronics manufacturing processes much more environmentally friendly.
    Franklin has already proven that nearly 100% of the carbon nanotubes and graphene used in printing can be recovered and reused in the same process, losing very little of the substances or their performance viability. Because nanocellulose is made from wood, it can simply be recycled or biodegraded like paper. And while the process does use a lot of water, it’s not nearly as much as what is required to deal with the toxic chemicals used in traditional fabrication methods.
    According to a United Nations estimate, less than a quarter of the millions of pounds of electronics thrown away each year is recycled. And the problem is only going to get worse as the world eventually upgrades to 6G devices and the Internet of Things (IoT) continues to expand. So any dent that could be made in this growing mountain of electronic trash is important to pursue.
    While more work needs to be done, Franklin says the approach could be used in the manufacturing of other electronic components like the screens and displays that are now ubiquitous in society. Every electronic display has a backplane of thin-film transistors similar to what is demonstrated in the paper. The current fabrication technology is high-energy and relies on hazardous chemicals as well as toxic gases. The entire industry has been flagged for immediate attention by the US Environmental Protection Agency. [https://www.epa.gov/climateleadership/sector-spotlight-electronics]
    “The performance of our thin-film transistors doesn’t match the best currently being manufactured, but they’re competitive enough to show the research community that we should all be doing more work to make these processes more environmentally friendly,” Franklin said.
    This work was supported by the National Institutes of Health (1R01HL146849), the Air Force Office of Scientific Research (FA9550-22-1-0466), and the National Science Foundation (ECCS-1542015, Graduate Research Fellowship 2139754).

  • AI-equipped eyeglasses read silent speech

    Cornell University researchers have developed a silent-speech recognition interface that uses acoustic-sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.
    The low-power, wearable interface — called EchoSpeech — requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone.
    Ruidong Zhang, a doctoral student in information science, is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.
    “For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.
    In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and a mouse.
    Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm then analyzes these echo profiles in real time, with about 95% accuracy.
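    To make the sensing idea concrete, here is a small, self-contained sketch (not the EchoSpeech code) of how an echo profile can be formed: a known probe tone is emitted, the reflection is recorded, and cross-correlating the two reveals how the echo’s timing and strength shift as the mouth changes shape. The frequencies, delays and attenuations below are invented for illustration, and the real system feeds such profiles into a trained deep learning model rather than reading off a single peak.

    ```python
    import numpy as np

    fs = 48_000                          # sample rate (Hz)
    t = np.arange(0, 0.01, 1 / fs)       # one 10 ms sensing frame
    tx = np.sin(2 * np.pi * 20_000 * t)  # near-inaudible probe tone burst

    def echo_profile(delay_samples: int, attenuation: float) -> np.ndarray:
        """Simulate a received echo and cross-correlate it with the transmitted burst."""
        rx = np.zeros_like(tx)
        rx[delay_samples:] = attenuation * tx[: len(tx) - delay_samples]
        return np.correlate(rx, tx, mode="full")

    # Two mouth shapes -> slightly different path lengths -> distinguishable profiles.
    profile_closed = echo_profile(delay_samples=30, attenuation=0.8)
    profile_open = echo_profile(delay_samples=45, attenuation=0.5)
    print("peak lag (closed):", np.argmax(profile_closed) - (len(tx) - 1))  # ~30 samples
    print("peak lag (open):  ", np.argmax(profile_open) - (len(tx) - 1))   # ~45 samples
    ```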
    “We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.
    “We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”
    Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible, Cheng Zhang said. There also are major privacy concerns involving wearable cameras — for both the user and those with whom the user interacts, he said.
    Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time, said François Guimbretière, professor in information science.
    “And because the data is processed locally on your smartphone instead of uploaded to the cloud,” he said, “privacy-sensitive information never leaves your control.”

  • Technology advance paves way to more realistic 3D holograms for virtual reality and more

    Researchers have developed a new way to create dynamic ultrahigh-density 3D holographic projections. By packing more details into a 3D image, this type of hologram could enable realistic representations of the world around us for use in virtual reality and other applications.
    “A 3D hologram can present real 3D scenes with continuous and fine features,” said Lei Gong, who led a research team from the University of Science and Technology of China. “For virtual reality, our method could be used with headset-based holographic displays to greatly improve the viewing angles, which would enhance the 3D viewing experience. It could also provide better 3D visuals without requiring a headset.”
    Producing a realistic-looking holographic display of 3D objects requires projecting images with a high pixel resolution onto a large number of successive planes, or layers, that are spaced closely together. This achieves high depth resolution, which is important for providing the depth cues that make the hologram look three dimensional.
    In Optica, Optica Publishing Group’s journal for high-impact research, Gong’s team and Chengwei Qiu’s research team at the National University of Singapore describe their new approach, called three-dimensional scattering-assisted dynamic holography (3D-SDH). They show that it can achieve a depth resolution more than three orders of magnitude greater than state-of-the-art methods for multiplane holographic projection.
    “Our new method overcomes two long-existing bottlenecks in current digital holographic techniques — low axial resolution and high interplane crosstalk — that prevent fine depth control of the hologram and thus limit the quality of the 3D display,” said Gong. “Our approach could also improve holography-based optical encryption by allowing more data to be encrypted in the hologram.”
    Producing more detailed holograms
    Creating a dynamic holographic projection typically involves using a spatial light modulator (SLM) to modulate the intensity and/or phase of a light beam. However, today’s holograms are limited in terms of quality because current SLM technology allows only a few low-resolution images to be projected onto separate planes with low depth resolution.
    To overcome this problem, the researchers combined an SLM with a diffuser that enables multiple image planes to be separated by a much smaller amount without being constrained by the properties of the SLM. By also suppressing crosstalk between the planes and exploiting scattering of light and wavefront shaping, this setup enables ultrahigh-density 3D holographic projection.
    To test the new method, the researchers first used simulations to show that it could produce 3D reconstructions with a much smaller depth interval between each plane. For example, they were able to project a 3D rocket model with 125 successive image planes at a depth interval of 0.96 mm in a single 1000×1000-pixel hologram, compared to 32 image planes with a depth interval of 3.75 mm using another recently developed approach known as random vector-based computer-generated holography.
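    A quick back-of-the-envelope check of the figures quoted above (the numbers come from the text; the comparison itself is just arithmetic) shows that both configurations span a similar total depth while 3D-SDH samples it with roughly four times as many, more closely spaced planes:

    ```python
    # Plane counts and spacings quoted above for the rocket-model simulation.
    planes_sdh, interval_sdh = 125, 0.96  # 3D-SDH, interval in mm
    planes_rv, interval_rv = 32, 3.75     # random vector-based CGH, interval in mm

    print("3D-SDH depth range:", planes_sdh * interval_sdh, "mm")  # 120.0 mm
    print("RV-CGH depth range:", planes_rv * interval_rv, "mm")    # 120.0 mm
    print("plane-spacing ratio:", interval_rv / interval_sdh)      # ~3.9x finer sampling
    ```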
    To validate the concept experimentally, they built a prototype 3D-SDH projector to create dynamic 3D projections and compared this to a conventional state-of-the-art setup for 3D Fresnel computer-generated holography. They showed that 3D-SDH achieved an improvement in axial resolution of more than three orders of magnitude over the conventional counterpart.
    The 3D holograms the researchers demonstrated are all point-cloud 3D images, meaning they cannot present the solid body of a 3D object. Ultimately, the researchers would like to be able to project a collection of 3D objects with a hologram, which would require a higher pixel-count hologram and new algorithms.

  • How to overcome noise in quantum computations

    Researchers Ludovico Lami (QuSoft, University of Amsterdam) and Mark M. Wilde (Cornell) have made significant progress in quantum computing by deriving a formula that predicts the effects of environmental noise. This is crucial for designing and building quantum computers capable of working in our imperfect world.
    The choreography of quantum computing
    Quantum computing uses the principles of quantum mechanics to perform calculations. Unlike classical computers, which use bits that can be either 0 or 1, quantum computers use quantum bits, or qubits, which can be in a superposition of 0 and 1 simultaneously.
    This allows quantum computers to perform certain types of calculations much faster than classical computers. For example, a quantum computer can factor very large numbers in a fraction of the time it would take a classical computer.
    While one could naively attribute such an advantage to the ability of a quantum computer to perform numerous calculations in parallel, the reality is more complicated. The quantum wave function of the quantum computer (which represents its physical state) possesses several branches, each with its own phase. A phase can be thought of as the position of the hand of a clock, which can point in any direction on the clockface.
    At the end of its computation, the quantum computer recombines the results of all computations it simultaneously carried out on different branches of the wave function into a single answer. “The phases associated to the different branches play a key role in determining the outcome of this recombination process, not unlike how the timing of a ballerina’s steps play a key role in determining the success of a ballet performance,” explains Lami.
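    The clock-hand analogy can be made concrete with a toy calculation (purely illustrative, not taken from the paper): when the amplitudes of several branches are recombined, the probability of a given outcome depends on how well their phases line up.

    ```python
    import numpy as np

    def recombined_probability(phases_rad) -> float:
        # amplitude of the "recombined" outcome is the average of the branch phase factors
        amplitude = np.mean(np.exp(1j * np.array(phases_rad)))
        return abs(amplitude) ** 2

    print(recombined_probability([0.0, 0.0, 0.0, 0.0]))      # 1.0: branches reinforce
    print(recombined_probability([0.0, np.pi, 0.0, np.pi]))  # ~0.0: branches cancel
    ```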

    Disruptive environmental noise
    A significant obstacle to quantum computing is environmental noise. Such noise can be likened to a little demon that alters the phase of different branches of the wave function in an unpredictable way. This process of tampering with the phase of a quantum system is called dephasing, and can be detrimental to the success of a quantum computation.
    Dephasing can occur in everyday devices such as optical fibres, which are used to transfer information in the form of light. Light rays travelling through an optical fibre can take different paths; since each path is associated with a specific phase, not knowing the path taken amounts to an effective dephasing noise.
    In their new publication in Nature Photonics, Lami and Wilde analyse a model, called the bosonic dephasing channel, to study how noise affects the transmission of quantum information. It represents the dephasing acting on a single mode of light at a definite wavelength and polarisation.
    The number quantifying the effect of the noise on quantum information is the quantum capacity, which is the number of qubits that can be safely transmitted per use of a fibre. The new publication provides a full analytical solution to the problem of calculating the quantum capacity of the bosonic dephasing channel, for all possible forms of dephasing noise.
    Longer messages overcome errors
    To overcome the effects of noise, one can incorporate redundancy in the message to ensure that the quantum information can still be retrieved at the receiving end. This is similar to saying “Alpha, Beta, Charlie” instead of “A, B, C” when speaking on the phone. Although the transmitted message is longer, the redundancy ensures that it is understood correctly.
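    For a feel of how capacity translates into redundancy, here is a sketch using the simpler, textbook qubit dephasing channel rather than the bosonic channel solved in the paper (a deliberate simplification so the formula stays elementary): its quantum capacity is Q = 1 − h(p), where p is the phase-flip probability and h is the binary entropy, and roughly 1/Q channel uses are then needed per logical qubit in the asymptotic limit.

    ```python
    import math

    def binary_entropy(p: float) -> float:
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def dephasing_capacity(p: float) -> float:
        # quantum capacity of the qubit dephasing channel with phase-flip probability p
        return 1.0 - binary_entropy(p)

    for p in (0.01, 0.05, 0.10):
        q = dephasing_capacity(p)
        print(f"p = {p:.2f}: Q = {q:.3f} qubits per use, ~{1 / q:.2f} uses per logical qubit")
    ```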
    The new study quantifies exactly how much redundancy needs to be added to a quantum message to protect it from dephasing noise. This is significant because it enables scientists to quantify the effects of noise on quantum computing and develop methods to overcome these effects.

  • Random matrix theory approaches the mystery of the neutrino mass

    When any matter is divided into smaller and smaller pieces, eventually all you are left with — when it cannot be divided any further — is a particle. Currently, there are 12 known elementary matter particles: six quarks and six leptons, each group coming in six different flavors. The lepton flavors are grouped into three generations, each with one charged lepton and one neutral lepton, giving the electron, muon, and tau and their corresponding neutrinos. In the Standard Model, the masses of the three generations of neutrinos are represented by a three-by-three matrix.
    A research team led by Professor Naoyuki Haba from the Osaka Metropolitan University Graduate School of Science analyzed the collection of leptons that make up the neutrino mass matrix. Neutrinos are known to have less difference in mass between generations than other elementary particles, so the research team assumed that neutrino masses are roughly equal across generations. They analyzed the neutrino mass matrix by randomly assigning each of its elements, and showed theoretically, using this random mass matrix model, that the lepton flavor mixings are large.
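    The flavor-anarchy idea can be illustrated with a short numerical sketch (not the authors’ analysis): draw the entries of a 3×3 complex symmetric (Majorana-type) mass matrix from a Gaussian distribution, diagonalise it, and inspect the resulting mixing matrix. The normalisation and random seed are arbitrary; the point is only that large off-diagonal mixing is typical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_mixing_matrix() -> np.ndarray:
        a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
        m = (a + a.T) / 2                      # complex symmetric Majorana-type mass matrix
        # Diagonalise the Hermitian combination M^dagger M; its eigenvectors give
        # the unitary matrix that mixes the flavor states.
        _, u = np.linalg.eigh(m.conj().T @ m)
        return np.abs(u)

    print(np.round(random_mixing_matrix(), 2))  # off-diagonal entries are typically O(1)
    ```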
    “Clarifying the properties of elementary particles leads to the exploration of the universe and ultimately to the grand theme of where we came from!” Professor Haba explained. “Beyond the remaining mysteries of the Standard Model, there is a whole new world of physics.”
    After studying the neutrino mass anarchy in the Dirac neutrino, seesaw and double seesaw models, the researchers found that the anarchy approach requires the measure of the matrix to obey a Gaussian distribution. Having considered several models of light neutrino mass in which the matrix is composed of the product of several random matrices, the research team was able to show, as best they could at this stage, why the calculated squared differences of the neutrino masses agree most closely with the experimental results in the case of the seesaw model with random Dirac and Majorana matrices.
    “In this study, we showed that the neutrino mass hierarchy can be mathematically explained using random matrix theory. However, this proof is not mathematically complete and is expected to be rigorously proven as random matrix theory continues to develop,” said Professor Haba. “In the future, we will continue with our challenge of elucidating the three-generation copy structure of elementary particles, the essential nature of which is still completely unknown both theoretically and experimentally.”

  • A new type of photonic time crystal gives light a boost

    Researchers have developed a way to create photonic time crystals and shown that these bizarre, artificial materials amplify the light that shines on them. These findings, described in a paper in Science Advances, could lead to more efficient and robust wireless communications and significantly improved lasers.
    Time crystals were first conceived by Nobel laureate Frank Wilczek in 2012. Mundane, familiar crystals have a structural pattern that repeats in space, but in a time crystal, the pattern repeats in time instead. While some physicists were initially sceptical that time crystals could exist, recent experiments have succeeded in creating them. Last year, researchers at Aalto University’s Low Temperature Laboratory created paired time crystals that could be useful for quantum devices.
    Now, another team has made photonic time crystals, which are time-based versions of optical materials. The researchers created photonic time crystals that operate at microwave frequencies, and they showed that the crystals can amplify electromagnetic waves. This ability has potential applications in various technologies, including wireless communication, integrated circuits, and lasers.
    So far, research on photonic time crystals has focused on bulk materials — that is, three-dimensional structures. This has proven enormously challenging, and the experiments haven’t gotten past model systems with no practical applications. So the team, which included researchers from Aalto University, the Karlsruhe Institute of Technology (KIT), and Stanford University, tried a new approach: building a two-dimensional photonic time crystal, known as a metasurface.
    ‘We found that reducing the dimensionality from a 3D to a 2D structure made the implementation significantly easier, which made it possible to realise photonic time crystals in reality,’ says Xuchen Wang, the study’s lead author, who was a doctoral student at Aalto and is currently at KIT.
    The new approach enabled the team to fabricate a photonic time crystal and experimentally verify the theoretical predictions about its behaviour. ‘We demonstrated for the first time that photonic time crystals can amplify incident light with high gain,’ says Wang.
    ‘In a photonic time crystal, the photons are arranged in a pattern that repeats over time. This means that the photons in the crystal are synchronized and coherent, which can lead to constructive interference and amplification of the light,’ explains Wang. The periodic arrangement of the photons means they can also interact in ways that boost the amplification.
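    The basic mechanism, energy being pumped into a wave by a medium whose properties repeat in time, can be sketched with a toy parametric oscillator (a simplified stand-in, not the metasurface model from the paper): modulating the oscillator’s stiffness near twice its natural frequency makes its amplitude grow, whereas an unmodulated oscillator just rings at constant amplitude.

    ```python
    import numpy as np

    def final_amplitude(mod_depth: float, steps: int = 20000, dt: float = 1e-3) -> float:
        w0 = 2 * np.pi          # natural frequency of the toy "field"
        x, v = 1.0, 0.0         # initial amplitude and its rate of change
        for n in range(steps):
            t = n * dt
            # stiffness modulated periodically in time at twice the natural frequency
            w2 = w0**2 * (1 + mod_depth * np.cos(2 * w0 * t))
            v += -w2 * x * dt   # semi-implicit Euler step
            x += v * dt
        return float(np.hypot(x, v / w0))  # rough amplitude estimate

    print("no modulation:  ", final_amplitude(0.0))  # stays near 1
    print("with modulation:", final_amplitude(0.2))  # grows by orders of magnitude
    ```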
    Two-dimensional photonic time crystals have a range of potential applications. By amplifying electromagnetic waves, they could make wireless transmitters and receivers more powerful or more efficient. Wang points out that coating surfaces with 2D photonic time crystals could also help with signal decay, which is a significant problem in wireless transmission. Photonic time crystals could also simplify laser designs by removing the need for bulk mirrors that are typically used in laser cavities.
    Another application emerges from the finding that 2D photonic time crystals don’t just amplify electromagnetic waves that hit them in free space but also waves travelling along the surface. Surface waves are used for communication between electronic components in integrated circuits. ‘When a surface wave propagates, it suffers from material losses, and the signal strength is reduced. With 2D photonic time crystals integrated into the system, the surface wave can be amplified, and communication efficiency enhanced,’ says Wang.

  • Social consequences of using AI in conversations

    Cornell University researchers have found people have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool.
    The study, published in Scientific Reports, examined how the use of AI in conversations impacts the way that people express themselves and view each other.
    “Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Malte Jung, associate professor of information science. “We do not live and work in isolation, and the systems we use impact our interactions with others.”
    However, in addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.
    “I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” said Jess Hohenstein, lead author and postdoctoral researcher. “This illustrates the persistent overall suspicion that people seem to have around AI.”
    For their first experiment, researchers developed a smart-reply platform the group called “Moshi” (Japanese for “hello”), patterned after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, unveiled in 2016. Smart replies are generated by LLMs (large language models), which predict plausible next responses in chat-based interactions.
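    As a generic illustration of how such smart-reply candidates can be produced with an off-the-shelf language model (this is not the Moshi platform; the model name and sampling parameters below are arbitrary choices for the sketch):

    ```python
    from transformers import pipeline

    # Small, publicly available model used purely for illustration.
    generator = pipeline("text-generation", model="distilgpt2")

    chat_history = "A: What do you think about the new policy?\nB:"
    candidates = generator(
        chat_history,
        max_new_tokens=15,
        num_return_sequences=3,
        do_sample=True,
    )
    for c in candidates:
        # keep only the newly generated continuation as a suggested reply
        print(c["generated_text"][len(chat_history):].strip())
    ```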
    Participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.
    Researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).
    But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.
    “While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”
    Said Jung: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”
    This work was supported by the National Science Foundation.