More stories

  •

    Efficient fuel-molecule sieving using graphene

    A research team has developed a method that prevents the crossover of large fuel molecules and suppresses electrode degradation in advanced fuel cells running on methanol or formic acid. The fuel molecules are sieved via selective proton transfer, enabled by steric hindrance on chemically functionalized holey graphene sheets that act as proton-exchange membranes.
    Demand for direct methanol and formic acid fuel cell technology has been growing as a route toward carbon neutrality. In this technology, methanol or formic acid serves as an e-fuel for generating electricity. The fuel cells generate electricity via proton transfer; however, conventional proton-exchange membranes suffer from the “crossover phenomenon,” in which fuel molecules are also transferred between the anode and cathode. The crossed-over fuel molecules are then needlessly oxidized, deactivating the electrodes.
    In this study, the researchers developed a new proton-exchange membrane comprising graphene sheets with 5-10 nm-diameter holes, chemically modified with sulfanilic functional groups that afford sulfo groups around the holes. Owing to steric hindrance from these functional groups, the graphene membrane suppresses the crossover phenomenon by blocking the penetration of fuel molecules while maintaining high proton conductivity, a combination the researchers report achieving for the first time.
    Conventional approaches for inhibiting fuel-molecule migration have involved increasing the membrane thickness or sandwiching in two-dimensional materials, which in turn reduces proton conductivity. In this study, the researchers instead investigated structures that use steric hindrance to inhibit the fuel-molecule migration driven by electro-osmotic drag. They found that the sulfanilic-functionalized graphene membrane suppresses electrode degradation far more effectively than commercially available Nafion membranes while maintaining the proton conductivity required for fuel cells.
    Furthermore, simply pasting the graphene membrane onto a conventional proton-exchange membrane suppresses the crossover phenomenon. This study thus contributes to the development of advanced fuel cells as an alternative to hydrogen-based fuel cells.

  •

    AI increases precision in plant observation

    Artificial intelligence (AI) can help plant scientists collect and analyze unprecedented volumes of data, which would not be possible using conventional methods. Researchers at the University of Zurich (UZH) have now used big data, machine learning and field observations in the university’s experimental garden to show how plants respond to changes in the environment.
    Climate change is making it increasingly important to know how plants can survive and thrive in a changing environment. Conventional experiments in the lab have shown that plants accumulate pigments in response to environmental factors. Until now, such measurements were made by taking samples, which required removing, and thus damaging, part of the plant. “This labor-intensive method isn’t viable when thousands or millions of samples are needed. Moreover, taking repeated samples damages the plants, which in turn affects observations of how plants respond to environmental factors. There hasn’t been a suitable method for the long-term observation of individual plants within an ecosystem,” says Reiko Akiyama, first author of the study.
    With the support of UZH’s University Research Priority Program (URPP) “Evolution in Action,” a team of researchers has now developed a method that enables scientists to observe plants in nature with great precision. PlantServation is a method that incorporates robust image-acquisition hardware and deep learning-based software to analyze field images, and it works in any kind of weather.
    Millions of images support evolutionary hypothesis of robustness
    Using PlantServation, the researchers collected top-view images of Arabidopsis plants on the experimental plots of UZH’s Irchel Campus across three field seasons, each lasting five months from fall to spring, and then analyzed the more than four million images using machine learning. The data recorded the species-specific accumulation of a plant pigment called “anthocyanin” as a response to seasonal and annual fluctuations in temperature, light intensity and precipitation.
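    The study’s deep-learning pipeline is far more involved than anything shown here, but the quantity being tracked can be illustrated with a naive pixel-based proxy for leaf redness in a top-view image. The index and the foreground mask below are generic illustrative choices of ours, not PlantServation’s actual measure.

        import numpy as np

        # Naive per-image "anthocyanin proxy": anthocyanins redden leaves,
        # so a simple red-vs-green pixel contrast rises as the pigment
        # accumulates. Illustrative stand-in, not PlantServation's method.
        def redness_index(rgb: np.ndarray) -> float:
            """rgb: H x W x 3 float array in [0, 1], a top-view plant image."""
            r, g = rgb[..., 0], rgb[..., 1]
            plant = (r + g) > 0.2            # crude foreground mask (assumed)
            return float(np.mean((r[plant] - g[plant]) /
                                 (r[plant] + g[plant] + 1e-6)))

    Computing such an index for millions of timestamped images would yield species-specific pigment accumulation curves of the kind the study analyzed.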
    PlantServation also enabled the scientists to experimentally replicate what happens after the natural speciation of a hybrid polyploid species. These species develop from a duplication of the entire genome of their ancestors, a common type of species diversification in plants. Many wild and cultivated plants such as wheat and coffee originated in this way.
    In the current study, the anthocyanin content of the hybrid polyploid species A. kamchatica resembled that of its two ancestors: from fall to winter it was similar to that of the ancestor species originating from a warm region, and from winter to spring it resembled the other ancestor from a colder region. “The results of the study thus confirm that these hybrid polyploids combine the environmental responses of their progenitors, which supports a long-standing hypothesis about the evolution of polyploids,” says Rie Shimizu-Inatsugi, one of the study’s two corresponding authors.

  •

    Efficient training for artificial intelligence

    Artificial intelligence not only delivers impressive performance, but also creates significant demand for energy. The more demanding the tasks for which it is trained, the more energy it consumes. Víctor López-Pastor and Florian Marquardt, two scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, present a method by which artificial intelligence could be trained much more efficiently. Their approach relies on physical processes instead of the digital artificial neural networks currently used.
    OpenAI, the company behind ChatGPT, has not revealed the amount of energy required to train GPT-3, the model that makes the chatbot eloquent and apparently well informed. According to the German statistics company Statista, the training would require 1,000 megawatt-hours, about as much as 200 German households with three or more people consume annually. While this energy expenditure has allowed GPT-3 to learn whether the word ‘deep’ is more likely to be followed by the word ‘sea’ or ‘learning’ in its data sets, by all accounts it has not understood the underlying meaning of such phrases.
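    The comparison implies roughly 5,000 kWh per household per year, a plausible figure for larger German households. A quick check of the arithmetic, with the household figure inferred from the article’s own numbers:

        # Back-of-the-envelope check of the Statista comparison.
        # Assumption: a German household with three or more people uses
        # roughly 5,000 kWh of electricity per year.
        training_energy_kwh = 1000 * 1000   # 1000 MWh expressed in kWh
        household_kwh_per_year = 5000       # assumed annual consumption

        households = training_energy_kwh / household_kwh_per_year
        print(f"{households:.0f} households")   # -> 200 households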
    Neural networks on neuromorphic computers
    To reduce the energy consumption of computers, and of AI applications in particular, several research institutions have in recent years been investigating an entirely new concept for how computers could process data in the future. The concept is known as neuromorphic computing. Although this sounds similar to artificial neural networks, it in fact has little to do with them, because artificial neural networks run on conventional digital computers. This means that the software, or more precisely the algorithm, is modelled on the brain’s way of working, but digital computers serve as the hardware. They perform the calculation steps of the neural network in sequence, one after the other, with processor and memory kept separate.
    “The data transfer between these two components alone devours large quantities of energy when a neural network trains hundreds of billions of parameters, i.e. synapses, with up to one terabyte of data,” says Florian Marquardt, director of the Max Planck Institute for the Science of Light and professor at the University of Erlangen. The human brain is entirely different and would probably never have been evolutionarily competitive, had it worked with an energy efficiency similar to that of computers with silicon transistors. It would most likely have failed due to overheating.
    The brain is characterized by undertaking the numerous steps of a thought process in parallel and not sequentially. The nerve cells, or more precisely the synapses, are both processor and memory combined. Various systems around the world are being investigated as possible candidates for the neuromorphic counterparts to our nerve cells, including photonic circuits that use light instead of electrons to perform calculations. Their components serve simultaneously as switches and memory cells.
    A self-learning physical machine optimizes its synapses independently
    Together with Víctor López-Pastor, a doctoral student at the Max Planck Institute for the Science of Light, Florian Marquardt has now devised an efficient training method for neuromorphic computers. “We have developed the concept of a self-learning physical machine,” explains Florian Marquardt. “The core idea is to carry out the training in the form of a physical process, in which the parameters of the machine are optimized by the process itself.”
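    The authors’ physical implementation is not described here, but the flavor of a “self-learning physical machine” can be sketched in a toy simulation: treat the trainable parameters as coordinates of an overdamped physical system whose energy landscape is the training loss, so that simply letting the system relax performs the training. The quadratic loss, step size, and data below are illustrative assumptions, not the authors’ method.

        import numpy as np

        # Toy "self-learning physical machine": parameters w behave like an
        # overdamped particle relaxing in an energy landscape E(w) given by
        # the training loss. Letting the physics run IS the training.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))           # illustrative data
        w_true = np.array([1.5, -2.0, 0.5])
        y = X @ w_true

        w = np.zeros(3)                         # initial state of the machine
        dt = 0.01                               # relaxation time step
        for _ in range(2000):
            grad = X.T @ (X @ w - y) / len(X)   # force = -dE/dw
            w -= dt * grad                      # overdamped relaxation step

        print(w)   # settles near w_true as the system reaches its minimum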

  •

    AI helps bring clarity to LASIK patients facing cataract surgery

    While millions of people have undergone LASIK eye surgery since it became commercially available in 1989, patients sometimes develop cataracts later in life and require new corrective lenses to be implanted in their eyes. With an increasing number of intraocular lens options becoming available, scientists have developed computational simulations to help patients and surgeons see the best options.
    In a study in the Journal of Cataract & Refractive Surgery, researchers from the University of Rochester created computational eye models that included the corneas of post-LASIK surgery patients and studied how standard intraocular lenses and lenses designed to increase depth of focus performed in the operated eyes. Susana Marcos, the David R. Williams Director of the Center for Visual Science and the Nicholas George Professor of Optics and of Ophthalmology at Rochester, says computational models that use anatomical information from the patient’s eye give surgeons important guidance on the expected post-operative optical quality.
    “Currently the only pre-operative data used to select the lens is essentially the length and curvature of the cornea,” says Marcos, a coauthor of the study. “This new technology allows us to reconstruct the eye in three dimensions, providing us the entire topography of the cornea and crystalline lens, where the intraocular lens is implanted. When you have all this three-dimensional information, you’re in a much better position to select the lens that will produce the best image at the retinal plane.”
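    The study’s eye models are far richer than a textbook sketch, but the idea of ranking lenses by predicted retinal focus can be illustrated with a deliberately crude paraxial “reduced eye.” The thin-lens formula and every number below are illustrative assumptions, not the authors’ model; real planning needs the full three-dimensional anatomy precisely because such simple formulas break down in post-LASIK eyes.

        # Toy intraocular-lens (IOL) selection with a paraxial "reduced eye".
        # Assumptions (illustrative only): thin lenses in contact, object at
        # infinity, vitreous index n = 1.336. Real IOL formulas also model
        # effective lens position, which post-LASIK corneas complicate.
        n_vitreous = 1.336
        axial_length_m = 0.0236      # hypothetical axial length (23.6 mm)
        corneal_power_d = 38.0       # hypothetical post-LASIK corneal power

        required_power_d = n_vitreous / axial_length_m   # sharp retinal focus
        candidates = [float(p) for p in range(14, 25)]   # IOL powers, diopters

        def residual_defocus(iol_power_d: float) -> float:
            return abs(corneal_power_d + iol_power_d - required_power_d)

        best = min(candidates, key=residual_defocus)
        print(f"required total power: {required_power_d:.1f} D")
        print(f"best candidate IOL:   {best:.1f} D")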
    The future of optical coherence tomography
    Marcos and her collaborators from the Center for Visual Science, as well as Rochester’s Flaum Eye Institute and Goergen Institute for Data Science, are conducting a larger study that uses the optical coherence tomography quantification tools they have developed to quantify eye images in three dimensions and identify broader trends. They are using machine-learning algorithms to find relationships between pre- and post-operation data, providing parameters that can inform the best outcomes.
    Additionally, they have developed technology that can help patients see for themselves what different lens options will look like.
    “What we see is not strictly the image that is projected on the retina,” says Marcos. “There is all the visual processing and perception that comes in. When surgeons are planning the surgery, it is very difficult for them to convey to the patients how they are going to see. A computational, personalized eye model tells which lens is the best fit for the patient’s eye anatomy, but patients want to see for themselves.”
    With an optical bench, the researchers use technology originally developed for astronomy, such as adaptive optics mirrors and spatial light modulators, to manipulate the optics of the eye as an intraocular lens would. The approach allows Marcos and her collaborators to perform fundamental experiments and collaborate with industry partners to test new products. Marcos also helped develop a commercial headset version of the instrumentation called SimVis Gekko that allows patients to see the world around them as if they had had the surgery.
    In addition to studying techniques to help treat cataracts, the researchers are applying their methods to study other major eye conditions, including presbyopia and myopia.

  •

    Shh! Quiet cables set to help reveal rare physics events

    Imagine trying to tune a radio to a single station but instead encountering static noise and interfering signals from your own equipment. That is the challenge facing research teams searching for evidence of extremely rare events that could help explain the origin and nature of matter in the universe. It turns out that when you are trying to tune into some of the universe’s weakest signals, it helps to make your instruments very quiet.
    Around the world, more than a dozen teams are listening for the pops and electronic sizzle that might mean they have finally tuned into the right channel. These scientists and engineers have gone to extraordinary lengths to shield their experiments from false signals created by cosmic radiation. Most such experiments are located in very inaccessible places — such as a mile underground in a nickel mine in Sudbury, Ontario, Canada, or in an abandoned gold mine in Lead, South Dakota — to shield them from naturally radioactive elements on Earth. However, another source of fake signals is the natural radioactivity in the very electronics designed to record the potential signals.
    Radioactive contaminants, even at concentrations as tiny as one part per billion, can mimic the elusive signals that scientists are seeking. Now, a research team at the Department of Energy’s Pacific Northwest National Laboratory, working with Q-Flex Inc., a small business partner in California, has produced electronic cables made of ultra-pure materials. These cables are specially designed and manufactured to have such extremely low levels of radioactive contaminants that they will not interfere with highly sensitive neutrino and dark matter experiments. The scientists report in the journal EPJ Techniques and Instrumentation that the cables have applications not only in physics experiments but may also help reduce the interference of ionizing radiation with future quantum computers.
    “We have pioneered a technique to produce electronic cabling that is a hundred times lower than current commercially available options,” said PNNL principal investigator Richard Saldanha. “This manufacturing approach and product has broad application across any field that is sensitive to the presence of even very low levels of radioactive contaminants.”
    An ultra-quiet choreographed ballet
    Small amounts of naturally occurring radioactive elements are found everywhere: in rocks, dirt and dust floating in the air. The amount of radiation that they emit is so low that they do not pose any health hazards, but it’s still enough to cause problems for next-generation neutrino and dark matter detectors.
    “We typically need to get a million or sometimes a billion times cleaner than the contamination levels you would find in just a little speck of dirt or dust,” said PNNL chemist Isaac Arnquist, who co-authored the research article and led the measurement team.
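    For a sense of scale, here is a rough calculation of what one part per billion of uranium in a cable would mean. The specific activity of uranium-238 is a standard physical constant; the cable mass and contamination level are made-up examples, not values from the paper.

        # Rough scale of 1 part-per-billion uranium contamination.
        # U-238 specific activity is ~12.4 Bq per mg (standard value);
        # the 1 kg cable mass and 1 ppb level are illustrative assumptions.
        cable_mass_mg = 1_000_000.0          # 1 kg of cable, in mg
        contamination_ppb = 1.0

        uranium_mass_mg = cable_mass_mg * contamination_ppb * 1e-9
        activity_bq = uranium_mass_mg * 12.4    # decays per second
        decays_per_day = activity_bq * 86_400

        print(f"{decays_per_day:.0f} decays/day")   # on the order of 1,000

    Even a thousand stray decays per day is far too many when the sought-after events may occur only a handful of times per year.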

  •

    Topological materials open a new pathway for exploring spin Hall materials

    A group of researchers has made a significant breakthrough that could revolutionize next-generation electronics by enabling non-volatility, large-scale integration, low power consumption, high speed, and high reliability in spintronic devices.
    Details of their findings were published in the journal Physical Review B on August 25, 2023.
    Spintronic devices, represented by magnetic random access memory (MRAM), utilize the magnetization direction of ferromagnetic materials for information storage and rely on spin current, a flow of spin angular momentum, for reading and writing data.
    Conventional semiconductor electronics have faced limitations in achieving these qualities.
    However, the emergence of three-terminal spintronic devices, which employ separate current paths for writing and reading information, presents a solution with reduced writing errors and increased writing speed. Nevertheless, the challenge of reducing energy consumption during information writing, specifically magnetization switching, remains a critical concern.
    A promising method for mitigating energy consumption during information writing is the utilization of the spin Hall effect, where spin angular momentum (spin current) flows transversely to the electric current. The challenge lies in identifying materials that exhibit a significant spin Hall effect, a task that has been clouded by a lack of clear guidelines.
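    A standard figure of merit for how “significant” the effect is, a textbook definition rather than anything specific to this study, is the spin Hall angle:

        \theta_{\mathrm{SH}} = \frac{J_s}{J_c}

    Here J_c is the applied charge current density and J_s the transverse spin current density it generates; the larger the spin Hall angle, the less charge current, and therefore energy, is needed to deliver the spin current that switches the magnetization.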
    “We turned our attention to a unique compound known as cobalt-tin-sulfur (Co3Sn2S2), which exhibits ferromagnetic properties at low temperatures below 177 K (-96 °C) and paramagnetic behavior at room temperature,” explains Yong-Chang Lau and Takeshi Seki, both from the Institute for Materials Research (IMR), Tohoku University and co-authors of the study. “Notably, Co3Sn2S2 is classified as a topological material and exhibits a remarkable anomalous Hall effect when it transitions to a ferromagnetic state due to its distinctive electronic structure.”
    Lau, Seki and colleagues employed theoretical calculations to explore the electronic states of both ferromagnetic and paramagnetic Co3Sn2S2, revealing that electron-doping enhances the spin Hall effect. To validate this theoretical prediction, thin films of Co3Sn2S2 partially substituted with nickel (Ni) and indium (In) were synthesized. These experiments demonstrated that Co3Sn2S2 exhibited the most significant anomalous Hall effect, while (Co2Ni)Sn2S2 displayed the most substantial spin Hall effect, aligning closely with the theoretical predictions.
    “We uncovered the intricate correlation between the Hall effects, providing a clear path to discovering new spin Hall materials by leveraging existing literature as a guide,” adds Seki. “This will hopefully accelerate the development of ultralow-power-consumption spintronic devices, marking a pivotal step toward the future of electronics.”

  •

    Shape-changing smart speaker lets users mute different areas of a room

    In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.
    The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.
    A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. Like a fleet of Roombas, each about an inch in diameter, the microphones automatically deploy from, and then return to, a charging station. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.
    The team published its findings Sept. 21 in Nature Communications.
    “If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”
    Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The UW team’s system is the first to accurately distribute a robot swarm using only sound.
    The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high-frequency sound, like a bat navigating, using this frequency and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person placed them. The robots disperse as far from each other as possible, since greater distances make it easier to differentiate and locate people who are speaking. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.
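    The team’s separation pipeline is neural, but the geometric reason that spreading the microphones helps can be sketched with classic delay-and-sum beamforming: align each microphone’s signal by its propagation delay from a chosen point, and sound from that point adds coherently while sound from elsewhere blurs out. The array layout, sample rate, and focus point below are illustrative assumptions, not the paper’s setup.

        import numpy as np

        # Delay-and-sum beamforming toy: steer a spread-out microphone
        # array toward one talker; wider spacing sharpens the focus.
        c = 343.0                     # speed of sound, m/s
        fs = 16_000                   # sample rate, Hz (assumed)
        mics = np.array([[0.0, 0.0], [0.6, 0.0],
                         [0.0, 0.6], [0.6, 0.6]])   # positions on a table, m
        focus = np.array([0.3, 1.0])  # point to listen to, in meters

        def steer(signals: np.ndarray, point: np.ndarray) -> np.ndarray:
            """Sum mic signals after undoing each mic's delay to `point`.

            signals: num_mics x num_samples array of synchronized recordings.
            """
            delays = np.linalg.norm(mics - point, axis=1) / c
            shifts = np.round((delays - delays.min()) * fs).astype(int)
            n = signals.shape[1] - shifts.max()
            aligned = [s[k:k + n] for s, k in zip(signals, shifts)]
            return np.mean(aligned, axis=0)  # coherent only at `point`

        # usage: beam = steer(recordings, focus)

    One crude way to “mute” a zone is then the complement: beamform toward the zone to estimate its talker’s signal and subtract that estimate from the mixture. The paper’s deep-learning approach accomplishes this far more robustly.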