More stories

  • The thermodynamics of quantum computing

    Heat and computers do not mix well. If computers overheat, they do not work well or may even crash. But what about the quantum computers of the future? These high-performance devices are even more sensitive to heat. This is because their basic computational units — quantum bits or “qubits” — are realized in highly sensitive physical systems, some of them individual atoms, and heat can be a crucial source of interference.
    The basic dilemma: In order to retrieve the information of a qubit, its quantum state must be destroyed. The heat released in the process can interfere with the sensitive quantum system. The quantum computer’s own heat generation could consequently become a problem, suspect physicists Wolfgang Belzig (University of Konstanz), Clemens Winkelmann (Néel Institute, Grenoble) and Jukka Pekola (Aalto University, Helsinki). In experiments, the researchers have now documented the heat generated by superconducting quantum systems. To do so, they developed a method that can measure and display the temperature curve with a time resolution of one millionth of a second throughout the process of reading out a qubit. “This means we are monitoring the process as it takes place,” says Wolfgang Belzig. The method was recently published in the journal Nature Physics.
    Superconducting quantum systems produce heat
    Until now, research on quantum computing has focused on the basics of getting these high-performance computers to work: much of it involves coupling quantum bits and identifying which material systems are optimal for qubits. Little consideration has been given to heat generation: Especially in the case of superconducting qubits, which are constructed from a supposedly ideal conducting material, researchers have often assumed that no heat is generated or that the amount is negligible. “That is simply not true,” Wolfgang Belzig says, adding: “People often think of quantum computers as idealized systems. However, even the circuitry of a superconducting quantum system produces heat.” Exactly how much heat is what the researchers can now measure.
    A thermometer for the quantum bit
    The measurement method was developed for superconducting quantum systems. These systems are based on superconducting circuits that use “Josephson junctions” as a central electronic element. “We measure the electron temperature based on the conductivity of such contacts. This is nothing special in and of itself: Many electronic thermometers are based in some way on measuring conductivity using a resistor. The only problem is: How quickly can you take the measurements?” Clemens Winkelmann explains. Changes to a quantum state take only a millionth of a second.
    “Our trick is to have the resistor measuring the temperature inside a resonator — an oscillating circuit — that produces a strong response at a certain frequency. This resonator oscillates at 600 megahertz and can be read out very quickly,” Winkelmann explains.
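    To make the readout scheme concrete, here is a minimal numerical sketch of resonator-based thermometry. Only the 600 megahertz resonator frequency comes from the article; the quality factor and the linear frequency-temperature calibration are hypothetical placeholders, not the published device parameters.

    ```python
    import numpy as np

    # Sketch of resonator thermometry: the temperature-dependent conductance
    # of the junction pulls the resonance, so the electron temperature can be
    # read off from the position of the resonance peak.
    F0 = 600e6        # readout resonator frequency (Hz), from the article
    Q = 1000.0        # assumed loaded quality factor (hypothetical)
    ALPHA = -2e-7     # assumed fractional frequency pull per mK (hypothetical)

    def resonance_frequency(T_mK):
        """Hypothetical linear calibration: temperature -> resonance frequency."""
        return F0 * (1.0 + ALPHA * T_mK)

    def transmission(f, T_mK):
        """Lorentzian response of the readout resonator."""
        fr = resonance_frequency(T_mK)
        return 1.0 / (1.0 + (2.0 * Q * (f - fr) / fr) ** 2)

    def estimate_temperature(f_peak):
        """Invert the calibration: peak frequency -> electron temperature."""
        return (f_peak / F0 - 1.0) / ALPHA

    # Probe the resonator and recover the electron temperature.
    probe = np.linspace(F0 - 2e6, F0 + 2e6, 4001)
    for T_true in (50.0, 150.0, 300.0):        # electron temperature, mK
        spectrum = transmission(probe, T_true)
        f_peak = probe[np.argmax(spectrum)]
        print(f"true {T_true:5.1f} mK -> estimated {estimate_temperature(f_peak):5.1f} mK")
    ```

    A resonator like this rings up on a timescale of roughly Q/F0, on the order of a microsecond for the values assumed here, which is what makes microsecond-resolved temperature traces plausible.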
    Heat is always generated
    With their experimental evidence, the researchers want to draw attention to the thermodynamic processes of a quantum system. “Our message to the quantum computing world is: Be careful, and watch out for heat generation. We can even measure the exact amount,” Winkelmann adds.
    This heat generation could become particularly relevant for scaling up quantum systems. Wolfgang Belzig explains: “One of the greatest advantages of superconducting qubits is that they are so large, because this size makes them easy to build and control. On the other hand, this can be a disadvantage if you want to put many qubits on a chip. Developers need to take into account that more heat will be produced as a result and that the system needs to be cooled adequately.”
    This research was conducted in the context of the Collaborative Research Centre SFB 1432 “Fluctuations and Nonlinearities in Classical and Quantum Matter beyond Equilibrium” at the University of Konstanz.

  • AI developed to monitor changes to the globally important Thwaites Glacier

    Scientists have developed artificial intelligence techniques to track the development of crevasses — or fractures — on the Thwaites Glacier Ice Tongue in west Antarctica.
    A team of scientists from the University of Leeds and the University of Bristol has adapted an AI algorithm originally developed to identify cells in microscope images to spot crevasses forming in the ice from satellite images. Crevasses are indicators of stresses building up in the glacier.
    Thwaites is a particularly important part of the Antarctic Ice Sheet because it holds enough ice to raise global sea levels by around 60 centimetres and is considered by many to be at risk of rapid retreat, threatening coastal communities around the world.
    Use of AI will allow scientists to more accurately monitor and model changes to this important glacier.
    Published today (Monday, Jan 9) in the journal Nature Geoscience, the research focussed on a part of the glacier system where the ice flows into the sea and begins to float. Where this happens is known as the grounding line and it forms the start of the Thwaites Eastern Ice Shelf and the Thwaites Glacier Ice Tongue, which is also an ice shelf.
    Despite being small in comparison to the size of the entire glacier, changes to these ice shelves could have wide-ranging implications for the whole glacier system and future sea-level rise.

    The scientists wanted to know if crevassing or fracture formation in the glacier was more likely to occur with changes to the speed of the ice flow.
    Development of the algorithm
    Using machine learning, the researchers taught a computer to look at radar satellite images and identify changes over the last decade. The images were taken by the European Space Agency’s Sentinel-1 satellites, which can “see” through the top layer of snow and onto the glacier, revealing the fractured surface of the ice normally hidden from sight.
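    As an illustration of what adapting a cell-segmentation network to radar images can look like, here is a minimal sketch of a U-Net-style per-pixel classifier in PyTorch. The tiny two-level architecture, the single-band input and the tile size are illustrative assumptions; the study's actual network and training setup are described in the paper.

    ```python
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """Minimal encoder-decoder labelling each pixel as crevassed or not."""
        def __init__(self):
            super().__init__()
            self.enc1 = conv_block(1, 16)      # one SAR backscatter band in
            self.enc2 = conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = conv_block(32, 16)     # skip connection concatenated
            self.head = nn.Conv2d(16, 1, 1)    # per-pixel crevasse logit

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
            return self.head(d1)

    model = TinyUNet()
    tile = torch.randn(1, 1, 256, 256)         # one 256x256 radar tile
    print(model(tile).shape)                   # torch.Size([1, 1, 256, 256])
    ```

    Trained on tiles with hand-labelled crevasse masks, a network of this shape outputs a fracture map at the full resolution of the input image, which is what allows damage to be tracked through a decade-long stack of Sentinel-1 acquisitions.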
    The analysis revealed that over the last six years, the Thwaites Glacier ice tongue has sped up and slowed down twice, by around 40% each time — from four km/year to six km/year before slowing. This is a substantial increase in the magnitude and frequency of speed change compared with past records.
    The study found a complex interplay between crevasse formation and speed of the ice flow. When the ice flow quickens or slows, more crevasses are likely to form. In turn, the increase in crevasses causes the ice to change speed as the level of friction between the ice and underlying rock alters.

    Dr Anna Hogg, a glaciologist in the Satellite Ice Dynamics group at Leeds and an author on the study, said: “Dynamic changes on ice shelves are traditionally thought to occur on timescales of decades to centuries, so it was surprising to see this huge glacier speed up and slow down so quickly.”
    “The study also demonstrates the key role that fractures play in un-corking the flow of ice — a process known as ‘unbuttressing’.
    “Ice sheet models must be evolved to account for the fact that ice can fracture, which will allow us to measure future sea level contributions more accurately.”
    Trystan Surawy-Stepney, lead author of the paper and a doctoral researcher at Leeds, added: “The nice thing about this study is the precision with which the crevasses were mapped.
    “It has been known for a while that crevassing is an important component of ice shelf dynamics and this study demonstrates that this link can be studied on a large scale with beautiful resolution, using computer vision techniques applied to the deluge of satellite images acquired each week.”
    Satellites orbiting the Earth provide scientists with new data over the most remote and inaccessible regions of Antarctica. The radar on board Sentinel-1 allows places like Thwaites Glacier to be imaged day or night, every week, all year round.
    Dr Mark Drinkwater of the European Space Agency commented: “Studies like this would not be possible without the large volume of high-resolution data provided by Sentinel-1. By continuing to plan future missions, we can carry on supporting work like this and broaden the scope of scientific research on vital areas of the Earth’s climate system.”
    As for Thwaites Glacier Ice Tongue, it remains to be seen whether such short-term changes have any impact on the long-term dynamics of the glacier, or whether they are simply isolated symptoms of an ice shelf close to its end.
    The paper — “Episodic dynamic change linked to damage on the Thwaites Glacier Ice Tongue” — was authored by Trystan Surawy-Stepney, Anna E. Hogg and Benjamin J. Davison, from the University of Leeds; and Stephen L. Cornford, from the University of Bristol.

  • New quantum computing architecture could be used to connect large-scale devices

    Quantum computers hold the promise of performing certain tasks that are intractable even on the world’s most powerful supercomputers. In the future, scientists anticipate using quantum computing to emulate materials systems, simulate quantum chemistry, and optimize hard tasks, with impacts potentially spanning finance to pharmaceuticals.
    However, realizing this promise requires resilient and extensible hardware. One challenge in building a large-scale quantum computer is that researchers must find an effective way to interconnect quantum information nodes — smaller-scale processing nodes separated across a computer chip. Because quantum computers are fundamentally different from classical computers, conventional techniques used to communicate electronic information do not directly translate to quantum devices. However, one requirement is certain: Whether via a classical or a quantum interconnect, the carried information must be transmitted and received.
    To this end, MIT researchers have developed a quantum computing architecture that will enable extensible, high-fidelity communication between superconducting quantum processors. In work published in Nature Physics, MIT researchers demonstrate step one, the deterministic emission of single photons — information carriers — in a user-specified direction. Their method ensures quantum information flows in the correct direction more than 96 percent of the time.
    Linking several of these modules enables a larger network of quantum processors that are interconnected with one another, no matter their physical separation on a computer chip.
    “Quantum interconnects are a crucial step toward modular implementations of larger-scale machines built from smaller individual components,” says Bharath Kannan PhD ’22, co-lead author of a research paper describing this technique.
    “The ability to communicate between smaller subsystems will enable a modular architecture for quantum processors, and this may be a simpler way of scaling to larger system sizes compared to the brute-force approach of using a single large and complicated chip,” Kannan adds.

    Kannan wrote the paper with co-lead author Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) at MIT. The senior author is William D. Oliver, a professor of electrical engineering and computer science and of physics, an MIT Lincoln Laboratory Fellow, director of the Center for Quantum Engineering, and associate director of RLE.
    Moving quantum information
    In a conventional classical computer, various components perform different functions, such as memory, computation, etc. Electronic information, encoded and stored as bits (which take the value of 1s or 0s), is shuttled between these components using interconnects, which are wires that move electrons around on a computer processor.
    But quantum information is more complex. Instead of only holding a value of 0 or 1, quantum information can also be both 0 and 1 simultaneously (a phenomenon known as superposition). Also, quantum information can be carried by particles of light, called photons. These added complexities make quantum information fragile, and it can’t be transported simply using conventional protocols.
    A quantum network links processing nodes using photons that travel through special interconnects known as waveguides. A waveguide can either be unidirectional, and move a photon only to the left or to the right, or it can be bidirectional.

    Most existing architectures use unidirectional waveguides, which are easier to implement since the direction in which photons travel is easily established. But since each waveguide only moves photons in one direction, more waveguides become necessary as the quantum network expands, which makes this approach difficult to scale. In addition, unidirectional waveguides usually incorporate additional components to enforce the directionality, which introduces communication errors.
    “We can get rid of these lossy components if we have a waveguide that can support propagation in both the left and right directions, and a means to choose the direction at will. This ‘directional transmission’ is what we demonstrated, and it is the first step toward bidirectional communication with much higher fidelities,” says Kannan.
    Using their architecture, multiple processing modules can be strung along one waveguide. A remarkable feature of the architecture’s design is that the same module can be used as both a transmitter and a receiver, he says. And photons can be sent and captured by any two modules along a common waveguide.
    “We have just one physical connection that can have any number of modules along the way. This is what makes it scalable. Having demonstrated directional photon emission from one module, we are now working on capturing that photon downstream at a second module,” Almanakly adds.
    Leveraging quantum properties
    To accomplish this, the researchers built a module comprising four qubits.
    Qubits are the building blocks of quantum computers, and are used to store and process quantum information. But qubits can also be used as photon emitters. Adding energy to a qubit causes the qubit to become excited, and then when it de-excites, the qubit will emit the energy in the form of a photon.
    However, simply connecting one qubit to a waveguide does not ensure directionality. A single qubit emits a photon, but whether it travels to the left or to the right is completely random. To circumvent this problem, the researchers utilize two qubits and a property known as quantum interference to ensure the emitted photon travels in the correct direction.
    The technique involves preparing the two qubits in an entangled state of single excitation called a Bell state. This quantum-mechanical state comprises two aspects: the left qubit being excited and the right qubit being excited. Both aspects exist simultaneously, but which qubit is excited at a given time is unknown.
    When the qubits are in this entangled Bell state, the photon is effectively emitted to the waveguide at the two qubit locations simultaneously, and these two “emission paths” interfere with each other. Depending on the relative phase within the Bell state, the resulting photon emission must travel to the left or to the right. By preparing the Bell state with the correct phase, the researchers choose the direction in which the photon travels through the waveguide.
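    In outline, the interference argument runs as follows (a sketch, with the quarter-wavelength qubit spacing assumed here for concreteness rather than taken from the paper):

    ```latex
    % Single-excitation Bell state of the two emitter qubits:
    \[
      |\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|eg\rangle + e^{i\phi}\,|ge\rangle\right)
    \]
    % The photon can be emitted at either qubit's coupling point, a distance d
    % apart along the waveguide, so the two emission amplitudes add with the
    % propagation phase kd:
    \[
      A_{\rightarrow} \propto e^{ikd} + e^{i\phi}, \qquad
      A_{\leftarrow} \propto 1 + e^{i(\phi + kd)}
    \]
    % For kd = pi/2 (quarter-wavelength spacing): phi = +pi/2 cancels the
    % left-going amplitude (the photon goes right), while phi = -pi/2 cancels
    % the right-going amplitude (the photon goes left).
    ```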
    They can use this same technique, but in reverse, to receive the photon at another module.
    “The photon has a certain frequency, a certain energy, and you can prepare a module to receive it by tuning it to the same frequency. If they are not at the same frequency, then the photon will just pass by. It’s analogous to tuning a radio to a particular station. If we choose the right radio frequency, we’ll pick up the music transmitted at that frequency,” Almanakly says.
    The researchers found that their technique achieved more than 96 percent fidelity — this means that if they intended to emit a photon to the right, 96 percent of the time it went to the right.
    Now that they have used this technique to effectively emit photons in a specific direction, the researchers want to connect multiple modules and use the process to emit and absorb photons. This would be a major step toward the development of a modular architecture that combines many smaller-scale processors into one larger-scale, and more powerful, quantum processor.
    The research is funded, in part, by the AWS Center for Quantum Computing, the U.S. Army Research Office, the Department of Energy Office of Science National Quantum Information Science Research Centers, the Co-design Center for Quantum Advantage, and the Department of Defense.

  • A soft, stimulating scaffold supports brain cell development ex vivo

    Brain-computer interfaces (BCIs) are a hot topic these days, with companies like Neuralink racing to create devices that connect humans’ brains to machines via tiny implanted electrodes. The potential benefits of BCIs range from improved monitoring of brain activity in patients with neurological problems to restoring vision in blind people to allowing humans to control machines using only our minds. But a major hurdle for the development of these devices is the electrodes themselves — they must conduct electricity, so nearly all of them are made of metal. Metals are not the most brain-friendly materials, as they are hard, rigid, and don’t replicate the physical environment in which brain cells typically grow.
    That problem now has a solution in a new type of electrically conductive hydrogel scaffold developed at the Wyss Institute at Harvard University, Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS), and MIT. Not only does the scaffold mimic the soft, porous conditions of brain tissue, it supported the growth and differentiation of human neural progenitor cells (NPCs) into multiple different brain cell types for up to 12 weeks. The achievement is reported in Advanced Healthcare Materials.
    “This conductive, hydrogel-based scaffold has great potential. Not only can it be used to study the formation of human neural networks in vitro, it could also enable the creation of implantable biohybrid BCIs that more seamlessly integrate with a patient’s brain tissue, improving their performance and decreasing risk of injury,” said first author Christina Tringides, Ph.D., a former graduate student at the Wyss and SEAS who is now a Postdoctoral Fellow at ETH Zürich.
    Out of one, many
    Tringides and her team created their first hydrogel-based electrode in 2021, driven by the desire to make softer electrodes that could “flow” to hug the brain’s natural curves, nooks, and crannies. While the team demonstrated that their electrode was highly compatible with brain tissue, they knew that the most compatible substance for living cells is other cells. They decided to try to integrate living brain cells into the electrode itself, which could potentially allow an implanted electrode to transmit electrical impulses to a patient’s brain via more natural cell-cell contact.
    To make their conductive hydrogel a more comfortable place for cells to live, they added a freeze-drying step to the manufacturing process. The ice crystals that formed during the freeze-drying forced the hydrogel material to concentrate in the spaces around the crystals. When the ice crystals sublimated away, they left behind pores surrounded by the conductive hydrogel, forming a porous scaffold. This structure ensured that cells would have ample surface area on which to grow, and that the electrically conductive components would form a continuous pathway through the hydrogel, delivering impulses to all the cells.

    The researchers varied the recipes of their hydrogels to create scaffolds that were either viscoelastic (like Jell-O) or elastic (like a rubber band) and soft or stiff. They then cultured human neural progenitor cells (NPCs) on these scaffolds to see which combination of physical properties best supported neural cell growth and development.
    Cells grown on gels that were viscoelastic and softer formed networks of lattice-like structures on the scaffold and differentiated into multiple other cell types after five weeks. Cells cultured on elastic gels, in contrast, formed clumps that were largely composed of undifferentiated NPCs. The team also varied the amount of conductive material within the hydrogel to see how that affected neural growth and development. The more conductive a scaffold was, the more the cells formed branching networks (as they do in vivo) rather than clumps.
    The researchers then analyzed the different cell types that had developed within their hydrogel scaffolds. They found that astrocytes, which support neurons both physically and metabolically, formed their characteristic long projections when grown on viscoelastic gels vs. elastic gels, and there were significantly more of them present when the viscoelastic gels contained more conductive material. Oligodendrocytes, which create the myelin sheath that insulates the axons of neurons, were also present in the scaffolds. There was more total myelin and longer myelinated segments on viscoelastic gels than on elastic gels, and the thickness of the myelin increased when there was more conductive material present in the gels.
    The pièce de (electrical) résistance
    Finally, the team applied electrical stimulation to the living human cells via the conductive materials within their hydrogel scaffold to see how that impacted cell growth. The cells were pulsed with electricity for 15 minutes at a time, either daily or every other day. After eight days, the scaffolds that had been pulsed daily had very few living cells, while those that had been pulsed every other day were full of living cells throughout the scaffold.

    Following this stimulation period, the cells were left in the scaffolds for a total of 51 days. The few cells left in the scaffolds that had been stimulated daily did not differentiate into other cell types, while the every-other-day scaffolds had highly differentiated neurons and astrocytes with long protrusions. The variation in electrical impulses tested did not seem to have an effect on the amount of myelin present in the gels.
    “The successful differentiation of human NPCs into multiple types of brain cells within our scaffolds is confirmation that the conductive hydrogel provides them the right kind of environment in which to grow in vitro,” said senior author Dave Mooney, Ph.D., a Core Faculty member at the Wyss Institute. “It was especially exciting to see myelination on the neurons’ axons, as that has been an ongoing challenge to replicate in living models of the brain.” Mooney is also the Robert P. Pinkas Family Professor of Bioengineering at SEAS.
    Tringides is continuing work on the conductive hydrogel scaffolds, with plans to further investigate how various types of electrical stimulation could affect different cell types, and to develop a more comprehensive in vitro model. She hopes that this technology will one day enable the creation of devices that help restore function in human patients who are suffering from neurological and physiological problems.
    “This work represents a major advance by creating an in vitro microenvironment with the right physical, chemical, and electrical properties to support the growth and specialization of human brain cells. This model may be used to speed up the process of finding effective treatments for neurological diseases, in addition to opening up an entirely new approach to create more effective electrodes and brain-machine interfaces that seamlessly integrate with neuronal tissues. We’re excited to see where this innovative melding of materials science, biomechanics, and tissue engineering leads in the future,” said the Wyss Institute’s Founding Director Don Ingber, M.D., Ph.D. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and the Hansjörg Wyss Professor of Bioinspired Engineering at SEAS.
    Additional authors include Marjolaine Boulingre from SEAS, Andrew Khalil from the Wyss Institute and the Whitehead Institute at MIT, Tenzin Lungjangwa from the Whitehead Institute, and Rudolf Jaenisch from the Whitehead Institute and MIT.
    This work was supported by the National Science Foundation under award nos. 1541959 and DMR-1420570 and a Graduate Research Fellowship Program grant, NIH grants RO1DE013033 and 5R01DE013349, NSF-MRSEC DMR-2011754, the Wyss Institute for Biologically Inspired Engineering at Harvard University, the EPFL WISH Foundation, and the Wellcome Leap HOPE project.

  • Why technology alone can't solve the digital divide

    For some communities, the digital divide remains even after they have access to computers and fast internet, new research shows.
    A study of the Bhutanese refugee community in Columbus found that even though more than 95% of the population had access to the internet, very few were using it to connect with local resources and online news.
    And the study, which was done during the height of the COVID-19 pandemic stay-at-home orders in Ohio, found that nearly three-quarters of respondents never used the internet for telehealth services.
    The results showed that the digital divide must be seen as more than just a technological problem, said Jeffrey Cohen, lead author of the study and professor of anthropology at The Ohio State University.
    “We can’t just give people access to the internet and say the problem is solved,” Cohen said.
    “We found that there are social, cultural and environmental reasons that may prevent some communities from getting all the value they could out of internet access.”
    The study was published recently in the International Journal of Environmental Research and Public Health.

    For the study, researchers worked closely with members of the Bhutanese Community of Central Ohio, a nonprofit organization helping resettled Bhutanese refugees in the Columbus area.
    The study included a community survey of 493 respondents, some surveyed online and many more interviewed in person.
    While many of the respondents lived in poverty — more than half had annual incomes below $35,000 — 95.4% said they had access to the internet.
    More than 9 out of 10 of those surveyed said access to digital technology was important, very important or extremely important to them.
    But most had a very limited view of how they could use the internet.

    “For just about everyone we interviewed, the internet was how you connected to your family, through apps like Facebook or WhatsApp,” Cohen said. “For many, that was nearly the only thing they used the internet for.”
    Findings revealed that 82% used the internet to connect with friends and family, and 68% used social media. All other uses fell below 31%.
    Not surprisingly, older people, the less educated and those with poor English skills were less likely than others to use the internet.
    A common issue was that many refugees — especially the older and less educated — were just not comfortable online, the study found.
    “Of course, that is not just an issue with the Bhutanese. Many people in our country see the internet as just a place where their children or grandchildren play games, or attend classes,” he said.
    “They don’t see it as a place where they can access their health care or find resources to help them in their daily lives.”
    Language was another issue. While there was a local program to translate some important resources from English to Nepali, the most common language spoken by Bhutanese refugees, many respondents remarked that the translations were “mostly gibberish” and nearly impossible to understand, Cohen said.
    Even for those who spoke English, fewer than 25% described themselves as excellent speakers.
    “People had access to the internet, and this information was available to them, but they couldn’t use it. That is not a technological issue, but it is part of the digital divide,” he said.
    Because the study was done during the COVID-19 pandemic, one of the main areas of focus in the study was access to health care and information on COVID-19.
    Even though telehealth services were one of the main ways to access health care during the pandemic, about 73% said they never used the internet for that purpose.
    And COVID-19 was not the only health issue facing many of those surveyed.
    “The Bhutanese community is at high risk for cardiometabolic diseases, such as cardiovascular disease and diabetes, and about 72% of those surveyed had one or more indications of these conditions,” Cohen said.
    “If they aren’t taking advantage of telehealth to consult with doctors, this could be putting them at greater risk.”
    Cohen said one key lesson from the study is that researchers must engage and partner with communities to ensure that proposed solutions to problems, including the digital divide, respond to local needs.
    The study was funded in part by the National Institutes of Health and the Ohio State Social Justice Program.
    Co-authors were Arati Maleku and Shambika Raut of the College of Social Work at Ohio State; Sudarshan Pyakurel of the Bhutanese Community of Central Ohio; Taku Suzuki of Denison University; and Francisco Alejandro Montiel Ishino of the National Institute of Environmental Health Sciences at NIH.

  • New approach to epidemic modeling could speed up pandemic simulations

    Simulations that help determine how a large-scale pandemic will spread can take weeks or even months to run. A recent study in PLOS Computational Biology offers a new approach to epidemic modeling that could drastically speed up the process.
    The study uses sparsification, a method from graph theory and computer science, to identify which links in a network are the most important for the spread of disease.
    By focusing on critical links, the authors found they could reduce the computation time for simulating the spread of diseases through highly complex social networks by 90% or more.
    “Epidemic simulations require substantial computational resources and time to run, which means your results might be outdated by the time you are ready to publish,” says lead author Alexander Mercier, a former Undergraduate Research Fellow at the Santa Fe Institute and now a Ph.D. student at the Harvard T.H. Chan School of Public Health. “Our research could ultimately enable us to use more complex models and larger data sets while still acting on a reasonable timescale when simulating the spread of pandemics such as COVID-19.”
    For the study, Mercier, with SFI researchers Samuel Scarpino and Cristopher Moore, used data from the U.S. Census Bureau to develop a mobility network describing how people across the country commute.
    Then, they applied several different sparsification methods to see if they could reduce the network’s density while retaining the overall dynamics of a disease spreading across the network.
    The most successful sparsification technique they found was effective resistance. This technique comes from computer science and is based on the total resistance between two endpoints in an electrical circuit. In the new study, effective resistance works by prioritizing the edges, or links, between nodes in the mobility network that are the most likely avenues of disease transmission while ignoring links that can be easily bypassed by alternate paths.
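    As an illustration of the idea (a toy sketch, not the study's code or data), the snippet below scores each edge of a small network by its weight times the effective resistance between its endpoints, computed from the pseudoinverse of the graph Laplacian, and keeps only the top-scoring edges. The two-community network and the keep-half rule are hypothetical choices for demonstration.

    ```python
    import numpy as np
    import networkx as nx

    def edge_scores(G):
        """Weight x effective resistance for every edge, via the
        pseudoinverse of the weighted graph Laplacian."""
        nodes = list(G.nodes())
        idx = {n: i for i, n in enumerate(nodes)}
        L = nx.laplacian_matrix(G, nodelist=nodes, weight="weight").toarray()
        Lp = np.linalg.pinv(L)
        scores = {}
        for u, v, data in G.edges(data=True):
            i, j = idx[u], idx[v]
            r_eff = Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
            scores[(u, v)] = data["weight"] * r_eff
        return scores

    # Toy "commuting" network: two dense communities joined by one weak bridge.
    G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
    nx.set_edge_attributes(G, 5.0, "weight")
    G.add_edge(0, 9, weight=0.5)       # low-weight tie between the regions

    scores = edge_scores(G)
    keep = sorted(scores, key=scores.get, reverse=True)[: G.number_of_edges() // 2]
    print("weak bridge kept:", (0, 9) in keep)   # True: no alternate path exists
    ```

    The heavily weighted edges inside each community are largely redundant with one another and score low, while the weak bridge, as the only route between the two halves, scores highest. That is exactly the "strength of weak ties" effect described below.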
    “It’s common in the life sciences to naively ignore low-weight links in a network, assuming that they have a small probability of spreading a disease,” says Scarpino. “But as in the catchphrase ‘the strength of weak ties,’ even a low-weight link can be structurally important in an epidemic — for instance, if it connects two distant regions or distinct communities.”
    Using their effective resistance sparsification approach, the researchers created a network containing 25 million fewer edges — or about 7% of the original U.S. commuting network — while preserving overall epidemic dynamics.
    “Computer scientists Daniel Spielman and Nikhil Srivastava had shown that sparsification can simplify linear problems, but discovering that it works even for nonlinear, stochastic problems like an epidemic was a real surprise,” says Moore.
    While still in an early stage of development, the research not only helps reduce the computational cost of simulating large-scale pandemics but also preserves important details about disease spread, such as the probability of a specific census tract getting infected and when the epidemic is likely to arrive there.
    Story Source:
    Materials provided by Santa Fe Institute.

  • Next-generation wireless technology may leverage the human body for energy

    While you may be just starting to reap the advantages of 5G wireless technology, researchers throughout the world are already working hard on the future: 6G. One of the most promising breakthroughs in 6G telecommunications is the possibility of Visible Light Communication (VLC), which is like a wireless version of fiber optics, using flashes of light to transmit information. Now, a team of researchers at the University of Massachusetts Amherst has announced that they have invented a low-cost, innovative way to harvest the waste energy from VLC by using the human body as an antenna. This waste energy can be recycled to power an array of wearable devices, or even, perhaps, larger electronics.
    “VLC is quite simple and interesting,” says Jie Xiong, professor of information and computer sciences at UMass Amherst and the paper’s senior author. “Instead of using radio signals to send information wirelessly, it uses the light from LEDs that can turn on and off, up to one million times per second.” Part of the appeal of VLC is that the infrastructure is already everywhere — our homes, vehicles, streetlights and offices are all lit by LED bulbs, which could also be transmitting data. “Anything with a camera, like our smartphones, tablets or laptops, could be the receiver,” says Xiong.
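    The underlying scheme is on-off keying: bits are encoded as light switched on or off and recovered by thresholding the received intensity. The short sketch below illustrates the principle; the oversampling factor, ambient-light offset, noise level and threshold are assumed values, not measurements from the study.

    ```python
    import numpy as np

    SAMPLES_PER_BIT = 10          # assumed receiver oversampling
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=16)

    # Transmit: LED brightness follows the bit stream (on-off keying).
    signal = np.repeat(bits, SAMPLES_PER_BIT).astype(float)

    # Channel: ambient light offset plus sensor noise (assumed values).
    received = 0.2 + 0.8 * signal + rng.normal(0.0, 0.05, signal.size)

    # Receive: average each bit interval, then threshold at the midpoint.
    means = received.reshape(-1, SAMPLES_PER_BIT).mean(axis=1)
    decoded = (means > 0.6).astype(int)

    print("sent:   ", bits)
    print("decoded:", decoded)
    print("bit errors:", int(np.sum(bits != decoded)))
    ```

    At the million-flashes-per-second rate quoted above, each bit interval lasts about a microsecond, fast enough that the light looks steady to the human eye while still carrying data.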
    Previously, Xiong and first author Minhao Cui, a graduate student in information and computer sciences at UMass Amherst, showed that there’s significant “leakage” of energy in VLC systems, because the LEDs also emit “side-channel RF signals,” or radio waves. If this leaked RF energy could be harvested, then it could be put to use.
    The team’s first task was to design an antenna out of coiled copper wire to collect the leaked RF, which they did. But how to maximize the collection of energy?
    The team experimented with all sorts of design details, from the thickness of the wire to the number of times it was coiled, but they also noticed that the efficiency of the antenna varied according to what the antenna touched. They tried resting the coil on plastic, cardboard, wood and steel, as well as touching it to walls of different thicknesses, phones powered on and off and laptops. And then Cui got the idea to see what happened when the coil was in contact with a human body.
    Immediately, it became apparent that a human body is the best medium for amplifying the coil’s ability to collect leaked RF energy, up to ten times more than the bare coil alone.
    After much experimentation, the team came up with “Bracelet+,” a simple coil of copper wire worn as a bracelet on the upper forearm. While the design can be adapted for wearing as a ring, belt, anklet or necklace, the bracelet seemed to offer the right balance of power harvesting and wearability.
    “The design is cheap — less than fifty cents,” note the authors, whose paper won the Best Paper Award from the Association for Computing Machinery’s Conference on Embedded Networked Sensor Systems. “But Bracelet+ can reach up to micro-watts, enough to support many sensors such as on-body health monitoring sensors that require little power to work owing to their low sampling frequency and long sleep-mode duration.”
    “Ultimately,” says Xiong, “we want to be able to harvest waste energy from all sorts of sources in order to power future technology.”
    Story Source:
    Materials provided by University of Massachusetts Amherst.

  • Using machine learning to forecast amine emissions

    Global warming is partly due to the vast amount of carbon dioxide that we release, mostly from power generation and industrial processes, such as making steel and cement. For a while now, chemical engineers have been exploring carbon capture, a process that can separate carbon dioxide and store it in ways that keep it out of the atmosphere.
    This is done in dedicated carbon-capture plants, whose chemical process involves amines, compounds that are already used to capture carbon dioxide from natural gas processing and refining plants. Amines are also used in certain pharmaceuticals, epoxy resins, and dyes.
    The problem is that amines could also be potentially harmful to the environment as well as a health hazard, making it essential to mitigate their impact. This requires accurate monitoring and predicting of a plant’s amine emissions, which has proven to be no easy feat since carbon-capture plants are complex and differ from one another.
    A group of scientists has come up with a machine learning solution for forecasting amine emissions from carbon-capture plants using experimental data from a stress test at an actual plant in Germany. The work was led by the groups of Professor Berend Smit at EPFL’s School of Basic Sciences and Professor Susana Garcia at The Research Centre for Carbon Solutions of Heriot-Watt University in Scotland.
    “The experiments were done in Niederaußem, at one of the largest coal-fired power plants in Germany,” says Berend Smit. “And from this power plant, a slipstream is sent into a carbon capture pilot plant, where the next generation of amine solution has been tested for over a year. But one of the outstanding issues is that amines can be emitted with flue gas, and these amine emissions need to be controlled.”
    Professor Susana Garcia, together with the plant’s owner, RWE, and TNO in the Netherlands, developed a stress test to study amine emissions under different process conditions. Professor Garcia describes how the test went: “We developed an experimental campaign to understand how and when amine emissions would be generated. But some of our experiments also caused interventions of the plant’s operators to ensure the plant was operating safely.”
    These interventions led to the question of how to interpret the data. Were the amine emissions the result of the stress test itself, or had the interventions of the operators indirectly affected the emissions? This was further complicated by a general lack of understanding of the mechanisms behind amine emissions. “In short, we had an expensive and successful campaign that showed that amine emissions can be a problem, but no tools to further analyze the data,” says Smit.
    He continues: “When Susana Garcia mentioned this to me, it sounded indeed like an impossible problem to solve. But she also mentioned that they measured everything every five minutes, collecting a lot of data. And, if there is anybody in my group who can solve impossible problems with data, it is Kevin.” Kevin Maik Jablonka, a PhD student, developed a machine learning approach that turned the amine emissions puzzle into a pattern-recognition problem.
    “We wanted to know what the emissions would be if we did not have the stress test but only the operators’ interventions,” explains Smit. “This is a similar issue to one we can have in finance; for example, if you want to evaluate the effect of changes in the tax code, you would like to disentangle the effect of the tax code from, say, interventions caused by the crisis in Ukraine.”
    In the next step, Jablonka used powerful machine learning to predict future amine emissions from the plant’s data. He says: “With this model, we could predict the emissions caused by the interventions of the operators and then disentangle them from those induced by the stress test. In addition, we could use the model to run all kinds of scenarios on reducing these emissions.”
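    The disentangling step can be illustrated with a toy counterfactual experiment (synthetic data and hypothetical variables, not the plant's records): fit a model on the observed process conditions, then re-predict with the stress-test variable switched off and attribute the difference to the test.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)
    n = 2000                                    # five-minute samples
    operator_setting = rng.normal(0.0, 1.0, n)   # routine interventions
    stress_test = np.where(np.arange(n) >= 1000, 1.0, 0.0)  # test switched on
    emissions = (0.5 * operator_setting + 1.5 * stress_test
                 + rng.normal(0.0, 0.2, n))      # toy ground truth

    X = np.column_stack([operator_setting, stress_test])
    model = GradientBoostingRegressor().fit(X, emissions)

    # Counterfactual: same operator actions, but no stress test.
    X_cf = X.copy()
    X_cf[:, 1] = 0.0
    excess = model.predict(X) - model.predict(X_cf)
    print(f"estimated stress-test contribution: {excess[1000:].mean():.2f} (true 1.5)")
    ```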
    The conclusion was surprising. As it turned out, the pilot plant had been designed for pure amine, but the measuring experiments were carried out on a mixture of two amines: 2-amino-2-methyl-1-propanol and piperazine (CESAR1). The scientists found that these two amines respond in opposite ways: reducing the emissions of one increases the emissions of the other.
    “I am very enthusiastic about the potential impact of this work; it is a completely new way of looking at a complex chemical process,” says Smit. “This type of forecasting is not something one can do with any of the conventional approaches, so it may change the way we operate chemical plants.”