More stories

    Environmental impact of hydrofracking vs. conventional gas/oil drilling: Research shows the differences may be minimal

    Crude oil production and natural gas withdrawals in the United States have lessened the country’s dependence on foreign oil and provided financial relief to U.S. consumers, but have also raised longstanding concerns about environmental damage, such as groundwater contamination.
    A researcher in Syracuse University’s College of Arts and Sciences and a team of scientists from Penn State have developed a new machine learning technique to holistically assess water quality data in order to detect groundwater samples likely impacted by recent methane leakage during oil and gas production. Using that model, the team concluded that unconventional drilling methods like hydraulic fracturing — or hydrofracking — do not necessarily cause more environmental problems than conventional oil and gas drilling.
    The two common ways to extract oil and gas in the U.S. are conventional and unconventional methods. Conventional oil and gas are pumped from easily accessed sources using natural pressure. Conversely, unconventional oil and gas are acquired from hard-to-reach sources through a combination of horizontal drilling and hydraulic fracturing. Hydrofracking extracts natural gas, petroleum and brine from bedrock formations by injecting a mixture of sand, chemicals and water. Drilling into the earth and directing the high-pressure mixture into the rock releases the gas inside, which then flows out to the head of a well.
    Tao Wen, assistant professor of earth and environmental sciences (EES) at Syracuse, recently led a study comparing data from different states to see which method might result in greater contamination of groundwater. The team specifically tested levels of methane, the primary component of natural gas.
    The team selected four U.S. states located in important shale zones to target for their study: Pennsylvania, Colorado, Texas and New York. One of those states — New York — banned the practice of hydrofracking in 2015 following a review by the NYS Department of Health that found significant uncertainties about health risks, including increased water and air pollution.
    Wen and his colleagues compiled a large groundwater chemistry dataset from multiple sources, including federal agency reports, journal articles, and oil and gas companies. The majority of tested water samples in their study were collected from domestic water wells. Although methane itself is not toxic, Wen says that methane contamination detected in shallow groundwater could put the affected homeowner at risk: it can be an explosion hazard, can increase the levels of other toxic chemical species such as manganese and arsenic, and contributes to global warming, as methane is a greenhouse gas.
    Their model used sophisticated algorithms to analyze almost all of the retained geochemistry data in order to predict whether a given groundwater sample had been negatively impacted by recent oil and gas drilling.
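    The study itself does not include code, but the kind of screening described here — a classifier trained on water-chemistry measurements that outputs a per-sample probability of leakage impact — can be sketched as follows. The feature set, the synthetic labels, and the choice of model below are illustrative stand-ins, not the authors’ published model.

      # Minimal sketch (not the authors' model): flag groundwater samples
      # whose chemistry suggests methane leakage from nearby oil and gas
      # wells. Features and labels are synthetic stand-ins.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 1000
      # Hypothetical feature columns: methane, chloride, sodium, calcium,
      # iron, total dissolved solids (all in mg/L)
      X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 6))
      # Synthetic ground truth: "impacted" = high methane plus high salinity
      y = (X[:, 0] > 2.0) & (X[:, 1] > 1.5)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                                random_state=0)
      model = RandomForestClassifier(n_estimators=300, random_state=0)
      model.fit(X_tr, y_tr)

      # Per-sample probability that a held-out sample is leakage-impacted
      print(model.predict_proba(X_te)[:, 1][:10])
      print("held-out accuracy:", model.score(X_te, y_te))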
    The data comparison showed that methane contamination cases in New York — a state without unconventional drilling but with a high volume of conventional drilling — were similar to those in Pennsylvania — a state with a high volume of unconventional drilling. Wen says this suggests that unconventional drilling methods like fracking do not necessarily lead to more environmental problems than conventional drilling, although the result might alternatively be explained by the different sizes of the groundwater chemistry datasets compiled for the two states.
    The model also detected a higher rate of methane contamination cases in Pennsylvania than in Colorado and Texas. Wen says this difference could be attributed to differences in how drillers construct oil and gas wells in different states. According to previous research, most of the methane released into the environment from gas wells in the U.S. escapes because the cement that seals the well is not completed along the full length of the production casing. However, no data exist to establish whether drillers in those three states use different technology. Wen says this requires further study and review of the drilling data, if they become available.
    According to Wen, the machine learning model proved effective in detecting groundwater contamination, and applying it to other states and counties with ongoing or planned oil and gas production would make it an important resource for determining the safest methods of oil and gas drilling.
    Wen and his colleagues from Penn State, including Mengqi Liu, a graduate student in the College of Information Sciences and Technology; Josh Woda, a graduate student in the Department of Geosciences; Guanjie Zheng, a former Ph.D. student in the College of Information Sciences and Technology; and Susan L. Brantley, distinguished professor in the Department of Geosciences and director of the Earth and Environmental Systems Institute, recently had their findings published in the journal Water Research.
    The team’s work was funded by the National Science Foundation (IIS-16-39150), the U.S. Geological Survey (104b award G16AP00079), and the College of Earth and Mineral Sciences Dean’s Fund for Postdoc-Facilitated Innovation at Penn State.
    Story Source:
    Materials provided by Syracuse University. Original written by Dan Bernardi. Note: Content may be edited for style and length.

    Unbroken: New soft electronics don't break, even when punctured

    A team of Virginia Tech researchers from the Department of Mechanical Engineering and the Macromolecules Innovation Institute has created a new type of soft electronics, paving the way for devices that are self-healing, reconfigurable, and recyclable. These skin-like circuits are soft and stretchy, sustain numerous damage events under load without losing electrical conductivity, and can be recycled to generate new circuits at the end of a product’s life.
    Led by Assistant Professor Michael Bartlett, the team recently published its findings in Communications Materials, an open access journal from Nature Research.
    Current consumer devices, such as phones and laptops, contain rigid materials that use soldered wires running throughout. The soft circuit developed by Bartlett’s team replaces these inflexible materials with soft electronic composites and tiny, electricity-conducting liquid metal droplets. These soft electronics are part of a rapidly emerging field of technology that gives gadgets a level of durability that would have been impossible just a few years ago.
    The liquid metal droplets are initially dispersed in an elastomer, a type of rubbery polymer, as electrically insulated, discrete drops.
    “To make circuits, we introduced a scalable approach through embossing, which allows us to rapidly create tunable circuits by selectively connecting droplets,” postdoctoral researcher and first author Ravi Tutika said. “We can then locally break the droplets apart to remake circuits and can even completely dissolve the circuits to break all the connections to recycle the materials, and then start back at the beginning.”
    The circuits are soft and flexible, like skin, continuing to work even under extreme damage. If a hole is punched in these circuits, the metal droplets can still transfer power. Instead of cutting the connection completely as in the case of a traditional wire, the droplets make new connections around the hole to continue passing electricity.
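    The puncture tolerance described above is, at heart, a redundancy argument: conduction takes place over a dense network of droplet-to-droplet connections, so removing a patch of material leaves many alternate paths. A toy connectivity check (a conceptual illustration, not the team’s model) makes the contrast with a single wire concrete.

      # Toy illustration: model a droplet circuit as a grid of connected
      # droplets, "punch" a hole by deleting nodes, and verify the two
      # terminals stay electrically connected via paths around the hole.
      import networkx as nx

      G = nx.grid_2d_graph(20, 20)         # droplet network (arbitrary size)
      source, sink = (0, 0), (19, 19)      # circuit terminals

      hole = [(x, y) for x in range(8, 12) for y in range(8, 12)]
      G.remove_nodes_from(hole)            # puncture the middle of the circuit
      print(nx.has_path(G, source, sink))  # True: current reroutes around hole

      # A traditional wire is the degenerate case, a single path of nodes,
      # so deleting any interior node severs the circuit entirely.
      wire = nx.path_graph(20)
      wire.remove_node(10)
      print(nx.has_path(wire, 0, 19))      # False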
    The circuits will also stretch without losing their electrical connection, as the team pulled the device to over 10 times its original length without failure during the research.
    At the end of a product’s life, the metal droplets and the rubbery materials can be reprocessed and returned to a liquid solution, effectively making them recyclable. From that point, they can be remade to start a new life, an approach that offers a pathway to sustainable electronics.
    While a stretchy smartphone has not yet been made, rapid development in the field also holds promise for wearable electronics and soft robotics. These emerging technologies require soft, robust circuitry to make the leap into consumer applications.
    “We’re excited about our progress and envision these materials as key components for emerging soft technologies,” Bartlett said. “This work gets closer to creating soft circuitry that could survive in a variety of real-world applications.”
    Story Source:
    Materials provided by Virginia Tech. Original written by Alex Parrish. Note: Content may be edited for style and length.

    Backscatter breakthrough runs near-zero-power IoT communicators at 5G speeds everywhere

    The promise of 5G Internet of Things (IoT) networks requires more scalable and robust communication systems — ones that deliver drastically higher data rates and lower power consumption per device.
    Backscatter radios — passive sensors that reflect rather than radiate energy — are known for their low-cost, low-complexity, and battery-free operation, making them a potential key enabler of this future, although they typically feature low data rates and their performance depends strongly on the surrounding environment.
    Researchers at the Georgia Institute of Technology, Nokia Bell Labs, and Heriot-Watt University have found a low-cost way for backscatter radios to support high-throughput communication and 5G-speed Gb/sec data transfer using only a single transistor, where previous designs required multiple expensive stacked transistors.
    Employing a unique modulation approach in the 5G 24/28 Gigahertz (GHz) band, the researchers have shown that these passive devices can transfer data safely and robustly from virtually any environment. The findings were reported earlier this month in the journal Nature Electronics.
    Traditionally, mmWave communication, in what is called the extremely high frequency band, is considered “the last mile” for broadband, with directive point-to-point and point-to-multipoint wireless links. This spectrum band offers many advantages, including wide available GHz bandwidth, which enables very large communication rates, and the ability to implement electrically large antenna arrays, enabling on-demand beamforming capabilities. However, such mmWave systems depend on high-cost components and systems.
    The Struggle for Simplicity Versus Cost
    “Typically, it was simplicity against cost. You could either do very simple things with one transistor or you need multiple transistors for more complex features, which made these systems very expensive,” said Emmanouil (Manos) Tentzeris, Ken Byers Professor in Flexible Electronics in Georgia Tech’s School of Electrical and Computer Engineering (ECE). “Now we’ve enhanced the complexity, making it very powerful but very low cost, so we’re getting the best of both worlds.”

    Nanotech OLED electrode liberates 20% more light, could slash display power consumption

    A new electrode that could free up 20% more light from organic light-emitting diodes has been developed at the University of Michigan. It could help extend the battery life of smartphones and laptops, or make next-gen televisions and displays much more energy efficient.
    The approach prevents light from being trapped in the light-emitting part of an OLED, enabling OLEDs to maintain brightness while using less power. In addition, the electrode is easy to fit into existing processes for making OLED displays and light fixtures.
    “With our approach, you can do it all in the same vacuum chamber,” said L. Jay Guo, U-M professor of electrical and computer engineering and corresponding author of the study.
    Unless engineers take action, about 80% of the light produced by an OLED gets trapped inside the device, due to an effect known as waveguiding. Essentially, the light rays that don’t come out of the device at an angle close to perpendicular get reflected back and guided sideways through the device. They end up lost inside the OLED.
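    The 80% figure is consistent with simple ray optics: for an emitting layer of refractive index n, only rays that strike the surface within the critical angle escape, and a classical rule of thumb puts the extracted fraction near 1/(2n^2). Here is a back-of-the-envelope check, using an assumed typical organic-layer index of about 1.7 rather than a value from the study.

      # Back-of-the-envelope ray optics for light trapping in an OLED.
      # n_org is an assumed, typical refractive index for the emitting layer.
      import math

      n_org, n_air = 1.7, 1.0

      # Critical angle beyond which light is totally internally reflected
      theta_c = math.degrees(math.asin(n_air / n_org))
      print(f"critical angle: {theta_c:.1f} degrees")   # ~36 degrees

      # Classical estimate of the escaping fraction: roughly 1/(2 n^2)
      extracted = 1 / (2 * n_org**2)
      print(f"extracted: {extracted:.0%}, trapped: {1 - extracted:.0%}")
      # -> extracted: 17%, trapped: 83%, in line with the ~80% quoted above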
    A good portion of the lost light is simply trapped between the two electrodes on either side of the light-emitter. One of the biggest offenders is the transparent electrode that stands between the light-emitting material and the glass, typically made of indium tin oxide (ITO). In a lab device, you can see trapped light shooting out the sides rather than traveling through to the viewer.
    “Untreated, it is the strongest waveguiding layer in the OLED,” Guo said. “We want to address the root cause of the problem.”
    By swapping out the ITO for a layer of silver just five nanometers thick, deposited on a seed layer of copper, Guo’s team maintained the electrode function while eliminating the waveguiding problem in the OLED layers altogether.

    AI used to predict unknown links between viruses and mammals

    A new University of Liverpool study could help scientists mitigate the future spread of zoonotic and livestock diseases caused by existing viruses.
    Researchers have used a form of artificial intelligence (AI) called machine learning to predict more than 20,000 unknown associations between known viruses and susceptible mammalian species. The findings, which are published in Nature Communications, could be used to help target disease surveillance programmes.
    Thousands of viruses are known to affect mammals, with recent estimates indicating that less than 1% of mammalian viral diversity has been discovered to date. Some of these viruses such as human and feline immunodeficiency viruses have a very narrow host range, whereas others such as rabies and West Nile viruses have very wide host ranges.
    “Host range is an important predictor of whether a virus is zoonotic and therefore poses a risk to humans. Most recently, SARS-CoV-2 has been found to have a relatively broad host range which may have facilitated its spill-over to humans. However, our knowledge of the host range of most viruses remains limited,” explains lead researcher Dr Maya Wardeh from the University’s Institute of Infection, Veterinary and Ecological Sciences.
    To address this knowledge gap, the researchers developed a novel machine learning framework to predict unknown associations between known viruses and susceptible mammalian species by consolidating three distinct perspectives — that of each virus, each mammal, and the network connecting them, respectively.
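    The published pipeline is considerably richer, but the core computation — scoring unobserved virus-mammal pairs by combining features of the virus, features of the mammal, and structural signals from the known association network — can be sketched roughly as follows. The species counts, traits, and choice of classifier are illustrative placeholders, not details from the paper.

      # Rough sketch of bipartite link prediction: score virus-mammal pairs
      # from virus traits, host traits, and a network-derived signal.
      # All sizes, features and data here are made up.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)
      n_virus, n_host = 50, 80
      virus_traits = rng.random((n_virus, 4))       # e.g. genome features
      host_traits = rng.random((n_host, 4))         # e.g. body mass, range
      known = rng.random((n_virus, n_host)) < 0.05  # sparse known links

      # Network perspective: latent factors from an SVD of the known network
      u, s, vt = np.linalg.svd(known.astype(float), full_matrices=False)
      virus_net, host_net = u[:, :5] * s[:5], vt[:5].T

      def pair_features(i, j):
          return np.concatenate([virus_traits[i], host_traits[j],
                                 virus_net[i] * host_net[j]])

      pairs = [(i, j) for i in range(n_virus) for j in range(n_host)]
      X = np.array([pair_features(i, j) for i, j in pairs])
      y = np.array([known[i, j] for i, j in pairs])

      clf = GradientBoostingClassifier().fit(X, y)
      scores = clf.predict_proba(X)[:, 1]
      # A high score on a pair with no recorded association flags a
      # plausible, as-yet-undetected virus-host link for surveillance.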
    Their results suggest that there are more than five times as many associations between known zoonotic viruses and wild and semi-domesticated mammals as previously thought. In particular, bats and rodents, which have been associated with recent outbreaks of emerging viruses such as coronaviruses and hantaviruses, were linked with an increased risk of zoonotic viruses.
    The model also predicts a five-fold increase in associations between wild and semi-domesticated mammals and viruses of economically important domestic species such as livestock and pets.
    Dr Wardeh said: “As viruses continue to move across the globe, our model provides a powerful way to assess potential hosts they have yet to encounter. Having this foresight could help to identify and mitigate zoonotic and animal-disease risks, such as spill-over from animal reservoirs into human populations.”
    Dr Wardeh is currently expanding the approach to predict the ability of ticks and insects to transmit viruses to birds and mammals, which will enable prioritisation of laboratory-based vector-competence studies worldwide to help mitigate future outbreaks of vector-borne diseases.
    Story Source:
    Materials provided by University of Liverpool. Note: Content may be edited for style and length.

    When did the first COVID-19 case arise?

    Using methods from conservation science, a new analysis suggests that the first case of COVID-19 arose between early October and mid-November, 2019 in China, with the most likely date of origin being November 17. David Roberts of the University of Kent, U.K., and colleagues present these findings in the open-access journal PLOS Pathogens.
    The origins of the ongoing COVID-19 pandemic remain unclear. The first officially identified case occurred in early December 2019. However, mounting evidence suggests that the original case may have emerged even earlier.
    To help clarify the timing of the onset of the pandemic, Roberts and colleagues repurposed a mathematical model originally developed by conservation scientists to determine the date of extinction of a species, based on recorded sightings of the species. For this analysis, they reversed the method to determine the date when COVID-19 most likely originated, according to when some of the earliest known cases occurred in 203 countries.
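    Concretely, the borrowed tool is optimal linear estimation (OLE), which takes the most extreme sighting dates of a species and returns a weighted estimate of the series’ endpoint; reversing the time axis turns an extinction estimate into an origination estimate. The sketch below follows the OLE formulas from the conservation literature rather than the authors’ own code, omits confidence intervals, and uses made-up example dates.

      # Optimal linear estimation (after Roberts & Solow) of the endpoint
      # of a sighting record, mirrored to date a first occurrence instead.
      # Times are plain numbers (e.g. days); example data are made up.
      import numpy as np
      from scipy.special import gammaln

      def ole_endpoint(times):
          t = np.sort(np.asarray(times, float))[::-1]  # most recent first
          k = len(t)
          # Shape parameter of the joint Weibull extreme-value model
          v = sum(np.log((t[0] - t[-1]) / (t[0] - t[j]))
                  for j in range(1, k - 1)) / (k - 1)
          # Covariance-like matrix of the order statistics (log-gamma form)
          lam = np.empty((k, k))
          for a in range(1, k + 1):
              for b in range(1, a + 1):
                  lam[a - 1, b - 1] = lam[b - 1, a - 1] = np.exp(
                      gammaln(2 * v + a) + gammaln(v + b)
                      - gammaln(v + a) - gammaln(b))
          e = np.ones(k)
          w = np.linalg.solve(lam, e)
          return (w / (e @ w)) @ t                     # weighted sum of dates

      def ole_origin(first_case_times):
          # Mirror time so the series' "end" becomes its origin
          return -ole_endpoint([-x for x in first_case_times])

      # Made-up earliest-case days (day 0 = an arbitrary reference date)
      print(ole_origin([30, 34, 41, 45, 52, 60, 66, 71]))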
    The analysis suggests that the first case occurred in China between early October and mid-November of 2019. The first case most likely arose on November 17, and the disease spread globally by January 2020. These findings support growing evidence that the pandemic arose sooner and grew more rapidly than officially accepted.
    The analysis also identified when COVID-19 is likely to have spread to the first five countries outside of China, as well as other continents. For instance, it estimates that the first case outside of China occurred in Japan on January 3, 2020, the first case in Europe occurred in Spain on January 12, 2020, and the first case in North America occurred in the United States on January 16, 2020.
    The researchers note that their novel method could be applied to better understand the spread of other infectious diseases in the future. Meanwhile, better knowledge of the origins of COVID-19 could improve understanding of its continued spread.
    Roberts adds, “The method we used was originally developed by me and a colleague to date extinctions, however, here we use it to date the origination and spread of COVID-19. This novel application within the field of epidemiology offers a new opportunity to understand the emergence and spread of diseases as it only requires a small amount of data.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

    Nanotech and AI could hold key to unlocking global food security challenge

    ‘Precision agriculture’, where farmers respond in real time to changes in crop growth using nanotechnology and artificial intelligence (AI), could offer a practical solution to the challenges threatening global food security, a new study reveals.
    Climate change, increasing populations, competing demands on land for production of biofuels and declining soil quality mean it is becoming increasingly difficult to feed the world’s population.
    The United Nations (UN) estimates that 840 million people will be affected by hunger by 2030, but researchers have developed a roadmap combining smart and nano-enabled agriculture with AI and machine learning capabilities that could help to reduce this number.
    Publishing their findings today in Nature Plants, an international team of researchers led by the University of Birmingham sets out the following steps needed to use AI to harness the power of nanomaterials safely, sustainably and responsibly:
    • Understand the long-term fate of nanomaterials in agricultural environments — how nanomaterials can interact with roots, leaves and soil;
    • Assess the long-term life cycle impact of nanomaterials in the agricultural ecosystem, such as how repeated application of nanomaterials will affect soils;
    • Take a systems-level approach to nano-enabled agriculture — use existing data on soil quality, crop yield and nutrient-use efficiency (NUE) to predict how nanomaterials will behave in the environment; and
    • Use AI and machine learning to identify key properties that will control the behaviour of nanomaterials in agricultural settings.
    Study co-author Iseult Lynch, Professor of Environmental Nanosciences at the University of Birmingham, commented: “Current estimates show nearly 690 million people are hungry — almost nine per cent of the planet’s population. Finding sustainable agricultural solutions to this problem requires us to take bold new approaches and integrate knowledge from diverse fields, such as materials science and informatics.
    “Precision agriculture, using nanotechnology and artificial intelligence, offers exciting opportunities for sustainable food production. We can link existing models for nutrient cycling and crop productivity with nanoinformatics approaches to help both crops and soil perform better — safely, sustainably and responsibly.”
    The main driver for innovation in agritech is the need to feed the increasing global population with a decreasing agricultural land area, whilst conserving soil health and protecting environmental quality.

    Quantum simulation: Measurement of entanglement made easier

    Researchers have developed a method that makes previously hard-to-access properties of quantum systems measurable. The new method for determining the quantum state in quantum simulators reduces the number of necessary measurements and makes work with quantum simulators much more efficient.
    In a few years, a new generation of quantum simulators could provide insights that would not be possible using simulations on conventional supercomputers. Quantum simulators are capable of processing a great amount of information since they quantum mechanically superimpose an enormously large number of bit states. For this reason, however, it also proves difficult to read this information out of the quantum simulator. In order to be able to reconstruct the quantum state, a very large number of individual measurements are necessary. The method used to read out the quantum state of a quantum simulator is called quantum state tomography. “Each measurement provides a ‘cross-sectional image’ of the quantum state. You then put these cross-sectional images together to form the complete quantum state,” explains theoretical physicist Christian Kokail from Peter Zoller’s team at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences and the Department of Experimental Physics at the University of Innsbruck. The number of measurements needed in the lab increases very rapidly with the size of the system. “The number of measurements grows exponentially with the number of qubits,” the physicist says. The Innsbruck researchers have now succeeded in developing a much more efficient method for quantum simulators.
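    To see what those “cross-sectional images” are and why the measurement count explodes, consider textbook tomography of a single qubit: three measured Pauli expectation values pin down the 2x2 density matrix, and full tomography of n qubits needs the expectation values of all 4^n Pauli strings. The snippet below shows that standard procedure, not the Innsbruck protocol.

      # Textbook single-qubit state tomography: the measured Pauli
      # expectation values <X>, <Y>, <Z> are the "cross-sectional images"
      # that assemble into rho = (I + <X>X + <Y>Y + <Z>Z) / 2.
      import numpy as np

      I2 = np.eye(2)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Y = np.array([[0, -1j], [1j, 0]])
      Z = np.array([[1, 0], [0, -1]], dtype=complex)

      # Made-up measured values for a state close to |+>; in the lab each
      # number is estimated from many repeated projective measurements.
      ex, ey, ez = 0.94, 0.02, 0.05

      rho = (I2 + ex * X + ey * Y + ez * Z) / 2
      print(np.round(rho, 3))

      # For n qubits, full tomography needs all 4**n Pauli-string
      # expectation values -- the exponential scaling mentioned above.
      n = 10
      print(f"Pauli strings for {n} qubits: {4**n}")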
    Efficient method that delivers new insights
    Insights from quantum field theory allow quantum state tomography to be much more efficient, i.e., to be performed with significantly fewer measurements. “The fascinating thing is that it was not at all clear from the outset that the predictions from quantum field theory could be applied to our quantum simulation experiments,” says theoretical physicist Rick van Bijnen. “Studying older scientific papers from this field happened to lead us down this track.” Quantum field theory provides the basic framework of the quantum state in the quantum simulator. Only a few measurements are then needed to fit the details into this basic framework. Based on this, the Innsbruck researchers have developed a measurement protocol by which tomography of the quantum state becomes possible with a drastically reduced number of measurements. At the same time, the new method allows new insights into the structure of the quantum state to be obtained. The physicists tested the new method with experimental data from an ion trap quantum simulator of the Innsbruck research group led by Rainer Blatt and Christian Roos. “In the process, we were now able to measure properties of the quantum state that were previously not observable in this quality,” Kokail recounts.
    Verification of the result
    A verification protocol developed by the group together with Andreas Elben and Benoit Vermersch two years ago can be used to check whether the structure of the quantum state actually matches the expectations from quantum field theory. “We can use further random measurements to check whether the basic framework for tomography that we developed based on the theory actually fits or is completely wrong,” explains Christian Kokail. The protocol raises a red flag if the framework does not fit. Of course, this would also be an interesting finding for the physicists, because it would possibly provide clues for the not yet fully understood relationship with quantum field theory. At the moment, the physicists around Peter Zoller are developing quantum protocols in which the basic framework of the quantum state is not stored on a classical computer, but is realized directly on the quantum simulator.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.