More stories


    Humans may not be able to handle as much heat as scientists thought

    More than 2,000 people dead from extreme heat and wildfires raging in Portugal and Spain. High temperature records shattered from England to Japan. Overnights that fail to cool.

    Brutal heat waves are quickly becoming the hallmark of the summer of 2022.

    And even as climate change continues to crank up the temperature, scientists are working fast to understand the limits of humans’ resilience to heat extremes. Recent research suggests that heat stress tolerance in people may be lower than previously thought. If true, millions more people could be at risk of succumbing to dangerous temperatures sooner than expected.


    “Bodies are capable of acclimating over a period of time” to temperature changes, says Vivek Shandas, an environmental planning and climate adaptation researcher at Portland State University in Oregon. Over geologic time, there have been many climate shifts that humans have weathered, Shandas says. “[But] we’re in a time when these shifts are happening much more quickly.”

    Just halfway through 2022, heat waves have already ravaged many countries. The heat arrived early in southern Asia: In April, Wardha, India, saw a high of 45° Celsius (113° Fahrenheit); in May, Nawabshah, Pakistan, recorded temperatures as high as 49.5° C (121.1° F).

    Extreme heat alerts blared across Europe beginning in June and continuing through July, the rising temperatures exacerbating drought and sparking wildfires. The United Kingdom shattered its hottest-ever record July 19 when temperatures reached 40.3° C in the English village of Coningsby. The heat fueled fires in France, forcing thousands to evacuate from their homes. 

    And the litany goes on: Japan experienced its worst June heat wave since record-keeping began in 1875, leading to the country’s highest-ever recorded June temperature of 40.2° C.  China’s coastal megacities, from Shanghai to Chengdu, were hammered by heat waves in July as temperatures in the region also rose above 40° C. And in the United States, a series of heat waves gripped the Midwest, the South and the West in June and July. Temperatures soared to 42° C in North Platte, Neb., and to 45.6° C in Phoenix.

    The current global rate of warming on Earth is unprecedented (SN: 7/24/19). And scientists have long predicted that human-caused climate change will increase the occurrence of heat waves. Globally, humans’ exposure to extreme heat tripled from 1983 to 2016, particularly in South Asia.

    The heat already is taking an increasing toll on human health. It can cause heat cramps, heat exhaustion and heat stroke, which is often fatal. Dehydration can lead to kidney and heart disease. Extreme heat can even change how we behave, increasing aggression and decreasing our ability to focus (SN: 8/18/21).

    Staying cool

    The human body has various ways to shed excess heat and keep the core of the body at an optimal temperature of about 37° C (98.6° F). The heart pumps faster, speeding up blood flow that carries heat to the skin (SN: 4/3/18). Air passing over the skin can wick away some of that heat. Evaporative cooling — sweating — also helps.

    But there’s a limit to how much heat humans can endure. In 2010, scientists estimated the theoretical heat stress limit to be a “wet bulb” temperature of 35° C. Wet bulb temperatures depend on a combination of humidity and the “dry bulb” air temperature measured by a thermometer. Those variables mean a place could hit a wet bulb temperature of 35° C in different ways — for instance, if the air is that temperature and there’s 100 percent humidity, or if the air temperature is 45° C and there’s 50 percent humidity. The difference is due to evaporative cooling.

    When water evaporates from the skin or another surface, it steals away energy in the form of heat, briefly cooling that surface. That means that in drier regions, the wet bulb temperature — where that ephemeral cooling effect happens readily — will be lower than the actual air temperature. In humid regions, however, wet and dry bulb temperatures are similar, because the air is so moist it’s difficult for sweat to evaporate quickly.

    So when thinking about heat stress on the body, scientists use wet bulb temperatures because they are a measure of how much cooling through evaporation is possible in a given climate, says Daniel Vecellio, a climate scientist at Penn State.

    “Both hot/dry and warm/humid environments can be equally dangerous,” Vecellio says — and this is where the body’s different cooling strategies come into play. In hot, dry areas, where the outside temperature may be much hotter than skin temperature, human bodies rely entirely on sweating to cool down, he says. In warm, humid areas, where the air temperature may actually be cooler than skin temperatures (but the humidity makes it seem warmer than it is), the body can’t sweat as efficiently. Instead, the cooler air passing over the skin can draw away the heat.
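    The trade-off between dry bulb temperature and humidity described above can be sketched numerically. A common empirical fit is Stull’s 2011 formula (an assumption here; the studies discussed may use more precise psychrometric methods), valid roughly for air temperatures of 0° to 50° C and humidity above about 5 percent:

```python
import math

def wet_bulb(temp_c, rel_humidity):
    """Approximate wet bulb temperature (deg C) from dry bulb temperature
    (deg C) and relative humidity (percent), using Stull's 2011 fit."""
    t, rh = temp_c, rel_humidity
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# The two routes to a ~35 deg C wet bulb temperature mentioned above:
print(round(wet_bulb(35, 100), 1))  # ~35.1: 35 deg C air, 100% humidity
print(round(wet_bulb(45, 50), 1))   # ~35.2: 45 deg C air, 50% humidity
```

    Both conditions land near the theoretical 35° C limit, even though their thermometer readings differ by 10 degrees.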

    How hot is too hot?

    Given the complexity of the body’s cooling system, and the diversity of human bodies, there isn’t really a one-size-fits-all threshold temperature for heat stress for everybody. “No one’s body runs at 100 percent efficiency,” Vecellio says. Different body sizes, the ability to sweat, age and acclimation to a regional climate all have a role.

    Still, for the last decade, that theoretical wet bulb 35° C number has been considered to be the point beyond which humans can no longer regulate their bodies’ temperatures. But recent laboratory-based research by Vecellio and his colleagues suggests that a general, real-world threshold for human heat stress is much lower, even for young and healthy adults.

    The researchers tracked heat stress in two dozen subjects ranging in age from 18 to 34, under a variety of controlled climates. In the series of experiments, the team varied humidity and temperature conditions within an environmental chamber, sometimes holding temperature constant while varying the humidity, and sometimes vice versa.

    The subjects exerted themselves within the chamber just enough to simulate minimal outdoor activity, walking on a treadmill or pedaling slowly on a bike with no resistance. During these experiments, which lasted for 1.5 to two hours, the researchers measured the subjects’ skin temperatures using wireless probes and assessed their core temperatures using a small telemetry pill that the subjects swallowed.

    In warm and humid conditions, the subjects in the study were unable to tolerate heat stress at wet bulb temperatures closer to 30° or 31° C, the team estimates. In hot and dry conditions, that wet bulb temperature was even lower, ranging from 25° to 28° C, the researchers reported in the February Journal of Applied Physiology. For context, in a very dry environment at about 10 percent humidity, a wet bulb temperature of 25° C would correspond to an air temperature of about 50° C (122° F).

    These results suggest that there is much more work to be done to understand what humans can endure under real-world heat and humidity conditions, but that the threshold may be much lower than thought, Vecellio says. The 2010 study’s theoretical finding of 35° C may still be “the upper limit,” he adds. “We’re showing the floor.”

    And that’s for young, healthy adults doing minimal activity. Thresholds for heat stress are expected to be lower for outdoor workers required to exert themselves, or for the elderly or children. Assessing laboratory limits for more at-risk people is the subject of ongoing work for Vecellio and his colleagues.

    A worker wipes away sweat in Toulouse, France, on July 13. An intense heat wave swept across Europe in mid-July, engulfing Spain, Portugal, France, England and other countries. VALENTINE CHAPUIS/AFP via Getty Images

    If the human body’s tolerance for heat stress is generally lower than scientists have realized, that could mean millions more people will be at risk from the deadliest heat sooner than scientists have realized. As of 2020, there were few reports of wet bulb temperatures around the world reaching 35° C, but climate simulations project that limit could be regularly exceeded in parts of South Asia and the Middle East by the middle of the century.

    Some of the deadliest heat waves in the last two decades were at lower wet bulb temperatures: Neither the 2003 European heat wave, which caused an estimated 30,000 deaths, nor the 2010 Russian heat wave, which killed over 55,000 people, exceeded wet bulb temperatures of 28° C.

    Protecting people

    How best to inform the public about heat risk is “the part that I find to be tricky,” says Shandas, who wasn’t involved in Vecellio’s research. Shandas developed the scientific protocol for the National Integrated Heat Health Information System’s Urban Heat Island mapping campaign in the United States.

    It’s very useful to have this physiological data from a controlled, precise study, Shandas says, because it allows us to better understand the science behind humans’ heat stress tolerance. But physiological and environmental variability still make it difficult to know how best to apply these findings to public health messaging, such as extreme heat warnings, he says. “There are so many microconsiderations that show up when we’re talking about a body’s ability to manage [its] internal temperature.”

    One of those considerations is the ability of the body to quickly acclimate to a temperature extreme. Regions that aren’t used to extreme heat may experience greater mortality, even at lower temperatures, simply because people there aren’t used to the heat. The 2021 heat wave in the Pacific Northwest wasn’t just extremely hot — it was extremely hot for that part of the world at that time of year, which makes it more difficult for the body to adapt, Shandas says (SN: 6/29/21).

    Heat that arrives unusually early and right on the heels of a cool period can also be more deadly, says Larry Kalkstein, a climatologist at the University of Miami and the chief heat science advisor for the Washington, D.C.–based nonprofit Adrienne Arsht-Rockefeller Foundation Resilience Center. “Often early season heat waves in May and June are more dangerous than those in August and September.”

    One way to improve communities’ resilience to the heat may be to treat heat waves like other natural disasters — including giving them names and severity rankings (SN: 8/14/20). As developed by an international coalition known as the Extreme Heat Resilience Alliance, those rankings form the basis for a new type of heat wave warning that explicitly considers the factors that impact heat stress, such as wet bulb temperature and acclimation, rather than just temperature extremes.

    The rankings also consider factors such as cloud cover, wind and how hot the temperatures are overnight. “If it’s relatively cool overnight, there’s not as much negative health outcome,” says Kalkstein, who created the system. But overnight temperatures aren’t getting as low as they used to in many places. In the United States, for example, the average minimum temperatures at nighttime are now about 0.8° C warmer than they were during the first half of the 20th century, according to the country’s Fourth National Climate Assessment, released in 2018 (SN: 11/28/18).

    By naming heat waves like hurricanes, officials hope to increase citizens’ awareness of the dangers of extreme heat. Heat wave rankings could also help cities tailor their interventions to the severity of the event. Six cities are currently testing the system’s effectiveness: four in the United States, plus Athens, Greece, and Seville, Spain. On July 24, with temperatures heading toward 42° C, Seville became the first city in the world to officially name a heat wave, sounding the alarm for Heat Wave Zoe.

    As 2022 continues to smash temperature records around the globe, such warnings may come not a moment too soon.


    Anti-butterfly effect enables new benchmarking of quantum-computer performance

    Research drawing on the quantum “anti-butterfly effect” solves a longstanding experimental problem in physics and establishes a method for benchmarking the performance of quantum computers.
    “Using the simple, robust protocol we developed, we can determine the degree to which quantum computers can effectively process information, and it applies to information loss in other complex quantum systems, too,” said Bin Yan, a quantum theorist at Los Alamos National Laboratory.
    Yan is corresponding author of the paper on benchmarking information scrambling published today in Physical Review Letters. “Our protocol quantifies information scrambling in a quantum system and unambiguously distinguishes it from fake positive signals in the noisy background caused by quantum decoherence,” he said.
    Noise in the form of decoherence erases all the quantum information in a complex system such as a quantum computer as it couples with the surrounding environment. Information scrambling through quantum chaos, on the other hand, spreads information across the system, protecting it and allowing it to be retrieved.
    Coherence is a quantum state that enables quantum computing, and decoherence refers to the loss of that state as information leaks to the surrounding environment.
    “Our method, which draws on the quantum anti-butterfly effect we discovered two years ago, evolves a system forward and backward through time in a single loop, so we can apply it to any system with time-reversing the dynamics, including quantum computers and quantum simulators using cold atoms,” Yan said.
    The Los Alamos team demonstrated the protocol with simulations on IBM cloud-based quantum computers.
    The inability to distinguish decoherence from information scrambling has stymied experimental research into the phenomenon. First studied in black-hole physics, information scrambling has proved relevant across a wide range of research areas, including quantum chaos in many-body systems, phase transitions, quantum machine learning and quantum computing. Experimental platforms for studying information scrambling include superconductors, trapped ions and cloud-based quantum computers.
    Practical application of the quantum anti-butterfly effect
    Yan and co-author Nikolai Sinitsyn published a paper in 2020 proving that evolving quantum processes backwards on a quantum computer to damage information in the simulated past causes little change when returned to the present. In contrast, a classical-physics system smears the information irrecoverably during the back-and-forth time loop.
    Building on this discovery, Yan, Sinitsyn and co-author Joseph Harris, a University of Edinburgh graduate student who worked on the current paper as a participant in the Los Alamos Quantum Computing Summer School, developed the protocol. It prepares a quantum system and subsystem, evolves the full system forward in time, causes a change in a different subsystem, then evolves the system backward for the same amount of time. Measuring the overlap of information between the two subsystems shows how much information has been preserved by scrambling and how much lost to decoherence.
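    The loop structure of the protocol can be illustrated with a toy numpy simulation (a sketch of the general evolve-forward, perturb, evolve-backward idea only; the system size, the Haar-random scrambler and the choice of perturbed and measured qubits are illustrative assumptions, and this does not reproduce the paper’s benchmarking analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4
dim = 2 ** n_qubits

def haar_unitary(d):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # rescale columns to fix the QR phase ambiguity

def reduced_first_qubit(psi):
    """Density matrix of the first qubit (partial trace over the others)."""
    m = psi.reshape(2, dim // 2)
    return m @ m.conj().T

# prepare |00...0>, evolve forward, perturb the last qubit, evolve backward
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0
u = haar_unitary(dim)
x_last = np.kron(np.eye(dim // 2), np.array([[0.0, 1.0], [1.0, 0.0]]))
psi_back = u.conj().T @ (x_last @ (u @ psi0))

# overlap of the first qubit's state before and after the time loop
rho_before = reduced_first_qubit(psi0)
rho_after = reduced_first_qubit(psi_back)
overlap = float(np.real(np.trace(rho_before @ rho_after)))
print(f"first-qubit overlap after the loop: {overlap:.3f}")
```

    Measuring how much of one subsystem’s state survives the loop after another subsystem is disturbed is the kind of overlap measurement the protocol builds on.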
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.


    Engineering roboticists discover alternative physics

    A precursor step to understanding physics is identifying relevant variables. Columbia Engineers developed an AI program to tackle a longstanding problem: whether it is possible to identify state variables from only high-dimensional observational data. Using video recordings of a variety of physical dynamical systems, the algorithm discovered the intrinsic dimension of the observed dynamics and identified candidate sets of state variables — without prior knowledge of the underlying physics.
    Energy, Mass, Velocity. These three variables make up Einstein’s iconic equation E = mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concept of energy, mass, and velocity, not even Einstein could discover relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.
    This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
    The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed a video of a swinging double-pendulum known to have exactly four “state variables” — the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI outputted the answer: 4.7.
    “We thought this answer was close enough,” said Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering, where the work was primarily done. “Especially since all the AI had access to was raw video footage, without any knowledge of physics or geometry. But we wanted to know what the variables actually were, not just their number.”
    The researchers then proceeded to visualize the actual variables that the program identified. Extracting the variables themselves was not easy, since the program cannot describe them in any intuitive way that would be understandable to humans. After some probing, it appeared that two of the variables the program chose loosely corresponded to the angles of the arms, but the other two remain a mystery. “We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities,” explained Boyuan Chen PhD ’22, now an assistant professor at Duke University, who led the work. “But nothing seemed to match perfectly.” The team was confident that the AI had found a valid set of four variables, since it was making good predictions, “but we don’t yet understand the mathematical language it is speaking,” he explained.


    Seeing the light: Researchers develop new AI system using light to learn associatively

    Researchers at Oxford University’s Department of Materials, working in collaboration with colleagues from Exeter and Münster, have developed an on-chip optical processor capable of detecting similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.
    The new research published in Optica took its inspiration from Nobel Prize laureate Ivan Pavlov’s discovery of classical conditioning. In his experiments, Pavlov found that by providing another stimulus during feeding, such as the sound of a bell or metronome, his dogs began to link the two experiences and would salivate at the sound alone. The repeated associations of two unrelated events paired together could produce a learned response — a conditional reflex.
    Co-first author Dr James Tan You Sian, who did this work as part of his DPhil in the Department of Materials, University of Oxford said: ‘Pavlovian associative learning is regarded as a basic form of learning that shapes the behaviour of humans and animals — but adoption in AI systems is largely unheard of. Our research on Pavlovian learning in tandem with optical parallel processing demonstrates the exciting potential for a variety of AI tasks.’
    The neural networks used in most AI systems often require a substantial number of data examples during a learning process — training a model to reliably recognise a cat could use up to 10,000 cat/non-cat images — at a computational and processing cost.
    Rather than relying on backpropagation favoured by neural networks to ‘fine-tune’ results, the Associative Monadic Learning Element (AMLE) uses a memory material that learns patterns to associate together similar features in datasets — mimicking the conditional reflex observed by Pavlov in the case of a ‘match’.
    The AMLE inputs are paired with the correct outputs to supervise the learning process, and the memory material can be reset using light signals. In testing, the AMLE could correctly identify cat/non-cat images after being trained with just five pairs of images.
    The considerable performance capabilities of the new optical chip over a conventional electronic chip are down to two key differences in design: a unique network architecture that incorporates associative learning as a building block, rather than neurons and a neural network; and the use of ‘wavelength-division multiplexing’ to send multiple optical signals on different wavelengths along a single channel, increasing computational speed. The chip hardware uses light to send and retrieve data to maximise information density: several signals on different wavelengths are sent simultaneously for parallel processing, which increases the detection speed of recognition tasks.
    Professor Wolfram Pernice, co-author from Münster University explained: ‘The device naturally captures similarities in datasets while doing so in parallel using light to increase the overall computation speed — which can far exceed the capabilities of conventional electronic chips.’
    An associative learning approach could complement neural networks rather than replace them, clarified co-first author Professor Zengguang Cheng, now at Fudan University.
    ‘It is more efficient for problems that don’t need substantial analysis of highly complex features in the datasets’ said Professor Cheng. ‘Many learning tasks are volume based and don’t have that level of complexity — in these cases, associative learning can complete the tasks more quickly and at a lower computational cost.’
    ‘It is increasingly evident that AI will be at the centre of many innovations we will witness in the coming phase of human history. This work paves the way towards realising fast optical processors that capture data associations for particular types of AI computations, although there are still many exciting challenges ahead.’ said Professor Harish Bhaskaran, who led the study.
    Story Source:
    Materials provided by University of Oxford.


    Improving image sensors for machine vision

    Image sensors measure light intensity, but angle, spectrum, and other aspects of light must also be extracted to significantly advance machine vision.
    In Applied Physics Letters, published by AIP Publishing, researchers at the University of Wisconsin-Madison, Washington University in St. Louis, and OmniVision Technologies highlight the latest nanostructured components integrated on image sensor chips that are most likely to make the biggest impact in multimodal imaging.
    The developments could enable autonomous vehicles to see around corners instead of just a straight line, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.
    “Image sensors will gradually undergo a transition to become the ideal artificial eyes of machines,” co-author Yurui Qu, from the University of Wisconsin-Madison, said. “An evolution leveraging the remarkable achievement of existing imaging sensors is likely to generate more immediate impacts.”
    Image sensors, which convert light into electrical signals, are composed of millions of pixels on a single chip. The challenge is how to combine and miniaturize multifunctional components as part of the sensor.
    In their own work, the researchers detailed a promising approach to detect multiple-band spectra by fabricating an on-chip spectrometer. They deposited photonic crystal filters made up of silicon directly on top of the pixels to create complex interactions between incident light and the sensor.
    The pixels beneath the films record the distribution of light energy, from which light spectral information can be inferred. The device — less than a hundredth of a square inch in size — is programmable to meet various dynamic ranges, resolution levels, and almost any spectral regime from visible to infrared.
    The researchers built a component that detects angular information to measure depth and construct 3D shapes at subcellular scales. Their work was inspired by directional hearing sensors found in animals, like geckos, whose heads are too small to determine where sound is coming from in the same way humans and other animals can. Instead, they use coupled eardrums to measure the direction of sound within a size that is orders of magnitude smaller than the corresponding acoustic wavelength.
    Similarly, pairs of silicon nanowires were constructed as resonators to support optical resonance. The optical energy stored in two resonators is sensitive to the incident angle. The wire closest to the light sends the strongest current. By comparing the strongest and weakest currents from both wires, the angle of the incoming light waves can be determined.
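    A toy model shows how comparing two currents can pin down an angle (the cosine response, the 10-degree offset and all numbers below are illustrative assumptions, not the device’s actual response):

```python
import math

DELTA = math.radians(10)  # assumed angular offset between the two wire responses

def currents(theta):
    """Toy photocurrents of the paired nanowires for incidence angle theta (rad):
    each wire responds most strongly when light arrives from its side."""
    return math.cos(theta - DELTA), math.cos(theta + DELTA)

def recover_angle(i1, i2):
    """Invert the toy model: compare the two currents to get the angle back.
    (i1 - i2) / (i1 + i2) equals tan(theta) * tan(DELTA) for this model."""
    return math.atan((i1 - i2) / ((i1 + i2) * math.tan(DELTA)))

theta_in = math.radians(25)
i1, i2 = currents(theta_in)
theta_out = recover_angle(i1, i2)
print(round(math.degrees(theta_out), 1))  # 25.0
```

    The stronger current always comes from the wire closer to the light, and the ratio of the two fixes the incidence angle, mirroring the comparison described above.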
    Millions of these nanowires can be placed on a 1-square-millimeter chip. The research could support advances in lensless cameras, augmented reality, and robotic vision.
    Story Source:
    Materials provided by American Institute of Physics.


    How to make jet fuel from sunlight, air and water vapor

    Jet fuel can now be siphoned from the air.

    Or at least that’s the case in Móstoles, Spain, where researchers demonstrated that an outdoor system could produce kerosene, used as jet fuel, with three simple ingredients: sunlight, carbon dioxide and water vapor. Solar kerosene could replace petroleum-derived jet fuel in aviation and help stabilize greenhouse gas emissions, the researchers report in the July 20 Joule.

    Burning solar-derived kerosene releases carbon dioxide, but only as much as is used to make it, says Aldo Steinfeld, an engineer at ETH Zurich. “That makes the fuel carbon neutral, especially if we use carbon dioxide captured directly from the air.”


    Kerosene is the fuel of choice for aviation, a sector responsible for around 5 percent of human-caused greenhouse gas emissions. Finding sustainable alternatives has proven difficult, especially for long-distance aviation, because kerosene is packed with so much energy, says chemical physicist Ellen Stechel of Arizona State University in Tempe, who was not involved in the study.

    In 2015, Steinfeld and his colleagues synthesized solar kerosene in the laboratory, but no one had produced the fuel entirely in a single system in the field. So Steinfeld and his team positioned 169 sun-tracking mirrors to reflect and focus radiation equivalent to about 2,500 suns into a solar reactor atop a 15-meter-tall tower. The reactor has a window to let the light in, ports that supply carbon dioxide and water vapor, and a material called porous ceria that is used to catalyze chemical reactions.

    Within the solar reactor, porous ceria (shown) gets heated by sunlight and reacts with carbon dioxide and water vapor to produce syngas, a mixture of hydrogen gas and carbon monoxide. ETH Zurich

    When heated with solar radiation, the ceria reacts with carbon dioxide and water vapor in the reactor to produce syngas — a mixture of hydrogen gas and carbon monoxide. The syngas is then piped to the tower’s base where a machine converts it into kerosene and other hydrocarbons.

    Over nine days of operation, the researchers found that the tower converted about 4 percent of the solar energy used into roughly 5,191 liters of syngas, which was used to synthesize both kerosene and diesel. This proof-of-principle setup produced about a liter of kerosene a day, Steinfeld says.

    “It’s a major milestone,” Stechel says, though the efficiency needs to be improved for the technology to be useful to industry. For context, a Boeing 747 passenger jet burns around 19,000 liters of fuel during takeoff and the ascent to cruising altitude. Recovering heat unused by the system and improving the ceria’s heat absorption could boost the tower’s efficiency to more than 20 percent, making it economically practical, the researchers say.
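    A back-of-envelope sketch using only the figures quoted above puts the scale-up challenge in perspective (the per-day kerosene rate is this demonstration plant’s, not a projected commercial one):

```python
syngas_total = 5_191     # liters of syngas produced over the nine-day run
run_days = 9
kerosene_per_day = 1     # approximate liters of kerosene per day
takeoff_burn = 19_000    # liters a 747 burns during takeoff and climb

print(round(syngas_total / run_days))    # ~577 liters of syngas per day
print(takeoff_burn // kerosene_per_day)  # 19000 tower-days per takeoff
```

    At roughly a liter a day, a single demonstration tower would need on the order of 19,000 days of output to fuel one takeoff and climb, which is why efficiency gains matter so much.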


    'IcePic' algorithm outperforms humans in predicting ice crystal formation

    Cambridge scientists have developed an artificially intelligent algorithm capable of beating scientists at predicting how and when different materials form ice crystals.
    The program — IcePic — could help atmospheric scientists improve climate change models in the future. Details are published today in the journal PNAS.
    Water has some unusual properties, such as expanding when it turns into ice. Understanding water and how it freezes around different molecules has wide-reaching implications in a broad range of areas, from weather systems that can affect whole continents to storing biological tissue samples in a hospital.
    The Celsius temperature scale was designed around the transition between water and ice; however, whilst ice always melts at 0°C, water doesn’t necessarily freeze at 0°C. Water can still be in liquid form at -40°C, and it is impurities in water that enable ice to freeze at higher temperatures. One of the biggest aims of the field has been to predict the ability of different materials to promote the formation of ice — known as a material’s “ice nucleation ability.”
    Researchers at the University of Cambridge have developed a ‘deep learning’ tool able to predict the ice nucleation ability of different materials — one that beat scientists in an online ‘quiz’ in which they were asked to predict when ice crystals would form.
    Deep learning is how artificial intelligence (AI) learns to draw insights from raw data. It finds its own patterns in the data, freeing it of the need for human input so that it can process results faster and more precisely. In the case of IcePic, it can infer different ice crystal formation properties around different materials. IcePic has been trained on thousands of images so that it can look at completely new systems and infer accurate predictions from them.
    The team set up a quiz in which scientists were asked to predict when ice crystals would form in different conditions shown by 15 different images. These results were then measured against IcePic’s performance. When put to the test, IcePic was far more accurate in determining a material’s ice nucleation ability than over 50 researchers from across the globe. Moreover, it helped identify where humans were going wrong.
    Michael Davies, a PhD student in the ICE lab at the Yusuf Hamied Department of Chemistry, Cambridge, and University College London, London, first author of the study, said: “It was fascinating to learn that the images of water we showed IcePic contain enough information to actually predict ice nucleation.
    “Despite us — that is, human scientists — having a 75-year head start in terms of the science, IcePic was still able to do something we couldn’t.”
    Determining the formation of ice has become especially relevant in climate change research.
    Water continuously moves within the Earth and its atmosphere, condensing to form clouds, and precipitating in the form of rain and snow. Different foreign particles affect how ice forms in these clouds, for example, smoke particles from pollution compared to smoke particles from a volcano. Understanding how different conditions affect our cloud systems is essential for more accurate weather predictions.
    “The nucleation of ice is really important for the atmospheric science community and climate modelling,” said Davies. “At the moment there is no viable way to predict ice nucleation other than direct experiments or expensive simulations. IcePic should open up a lot more applications for discovery.”
    Story Source:
    Materials provided by University of Cambridge.


    A new leap in understanding nickel oxide superconductors

    A new study shows that nickel oxide superconductors, which conduct electricity with no loss at higher temperatures than conventional superconductors do, contain a type of quantum matter called charge density waves, or CDWs, that can accompany superconductivity.
    The presence of CDWs shows that these recently discovered materials, also known as nickelates, are capable of forming correlated states — “electron soups” that can host a variety of quantum phases, including superconductivity, researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University reported in Nature Physics today.
    “Unlike in any other superconductor we know about, CDWs appear even before we dope the material by replacing some atoms with others to change the number of electrons that are free to move around,” said Wei-Sheng Lee, a SLAC lead scientist and investigator with the Stanford Institute for Materials and Energy Science (SIMES) who led the study.
    “This makes the nickelates a very interesting new system — a new playground for studying unconventional superconductors.”
    Nickelates and cuprates
    In the 35 years since the first unconventional “high-temperature” superconductors were discovered, researchers have been racing to find one that could carry electricity with no loss at close to room temperature. This would be a revolutionary development, allowing things like perfectly efficient power lines, maglev trains and a host of other futuristic, energy-saving technologies.