More stories

  • Board games are boosting math ability in young children

    Board games based on numbers, like Monopoly, Othello and Chutes and Ladders, make young children better at math, according to a comprehensive review of research published on the topic over the last 23 years.
    Board games are already known to enhance learning and development including reading and literacy.
    Now this new study, published in the peer-reviewed journal Early Years, finds that, for three- to nine-year-olds, number-based board games help to improve counting, addition, and the ability to recognize whether a number is higher or lower than another.
    The researchers say children benefit from programs — or interventions — where they play board games a few times a week supervised by a teacher or another trained adult.
    “Board games enhance mathematical abilities for young children,” says lead author Dr. Jaime Balladares, from Pontificia Universidad Católica de Chile, in Santiago, Chile.
    “Using board games can be considered a strategy with potential effects on basic and complex math skills.

    “Board games can easily be adapted to include learning objectives related to mathematical skills or other domains.”
    Board games, in which players take turns moving pieces around a board, differ from games that involve specific skills or gambling.
    Board game rules are fixed, which limits what a player can do, and the moves made on the board usually determine the overall playing situation.
    However, preschools rarely use board games. This study aimed to compile the available evidence of their effects on children.
    The researchers set out to investigate the scale of the effects of physical board games in promoting learning in young children.

    They based their findings on a review of 19 studies published from 2000 onwards involving children aged from three to nine years. All except one study focused on the relationship between board games and mathematical skills.
    All children participating in the studies received special board game sessions which took place on average twice a week for 20 minutes over one-and-a-half months. Teachers, therapists, or parents were among the adults who led these sessions.
    In some of the 19 studies, children were assigned either to a number-based board game or to a board game that did not focus on numeracy skills. In others, all children played number-based board games but were allocated to different types, e.g. dominoes.
    All children were assessed on their math performance before and after the intervention sessions which were designed to encourage skills such as counting out loud.
    The authors rated success according to four categories including basic numeric competency such as the ability to name numbers, and basic number comprehension e.g. ‘nine is greater than three’.
    The other categories were deepened number comprehension — where a child can accurately add and subtract — and interest in mathematics.
    In some cases, parents attended a training session to learn arithmetic that they could then use in the games.
    Results showed that, for more than half (52%) of the tasks analyzed, children’s math skills improved significantly after the sessions.
    In nearly a third (32%) of cases, children in the intervention groups gained better results than those who did not take part in the board game intervention.
    The results also show that, among the studies analyzed to date, board games targeting language or literacy, although implemented, were not scientifically evaluated (i.e. by comparing control and intervention groups, or pre- and post-intervention measures) for their impact on children.
    Designing and implementing board games along with scientific procedures to evaluate their efficacy, therefore, are “urgent tasks to develop in the next few years,” Dr. Balladares, who was previously at UCL, argues.
    And this, now, is the next project they are investigating.
    Dr. Balladares concludes: “Future studies should be designed to explore the effects that these games could have on other cognitive and developmental skills.
    “An interesting space for the development of intervention and assessment of board games should open up in the next few years, given the complexity of games and the need to design more and better games for educational purposes.”

  • Machine learning takes materials modeling into new era

    The arrangement of electrons in matter, known as the electronic structure, plays a crucial role in both fundamental and applied research, such as drug design and energy storage. However, the lack of a simulation technique that offers both high fidelity and scalability across different time and length scales has long been a roadblock for the progress of these technologies. Researchers from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Görlitz, Germany, and Sandia National Laboratories in Albuquerque, New Mexico, USA, have now pioneered a machine learning-based simulation method (npj Computational Materials) that supersedes traditional electronic structure simulation techniques. Their Materials Learning Algorithms (MALA) software stack enables access to previously unattainable length scales.
    Electrons are elementary particles of fundamental importance. Their quantum mechanical interactions with one another and with atomic nuclei give rise to a multitude of phenomena observed in chemistry and materials science. Understanding and controlling the electronic structure of matter provides insights into the reactivity of molecules, the structure and energy transport within planets, and the mechanisms of material failure.
    Scientific challenges are increasingly being addressed through computational modeling and simulation, leveraging the capabilities of high-performance computing. However, a significant obstacle to achieving realistic simulations with quantum precision is the lack of a predictive modeling technique that combines high accuracy with scalability across different length and time scales. Classical atomistic simulation methods can handle large and complex systems, but their omission of quantum electronic structure restricts their applicability. Conversely, simulation methods which do not rely on assumptions such as empirical modeling and parameter fitting (first principles methods) provide high fidelity but are computationally demanding. For instance, density functional theory (DFT), a widely used first principles method, exhibits cubic scaling with system size, thus restricting its predictive capabilities to small scales.
    Hybrid approach based on deep learning
    The team of researchers now presented a novel simulation method called the Materials Learning Algorithms (MALA) software stack. In computer science, a software stack is a collection of algorithms and software components that are combined to create a software application for solving a particular problem. Lenz Fiedler, a Ph.D. student and key developer of MALA at CASUS, explains, “MALA integrates machine learning with physics-based approaches to predict the electronic structure of materials. It employs a hybrid approach, utilizing an established machine learning method called deep learning to accurately predict local quantities, complemented by physics algorithms for computing global quantities of interest.”
    The MALA software stack takes the arrangement of atoms in space as input and generates fingerprints known as bispectrum components, which encode the spatial arrangement of atoms around a Cartesian grid point. The machine learning model in MALA is trained to predict the electronic structure based on this atomic neighborhood. A significant advantage of MALA is that its machine learning model is independent of system size, allowing it to be trained on data from small systems and deployed at any scale.
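    The general flow can be pictured with a short sketch. This is a minimal illustration of the local-descriptor idea described above, not the actual MALA code or API: descriptor_of_neighborhood, LocalModel and predict_global_quantity are hypothetical stand-ins, and a simple radial histogram stands in for the bispectrum components.

    ```python
    # Minimal sketch of local-descriptor prediction (illustration only, not MALA's API).
    import numpy as np

    def descriptor_of_neighborhood(atom_positions, grid_point, cutoff=5.0):
        """Encode the atomic environment around one grid point as a fixed-length
        fingerprint (MALA uses bispectrum components; here, simple radial bins)."""
        d = np.linalg.norm(atom_positions - grid_point, axis=1)
        hist, _ = np.histogram(d[d < cutoff], bins=8, range=(0.0, cutoff))
        return hist.astype(float)

    class LocalModel:
        """Stand-in for a trained network mapping fingerprint -> local quantity,
        e.g. the electronic density at that grid point."""
        def __init__(self, weights):
            self.weights = weights

        def predict(self, fingerprint):
            return float(fingerprint @ self.weights)

    def predict_global_quantity(atom_positions, grid_points, model, voxel_volume):
        # Each prediction depends only on a local neighborhood, so a model trained
        # on small cells can be applied to arbitrarily large atomic configurations.
        local_values = np.array(
            [model.predict(descriptor_of_neighborhood(atom_positions, g)) for g in grid_points]
        )
        # Physics-based post-processing: integrate local predictions into a global
        # observable (a plain grid sum stands in for that step here).
        return local_values.sum() * voxel_volume
    ```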
    In their publication, the team of researchers showcased the remarkable effectiveness of this strategy. They achieved a speedup of over 1,000 times for smaller system sizes, consisting of up to a few thousand atoms, compared to conventional algorithms. Furthermore, the team demonstrated MALA’s capability to accurately perform electronic structure calculations at a large scale, involving over 100,000 atoms. Notably, this accomplishment was achieved with modest computational effort, revealing the limitations of conventional DFT codes.
    Attila Cangi, the Acting Department Head of Matter under Extreme Conditions at CASUS, explains: “As the system size increases and more atoms are involved, DFT calculations become impractical, whereas MALA’s speed advantage continues to grow. The key breakthrough of MALA lies in its capability to operate on local atomic environments, enabling accurate numerical predictions that are minimally affected by system size. This groundbreaking achievement opens up computational possibilities that were once considered unattainable.”
    Boost for applied research expected
    Cangi aims to push the boundaries of electronic structure calculations by leveraging machine learning: “We anticipate that MALA will spark a transformation in electronic structure calculations, as we now have a method to simulate significantly larger systems at an unprecedented speed. In the future, researchers will be able to address a broad range of societal challenges based on a significantly improved baseline, including developing new vaccines and novel materials for energy storage, conducting large-scale simulations of semiconductor devices, studying material defects, and exploring chemical reactions for converting the atmospheric greenhouse gas carbon dioxide into climate-friendly minerals.”
    Furthermore, MALA’s approach is particularly suited for high-performance computing (HPC). As the system size grows, MALA enables independent processing on the computational grid it utilizes, effectively leveraging HPC resources, particularly graphical processing units. Siva Rajamanickam, a staff scientist and expert in parallel computing at the Sandia National Laboratories, explains, “MALA’s algorithm for electronic structure calculations maps well to modern HPC systems with distributed accelerators. The capability to decompose work and execute in parallel different grid points across different accelerators makes MALA an ideal match for scalable machine learning on HPC resources, leading to unparalleled speed and efficiency in electronic structure calculations.”
    Apart from the development partners HZDR and Sandia National Laboratories, MALA is already employed by institutions and companies such as the Georgia Institute of Technology, North Carolina A&T State University, SambaNova Systems Inc., and Nvidia Corp.

  • New Zealand kids spending one-third of after-school time on screens

    Regulations are urgently needed to protect children from harm in the unregulated online world, researchers at the University of Otago, New Zealand, say.
    The call comes as the researchers publish the results of their study into the after-school habits of 12-year-olds. Their research, published today in the New Zealand Medical Journal, finds children are spending a third of their after-school time on screens, including more than half their time after 8pm.
    Senior researcher Dr Moira Smith from the University’s Department of Public Health says this is considerably more than the current guidelines, which recommend less than two hours of screen time per day (outside school time) for school-aged children and adolescents.
    The results are from the innovative Kids’Cam project, with the 108 children involved wearing cameras that captured images every seven seconds, offering a unique insight into their everyday lives in 2014 and 2015.
    Children were mostly playing games and watching programmes. For ten per cent of the time the children were using more than one screen.
    Screen use harms children’s health and wellbeing.

    “It is associated with obesity, poor mental wellbeing, poor sleep and mental functioning and lack of physical activity,” Dr Smith says. “It also affects children’s ability to concentrate and regulate their behaviour and emotions.”
    Screen use is now a regular part of children’s everyday lives and is likely to have increased since the Kids’Cam data was collected.
    “Screen use rose rapidly during the COVID-19 pandemic, and children in 2023 are frequently spending time online, particularly on smartphones. According to the latest media use survey, YouTube and Netflix are the most popular websites for watching programmes, with one in three children under 14 using social media, most commonly TikTok, which is rated R13.”
    She says children are being exposed to ads for vaping, alcohol, gambling and junk food, and experiencing sexism, racism and bullying while online.
    “Cyberbullying is particularly high among children in Aotearoa, with one in four parents reporting their child has been subjected to bullying while online.”
    Dr Smith says current New Zealand legislation is outdated and fails to adequately deal with the online world children are being exposed to.
    “While screen use has many benefits, children need to be protected from harm in this largely unregulated space.”
    She says the Government is to be applauded for proposing more regulation of social media in its recent consultation document from the Department of Internal Affairs (DIA), which notes concern about children accessing inappropriate content while online.
    The Otago researchers are currently studying the online worlds of children in Aotearoa using screen capture technology, with the results expected to be published soon.

  • AI finds a way to people’s hearts (literally!)

    AI (artificial intelligence) may sound like a cold robotic system, but Osaka Metropolitan University scientists have shown that it can deliver heartwarming — or, more to the point, “heart-warning” — support. They unveiled an innovative use of AI that classifies cardiac functions and pinpoints valvular heart disease with unprecedented accuracy, demonstrating continued progress in merging the fields of medicine and technology to advance patient care. The results will be published in The Lancet Digital Health.
    Valvular heart disease, one cause of heart failure, is often diagnosed using echocardiography. This technique, however, requires specialized skills, so there is a corresponding shortage of qualified technicians. Meanwhile, chest radiography is one of the most common tests to identify diseases, primarily of the lungs. Even though the heart is also visible in chest radiographs, little was known heretofore about the ability of chest radiographs to detect cardiac function or disease. Chest radiographs, or chest X-Rays, are performed in many hospitals and very little time is required to conduct them, making them highly accessible and reproducible. Accordingly, the research team led by Dr. Daiju Ueda, from the Department of Diagnostic and Interventional Radiology at the Graduate School of Medicine of Osaka Metropolitan University, reckoned that if cardiac function and disease could be determined from chest radiographs, this test could serve as a supplement to echocardiography.
    Dr. Ueda’s team successfully developed a model that utilizes AI to accurately classify cardiac functions and valvular heart diseases from chest radiographs. Since AI trained on a single dataset faces potential bias, leading to low accuracy, the team aimed for multi-institutional data. Accordingly, a total of 22,551 chest radiographs associated with 22,551 echocardiograms were collected from 16,946 patients at four facilities between 2013 and 2021. With the chest radiographs set as input data and the echocardiograms set as output data, the AI model was trained to learn features connecting both datasets.
    The AI model was able to precisely categorize six selected types of valvular heart disease, with the Area Under the Curve, or AUC, ranging from 0.83 to 0.92. (AUC is a rating index that indicates the capability of an AI model, on a scale from 0 to 1, with values closer to 1 being better.) The AUC was 0.92 at a 40% cut-off for detecting left ventricular ejection fraction — an important measure for monitoring cardiac function.
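    For readers unfamiliar with the metric, the snippet below shows how an AUC value of this kind is computed from model scores and ground-truth labels with scikit-learn. The numbers are invented for illustration and are not the study’s data.

    ```python
    # Toy AUC calculation (illustrative data only, unrelated to the study).
    from sklearn.metrics import roc_auc_score

    labels = [1, 0, 1, 1, 0, 0, 1, 0]                          # 1 = disease present, 0 = absent
    scores = [0.91, 0.20, 0.75, 0.40, 0.55, 0.48, 0.88, 0.15]  # model's predicted probabilities

    print(round(roc_auc_score(labels, scores), 2))  # -> 0.88 (1.0 = perfect, 0.5 = chance)
    ```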
    “It took us a very long time to get to these results, but I believe this is significant research,” stated Dr. Ueda. “In addition to improving the efficiency of doctors’ diagnoses, the system might also be used in areas where there are no specialists, in night-time emergencies, and for patients who have difficulty undergoing echocardiography.”

  • Physicists generate the first snapshots of fermion pairs

    When your laptop or smartphone heats up, it’s due to energy that’s lost in translation. The same goes for power lines that transmit electricity between cities. In fact, around 10 percent of the generated energy is lost in the transmission of electricity. That’s because the electrons that carry electric charge do so as free agents, bumping and grazing against other electrons as they move collectively through power cords and transmission lines. All this jostling generates friction, and, ultimately, heat.
    But when electrons pair up, they can rise above the fray and glide through a material without friction. This “superconducting” behavior occurs in a range of materials, though at ultracold temperatures. If these materials can be made to superconduct closer to room temperature, they could pave the way for zero-loss devices, such as heat-free laptops and phones, and ultraefficient power lines. But first, scientists will have to understand how electrons pair up in the first place.
    Now, new snapshots of particles pairing up in a cloud of atoms can provide clues to how electrons pair up in a superconducting material. The snapshots were taken by MIT physicists and are the first images that directly capture the pairing of fermions — a major class of particles that includes electrons, as well as protons, neutrons, and certain types of atoms.
    In this case, the MIT team worked with fermions in the form of potassium-40 atoms, under conditions that simulate the behavior of electrons in certain superconducting materials. They developed a technique to image a supercooled cloud of potassium-40 atoms, which allowed them to observe the particles pairing up, even when separated by a small distance. They could also pick out interesting patterns and behaviors, such as the way pairs formed checkerboards, which were disturbed by lonely singles passing by.
    The observations, reported today in Science, can serve as a visual blueprint for how electrons may pair up in superconducting materials. The results may also help to describe how neutrons pair up to form an intensely dense and churning superfluid within neutron stars.
    “Fermion pairing is at the basis of superconductivity and many phenomena in nuclear physics,” says study author Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT. “But no one had seen this pairing in situ. So it was just breathtaking to then finally see these images onscreen, faithfully.”
    The study’s co-authors include Thomas Hartke, Botond Oreg, Carter Turnbaugh, and Ningyuan Jia, all members of MIT’s Department of Physics, the MIT-Harvard Center for Ultracold Atoms, and the Research Laboratory of Electronics.

    A decent view
    To directly observe electrons pair up is an impossible task. They are simply too small and too fast to capture with existing imaging techniques. To understand their behavior, physicists like Zwierlein have looked to analogous systems of atoms. Both electrons and certain atoms, despite their difference in size, are similar in that they are fermions — particles that exhibit a property known as “half-integer spin.” When fermions of opposite spin interact, they can pair up, as electrons do in superconductors, and as certain atoms do in a cloud of gas.
    Zwierlein’s group has been studying the behavior of potassium-40 atoms, which are fermions that can be prepared in one of two spin states. When a potassium atom of one spin interacts with an atom of the other spin, they can form a pair, similar to superconducting electrons. But under normal, room-temperature conditions, the atoms interact in a blur that is difficult to capture.
    To get a decent view of their behavior, Zwierlein and his colleagues study the particles as a very dilute gas of about 1,000 atoms, which they place under ultracold, nanokelvin conditions that slow the atoms to a crawl. The researchers also contain the gas within an optical lattice, or a grid of laser light that the atoms can hop within, and that the researchers can use as a map to pinpoint the atoms’ precise locations.
    In their new study, the team made enhancements to their existing technique for imaging fermions that enabled them to momentarily freeze the atoms in place, then take snapshots separately of potassium-40 atoms with one particular spin or the other. The researchers could then overlay an image of one atom type over the other, and look to see where the two types paired up, and how.

    “It was bloody difficult to get to a point where we could actually take these images,” Zwierlein says. “You can imagine at first getting big fat holes in your imaging, your atoms running away, nothing is working. We’ve had terribly complicated problems to solve in the lab through the years, and the students had great stamina, and finally, to be able to see these images was absolutely elating.”
    Pair dance
    What the team saw was pairing behavior among the atoms that was predicted by the Hubbard model — a widely held theory believed to hold the key to the behavior of electrons in high-temperature superconductors, materials that exhibit superconductivity at relatively high (though still very cold) temperatures. Predictions of how electrons pair up in these materials have been tested through this model, but never directly observed until now.
    The team created and imaged different clouds of atoms thousands of times and translated each image into a digitized version resembling a grid. Each grid showed the location of atoms of both types (depicted as red versus blue in their paper). From these maps, they were able to see squares in the grid with either a lone red or blue atom, squares where a red and a blue atom paired up locally (depicted as white), and empty squares that contained neither a red nor a blue atom (black).
    Already, individual images show many local pairs, with red and blue atoms in close proximity. By analyzing sets of hundreds of images, the team could show that atoms indeed show up in pairs, at times linking up in a tight pair within one square, and at other times forming looser pairs separated by one or several grid spacings. This physical separation, or “nonlocal pairing,” was predicted by the Hubbard model but never directly observed.
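    To illustrate the kind of counting such digitized grids make possible, here is a toy sketch (our illustration, not the authors’ analysis code) that tallies doubly occupied sites and nearest-neighbour pairs on a small random occupancy grid.

    ```python
    # Toy pair counting on a digitized occupancy grid (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    red = rng.random((8, 8)) < 0.3    # sites holding a spin-up ("red") atom
    blue = rng.random((8, 8)) < 0.3   # sites holding a spin-down ("blue") atom

    local_pairs = np.count_nonzero(red & blue)    # both species on one site ("white")
    empty_sites = np.count_nonzero(~red & ~blue)  # neither species ("black")

    # "Nonlocal" pairs: a lone red atom with a blue atom one lattice spacing away.
    neighbor_blue = (np.roll(blue, 1, 0) | np.roll(blue, -1, 0) |
                     np.roll(blue, 1, 1) | np.roll(blue, -1, 1))
    nonlocal_pairs = np.count_nonzero(red & ~blue & neighbor_blue)

    print(local_pairs, nonlocal_pairs, empty_sites)
    ```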
    The researchers also observed that collections of pairs seemed to form a broader, checkerboard pattern, and that this pattern wobbled in and out of formation as one partner of a pair ventured outside its square and momentarily distorted the checkerboard of other pairings. This phenomenon, known as a “polaron,” was also predicted but never seen directly.
    “In this dynamic soup, the particles are constantly hopping on top of each other, moving away, but never dancing too far from each other,” Zwierlein notes.
    The pairing behavior between these atoms must also occur in superconducting electrons, and Zwierlein says the team’s new snapshots will help to inform scientists’ understanding of high-temperature superconductors, and perhaps provide insight into how these materials might be tuned to higher, more practical temperatures.
    “If you normalize our gas of atoms to the density of electrons in a metal, we think this pairing behavior should occur far above room temperature,” Zwierlein offers. “That gives a lot of hope and confidence that such pairing phenomena can in principle occur at elevated temperatures, and there’s no a priori limit to why there shouldn’t be a room-temperature superconductor one day.”
    This research was supported, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and the Vannevar Bush Faculty Fellowship.

  • New design rule for high-entropy superionic solid-state conductors

    Solid electrolytes with high lithium-ion conductivity can be designed for millimeter-thick battery electrodes by increasing the complexity of their composite superionic crystals, report researchers from Tokyo Tech. This new design rule enables the synthesis of high-entropy active materials while preserving their superionic conduction.
    As the world transitions towards a greener and more sustainable energy economy, reliance on lithium (Li)-ion batteries is expected to rise. Scientists from across the globe are working towards designing smaller yet efficient batteries that can keep up with the ever-increasing demand for energy storage. In recent years, all-solid-state lithium batteries (ASSLBs) have captured research interest due to their unique use of solid electrolytes instead of conventional liquid ones.
    Solid electrolytes not only make the battery safer from leakage and fire-related hazards, but also provide superior energy and power characteristics. However, their stiffness results in poor wetting of the cathode surface and a lack of a homogeneous supply of Li ions to the cathode. This, in turn, leads to a loss of capacity in the solid-state battery. The issue becomes more pronounced in thick cathode electrodes, such as millimeter-thick ones, which are a more advantageous electrode configuration for realizing inexpensive, high-energy-density battery packages than conventional electrodes with a typical thickness of less than 0.1 mm.
    Fortunately, a recent study published in Science found a way to overcome this problem. The paper, authored by a team of researchers led by Prof. Ryoji Kanno from Tokyo Institute of Technology (Tokyo Tech), describes a new strategy to produce solid electrolytes with enhanced Li-ion conductivity. Their work establishes a design rule for synthesizing high-entropy crystals of lithium superionic conductors via a multi-substitution approach. "Many studies have shown that inorganic ionic conductors tend to show better ion conductivity after multi-element substitution, probably because of the flattened potential barrier of Li-ion migration, which is essential for better ion conductivity," points out Prof. Kanno. This was where they started their research.
    For the design of their new material, the team took inspiration from the chemical compositions of two well-known Li-based solid electrolytes: argyrodite-type (Li6PS5Cl) and LGPS-type (Li10GeP2S12) superionic crystals. They modified the LGPS-type Li9.54Si1.74P1.44S11.7Cl0.3 via multi-substitution and synthesized a series of crystals with the composition Li9.54[Si1−δMδ]1.74P1.44S11.1Br0.3O0.6 (M = Ge, Sn; 0 ≤ δ ≤ 1).
    The researchers used the crystal with M = Ge and δ = 0.4 as a catholyte in ASSLBs with 1- and 0.8-millimeter-thick cathodes. These exhibited discharge capacities of 26.4 mAh cm−2 at 25 °C (1 mm) and 17.3 mAh cm−2 at −10 °C (0.8 mm), with area-specific capacities 1.8 and 5.3 times larger, respectively, than those reported for previous state-of-the-art ASSLBs. Theoretical calculations suggested that the enhanced conductivity of the solid electrolyte could result from a flattening of the energy barrier for ion migration, caused by a small degree of chemical substitution in the above-mentioned crystal.
    This study provides a new way of preparing high-entropy solid electrolytes for millimeter-thick electrodes while preserving their superionic conduction pathways. "In effect, the proposed design rule lays a solid groundwork for exploring new superionic conductors with superior charge-discharge performance, even at room temperature," concludes Prof. Kanno.

  • Number cruncher calculates whether whales are acting weirdly

    We humans can be a scary acquaintance for whales in the wild. This includes marine biologists tagging them with measuring devices to understand them better. These experiences can make whales behave erratically for a while. Such behaviour can affect research quality and highlights an animal ethics dilemma. Now, University of Copenhagen researchers have figured out how to solve the problems with math.
    Maybe you have tried taking a howling pooch or cranky cat to the vet. Regardless of your noblest intentions, your pet’s experience may have been equally unpleasant. Animals react to the unknown in their own way. The case is no different for cetaceans like narwhals and bowhead whales when they encounter human-generated noises such as ship noise or mining blasts in the North Atlantic — or when they are caught by well-meaning marine biologists who just want to get to know them better.
    When biologists ‘tag’ whales with measuring devices, the animals react by behaving unusually — abnormally. For example, for a while after being tagged, they may perform many atypical shallow dives and quick jerks. Such behaviour is misleading when the goal is to study the animal’s normal and natural behaviour.
    Help with the problem has come from an unusual corner.
    “Biologists seek to understand animals as natural beings, but their reactions turn into unnatural behaviour that creates noise in the dataset. Because of this, a lot of data from just after whales are tagged ends up getting discarded. In this study, we have proposed a mathematical approach using statistical methods that can determine exactly how much data to keep,” says PhD student Lars Reiter from the Department of Mathematics.
    Valuable for humans and animals alike
    With two statistical calculations, the researcher has found a way to estimate when whales like narwhals and bowhead whales will return to their natural behaviour after being tagged. It is a method that can also be used to study how animals respond to other types of disturbances.

    “This research is extremely valuable to us as marine biologists who are interested in the behaviour and well-being of whales. It provides us with a standardised approach by which to distinguish between natural behaviour and affected behaviour in whales. Thus far, we’ve made individual estimates that are more or less spot on,” says marine biologist Outi Tervo from the Greenland Institute of Natural Resources, who collaborated with the mathematicians on the study.
    The statistical method allows researchers to avoid discarding too much or too little data. If too much data is kept, it can interfere with the research results, and if too much is discarded, it comes at a cost to both the animals and the humans.
    “It really matters in terms of research, but also financially. And not least, it means something for animal welfare. If we throw away data unnecessarily, more whales will eventually have to go through the experience for us to conduct this research, which is ultimately meant to benefit the animals,” says Outi Tervo.
    Idea came from a parliamentary election
    A whale’s behaviour does not go from abnormal to normal with a flick of the tail. Behaviour normalizes gradually, typically over a day — and in a few cases over a longer period of time. During this transition, a whale’s behaviour manifests itself on both sides of the range designated as normal whale behaviour. So how do scientists figure out where to make the cut?

    “The idea came to me while I was standing in the voting booth during parliamentary elections. Borrowing from the logic of the electoral system, you can consider it as if the whales — or these data points which show the whale’s behaviour — vote on whether they are in or out of their normal range,” explains Lars Reiter.
    By recording 1 positive “vote” when the behaviour is within the normal range, and 1 negative “vote” when outside, the scientists can add up all the votes and find the moment at which the number of votes goes from predominantly negative to positive.
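    One way to picture this voting logic is the short sketch below. It is our reading of the idea rather than the authors’ code: each measurement votes +1 if it falls inside an assumed normal range and -1 otherwise, and the cut-off is placed where the running vote total stops falling and starts rising.

    ```python
    # Sketch of the voting idea (our illustration, not the study's implementation).
    import numpy as np

    def cutoff_index(measurements, lower, upper):
        votes = np.where((measurements >= lower) & (measurements <= upper), 1, -1)
        running = np.cumsum(votes)
        # The running total falls while behaviour is mostly outside the normal range
        # and rises once it is mostly inside; its minimum marks the switch point.
        return int(np.argmin(running)) + 1

    # Example: shallow-dive counts per hour, with 0-3 assumed to be the normal range.
    dives = np.array([9, 8, 7, 6, 4, 2, 1, 2, 3, 1])
    print(cutoff_index(dives, 0, 3))  # -> 5: data from hour 5 onward is treated as normal
    ```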
    The researchers use two approaches to determine normal whale behaviour: they look partly at the whale’s diving pattern and partly at its acceleration and fine motor skills.
    How to calculate the behaviour of animals statistically
    Sometimes a whale hunts in the deep, while at other times it cruises quietly at the surface. The activity that a whale is engaged in is crucial for understanding its normal energy level. What is new about Lars Reiter’s method is that it takes this into account:
    “Where previous research focused on the mean behavior, we instead situate a whale in an activity based on its movements — where it is assessed based on a normal value for acceleration that matches the specific activity being engaged in. We do this by using what are known as quantiles, instead of averages, because they allow us to focus on behavioural extremes. For example, hunting and resting are opposing extremes in terms of energy levels,” explains Lars Reiter.
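    As a small illustration of the quantile idea (again our sketch, not the study’s code), activity-specific “normal” bounds can be taken from the tails of a baseline distribution rather than from its mean:

    ```python
    # Quantile-based normal range for one activity (illustrative data only).
    import numpy as np

    baseline_jerk = np.random.default_rng(1).gamma(2.0, 1.0, 500)  # stand-in for acceleration "jerk" values
    low, high = np.quantile(baseline_jerk, [0.05, 0.95])           # normal range = central 90% of baseline

    new_value = 7.2
    print(low <= new_value <= high)  # False -> flagged as outside normal behaviour for this activity
    ```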
    When the focus is on the whale’s diving profile, on the other hand, you look at the pattern formed by the whale’s overall activities. By combining depth and time, one can assess whether the distribution of different dive types is natural.
    Wiser about the animals’ hardships and better at avoiding them
    According to the marine biologist, the data-based approach represented by the statistical method also means that researchers can now develop better, more gentle ways of tagging.
    “Based on this study, we already know that the amount of time we spend putting the equipment on is an important factor for how much the animals are affected afterwards. Therefore, we can set up some time limits — where we stop and set the whale free if it takes more than X number of minutes allowed,” says Outi Tervo.
    A shift away from individual estimates to a mathematical standard could also mean better assessments from the veterinary oversight that tag-using research projects are required to go through.
    “The method will make it so that ethical approval from a veterinary inspection is more data-based and precise. So, there is no doubt that this research is a step forward for animal welfare,” says the marine biologist.
    Extra info: An important instrument for a future with less ice and more people
    The natural Arctic habitat of narwhals and bowhead whales is changing due to climate change. Annual ice shrinkage and increasing human activity are taking place in areas that whales once had all to themselves. The researchers’ method can become an important instrument and contribute to a greater understanding of the consequences.
    “It allows us to study how whales are impacted by various human activities. They can be external sources of noise that we can situate in time and location, such as a blast or a ship passing by. Or sounds and activities that we emit ourselves. Lars’ algorithm lets us get a clear picture of how it all affects the animals,” says Outi Tervo.
    Increased activity will lead to more ocean noise, which is of concern to marine biologists with regards to how it will affect large marine animals like narwhal, which are incredibly sensitive to sound. Co-author and supervisor Professor Susanne Ditlevsen believes that the studies and new method will become more important in the years ahead.
    “Climate change is leading to increased anthropogenic activity in Arctic whale habitats. Melting ice means that areas which were once impassable can now be reached by humans. We would like to assess whether it scares and disturbs the animals, but it is not clear how. The new methods can be used to assess at what distance from the animals’ habitat various activities should take place,” says Susanne Ditlevsen.
    Facts: Statistical method with two mathematical calculations and one intersection.
    The statistical method can generally be understood as calculations on two types of tagging data — acceleration and depth — and a way of adding them up that finds the optimal intersection.
    1. Acceleration tells about the energy level and whale movements (“jerks”). The indicators for natural behaviour are divided according to whale activity, so that, for example, a high energy level is recorded as natural in connection with hunting, but not in connection with rest.
    2. The whale’s diving profile is measured in terms of depth and time spent on a dive. Dive data from the first 40 hours after tagging show a pattern of different types of dives — e.g., U-dives, where the whale stays at depth for some time, or V-dives, where the whale resurfaces quickly. The pattern is compared with normal values measured after the 40 hours.
    3. The cut-off point for when the whale is back in normal behaviour is found by counting the individual measurements as “voting for or against” normal behaviour. As such, the researchers find the optimal place to divide the research data into natural and influenced behaviour.
    About the study
    The study is part of a larger research collaboration between the Greenland Institute of Natural Resources and the University of Copenhagen’s Department of Mathematics that focuses on the Arctic’s large marine mammals.
    The researchers include Lars Reiter Nielsen and Susanne Ditlevsen from the University of Copenhagen; Outi M. Tervo and Mads Peter Heide-Jørgensen from the Greenland Institute of Natural Resources; and Susanna B. Blackwell from Greeneridge Sciences, Inc., Santa Barbara, USA.

  • Researchers calculate economic value of temporary carbon reduction with ‘Social Value of Offsets’ formula

    A new study identifies how to calculate the economic value of temporarily reducing carbon emissions through carbon offsetting.
    The Social Value of Offsets (SVO) is an economic framework that will help policymakers calculate how much carbon should be stored in temporary offsets to make it equivalent to a permanent CO2 emission.
    Using the SVO metric, the researchers estimate that an offset sequestering one ton of carbon for 50 years is equivalent to between 0.3 and 0.5 tons permanently locked away, taking into account a range of factors for different risks, permanence and climate scenarios.
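    A back-of-the-envelope calculation shows why figures in that range are plausible. This is a simplifying assumption of ours, not the SVO formula from the paper: if marginal damages are constant and discounted at a rate r, postponing one ton of emissions by T years avoids a fraction 1 - exp(-rT) of the damages of a permanent emission.

    ```python
    # Back-of-envelope illustration only; not the paper's SVO formula.
    import math

    r, T = 0.01, 50                        # illustrative discount rate and project length in years
    print(round(1 - math.exp(-r * T), 2))  # -> 0.39, within the reported 0.3-0.5 range
    ```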
    Offsets are a key part of Paris-compliant net zero strategies, but many offsetting projects fail and there is never a guarantee on how long an offset will sequester carbon for — making it difficult to measure the economic damage avoided.
    The study, published in Nature, sets out the risks and uncertainties of offsetting, which occur due to the unregulated nature of the global offsets market.
    Risk factors to projects in tropical forests, for example, can include the lack of strong institutions on the ground to monitor, enforce and account for emissions sequestered, as well as the possibility of fires and disease.

    There are also risks in how emissions reductions are reported, as well as that of ‘non-additionality’ — when emissions reductions would have happened irrespective of the offsetting.
    Other frameworks count the physical units of carbon but SVO is unique in that it is an economic framework where the value of temporary emissions reductions is measured as the value of the damages avoided to the economy during the length of the offsetting project.
    The researchers say this will potentially make it easier to compare offsetting schemes, allowing anyone offsetting their carbon emissions to be able to weigh up the risks involved and decide how much carbon they would need to offset in temporary schemes to make up for a permanent carbon emission.
    Professor Ben Groom, Dragon Capital Chair in Environmental Economics at the University of Exeter Business School, said: “Our analysis shows that a carbon emission today which is offset by a temporary project can be thought of as a postponed emission with the same warming effect when the project ends, but with less warming during the project.
    “The Social Value of Offsets (SVO) stems from the value of delaying emissions and damages, and this depends on how impermanent, risky or additional they are. Valuing offsets using the SVO then provides a means of comparing offsets with different qualities in terms of the economic damages avoided.”
    Professor Groom explains why delaying emissions is important, both in an economic and physical sense. “With a project that stores carbon and releases it 50 years later, the net carbon reduction is always going to be zero, so some may say it’s as if it never happened.”

    “But what that ignores is the flow of damages that you’ve avoided in the meantime, which could be important, because certain responses to climate change, like the melting of the ice caps, are responsive, depending on how long temperatures have been at a particular level.
    “Delaying emissions is also important because economic processes could be happening in the background that make carbon removal cheaper in the future so offsetting could act as a temporary solution allowing the action point to be delayed until a time when it is cheaper to act.
    “The question we’re answering with SVO is how valuable this temporary period in which you avoid damages is.”
    The IPCC has previously noted that meeting the objectives of the Paris Agreement will require some offsetting, though some organisations suggest that offsetting should be largely avoided due to the unregulated, impermanent and risky nature of the offset market.
    However, this study illustrates that in principle delaying emissions, even when offsetting projects are temporary and risky, is valuable in economic terms.
    The economists believe the SVO metric can play an important role in appraising net-zero climate policy and harmonising the offset market, and has policy applications beyond the valuation of offsets.
    These include calculating the benefits-to-cost ratio of an offset or any temporary carbon storage solution allowing for comparison to alternative technologies for mitigating climate change.
    The SVO formula can also be applied to Life-Cycle Analysis of biofuels as well as used to calculate the price of carbon debt, using the rule of thumb that a company that emits a ton of carbon today and commits to a permanent removal in 50 years’ time will pay 33% of the carbon price today to cover the damages of temporary atmospheric storage.
    The Social Value of Offsets, by Professor Ben Groom, Dragon Capital Chair in Environmental Economics at the University of Exeter Business School and Professor Frank Venmans from the Grantham Research Institute on Climate Change and the Environment at LSE, is published in Nature.