More stories

  • AI tool decodes brain cancer’s genome during surgery

    Scientists have designed an AI tool that can rapidly decode a brain tumor’s DNA to determine its molecular identity during surgery — critical information that under the current approach can take a few days and up to a few weeks.
    Knowing a tumor’s molecular type enables neurosurgeons to make decisions such as how much brain tissue to remove and whether to place tumor-killing drugs directly into the brain — while the patient is still on the operating table.
    A report on the work, led by Harvard Medical School researchers, is published July 7 in the journal Med.
    Accurate molecular diagnosis — which details DNA alterations in a cell — during surgery can help a neurosurgeon decide how much brain tissue to remove. Removing too much when the tumor is less aggressive can affect a patient’s neurologic and cognitive function. Likewise, removing too little when the tumor is highly aggressive may leave behind malignant tissue that can grow and spread quickly.
    “Right now, even state-of-the-art clinical practice cannot profile tumors molecularly during surgery. Our tool overcomes this challenge by extracting thus-far untapped biomedical signals from frozen pathology slides,” said study senior author Kun-Hsing Yu, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.
    Knowing a tumor’s molecular identity during surgery is also valuable because certain tumors benefit from on-the-spot treatment with drug-coated wafers placed directly into the brain at the time of the operation, Yu said.

    “The ability to determine intraoperative molecular diagnosis in real time, during surgery, can propel the development of real-time precision oncology,” Yu added.
    The standard intraoperative diagnostic approach used now involves taking brain tissue, freezing it, and examining it under a microscope. A major drawback is that freezing the tissue tends to alter the appearance of cells under a microscope and can interfere with the accuracy of clinical evaluation. Furthermore, the human eye, even when using potent microscopes, cannot reliably detect subtle genomic variations on a slide.
    The new AI approach overcomes these challenges.
    The tool, called CHARM (Cryosection Histopathology Assessment and Review Machine), is freely available to other researchers. It still has to be clinically validated through testing in real-world settings and cleared by the FDA before deployment in hospitals, the research team said.
    Cracking cancer’s molecular code
    Recent advances in genomics have allowed pathologists to differentiate the molecular signatures — and the behaviors that such signatures portend — across various types of brain cancer as well as within specific types of brain cancer. For example, glioma — the most aggressive brain tumor and the most common form of brain cancer — has three main subvariants that carry different molecular markers and have different propensities for growth and spread.

    The new tool’s ability to expedite molecular diagnosis could be particularly valuable in areas with limited access to technology to perform rapid cancer genetic sequencing.
    Beyond the decisions made during surgery, knowledge of a tumor’s molecular type provides clues about its aggressiveness, behavior, and likely response to various treatments. Such knowledge can inform post-operative decisions.
    Furthermore, the new tool enables during-surgery diagnoses aligned with the World Health Organization’s recently updated classification system for diagnosing and grading the severity of gliomas, which calls for such diagnoses to be made based on a tumor’s genomic profile.
    Training CHARM
    CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations at 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.
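    To give a concrete sense of what this kind of slide-level classification involves, the sketch below shows a minimal training loop for assigning histopathology image tiles to three molecular subtypes. It is an illustrative sketch only, not the authors’ CHARM implementation: the ResNet-18 backbone, the three-class label set, and the GliomaTileDataset class are assumptions made for the example.

        # Minimal sketch (not the CHARM codebase): tile-level classification of
        # frozen-section histology images into three hypothetical molecular subtypes.
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader, Dataset
        from torchvision import models, transforms
        from PIL import Image

        class GliomaTileDataset(Dataset):
            """Hypothetical dataset of (tile_path, subtype_label) pairs, labels in {0, 1, 2}."""
            def __init__(self, samples):
                self.samples = samples
                self.tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

            def __len__(self):
                return len(self.samples)

            def __getitem__(self, idx):
                path, label = self.samples[idx]
                return self.tf(Image.open(path).convert("RGB")), label

        def train_one_epoch(model, loader, optimizer, device="cpu"):
            criterion = nn.CrossEntropyLoss()
            model.train()
            for tiles, labels in loader:
                tiles, labels = tiles.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(tiles), labels)
                loss.backward()
                optimizer.step()

        # Three output classes standing in for the three glioma subvariants mentioned above.
        model = models.resnet18()
        model.fc = nn.Linear(model.fc.in_features, 3)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        # train_one_epoch(model, DataLoader(GliomaTileDataset(samples), batch_size=32, shuffle=True), optimizer)

    A slide-level tool would typically also aggregate many tile-level predictions into one call for the whole sample; that step is omitted here.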
    Going a step further, the tool successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types.
    The tool was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these changes also signals different propensity for growth, spread, and treatment response.
    The tool further connected the appearance of the cells — the shape of their nuclei, the presence of edema around the cells — with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell’s appearance relates to the molecular type of a tumor.
    This ability to assess the broader context around the image renders the model more accurate and closer to how a human pathologist would visually assess a tumor sample, Yu said.
    The researchers say that while the model was trained and tested on glioma samples, it could be successfully retrained to identify other brain cancer subtypes.
    Scientists have already designed AI models to profile other types of cancer — colon, lung, breast — but gliomas have remained particularly challenging due to their molecular complexity and huge variation in tumor cells’ shape and appearance.
    The CHARM tool would have to be retrained periodically to reflect new disease classifications as they emerge from new knowledge, Yu said.
    “Just like human clinicians who must engage in ongoing education and training, AI tools must keep up with the latest knowledge to remain at peak performance.”
    Authorship, funding, disclosures
    Coinvestigators included MacLean P. Nasrallah, Junhan Zhao, Cheng Che Tsai, David Meredith, Eliana Marostica, Keith L. Ligon, and Jeffrey A. Golden.
    This work was supported in part by the National Institute of General Medical Sciences grant R35GM142879, the Google Research Scholar Award, the Blavatnik Center for Computational Biomedicine Award, the Partners Innovation Discovery Grant, and the Schlager Family Award for Early-Stage Digital Health Innovations.

  • Board games are boosting math ability in young children

    Board games based on numbers, like Monopoly, Othello and Chutes and Ladders, make young children better at math, according to a comprehensive review of research published on the topic over the last 23 years.
    Board games are already known to enhance learning and development including reading and literacy.
    Now this new study, published in the peer-reviewed journal Early Years, finds that, for three- to nine-year-olds, number-based board games help to improve counting, addition, and the ability to recognize whether a number is higher or lower than another.
    The researchers say children benefit from programs — or interventions — where they play board games a few times a week supervised by a teacher or another trained adult.
    “Board games enhance mathematical abilities for young children,” says lead author Dr. Jaime Balladares, from Pontificia Universidad Católica de Chile, in Santiago, Chile.
    “Using board games can be considered a strategy with potential effects on basic and complex math skills.

    “Board games can easily be adapted to include learning objectives related to mathematical skills or other domains.”
    Games where players take turns to move pieces around a board differ from those involving specific skills or gambling.
    Board game rules are fixed, which limits a player’s activities, and the moves on the board usually determine the overall playing situation.
    However, preschools rarely use board games. This study aimed to compile the available evidence of their effects on children.
    The researchers set out to investigate the scale of the effects of physical board games in promoting learning in young children.

    They based their findings on a review of 19 studies published from 2000 onwards involving children aged from three to nine years. All except one study focused on the relationship between board games and mathematical skills.
    All children participating in the studies received special board game sessions which took place on average twice a week for 20 minutes over one-and-a-half months. Teachers, therapists, or parents were among the adults who led these sessions.
    In some of the 19 studies, children were assigned either to a number-based board game or to a board game that did not focus on numeracy skills. In others, all children played number board games but were allocated different types, e.g. dominoes.
    All children were assessed on their math performance before and after the intervention sessions which were designed to encourage skills such as counting out loud.
    The authors rated success according to four categories, including basic numeric competency, such as the ability to name numbers, and basic number comprehension, e.g. ‘nine is greater than three’.
    The other categories were deepened number comprehension — where a child can accurately add and subtract — and interest in mathematics.
    In some cases, parents attended a training session to learn arithmetic that they could then use in the games.
    Results showed that math skills improved significantly after the sessions among children for more than half (52%) of the tasks analyzed.
    In nearly a third (32%) of cases, children in the intervention groups gained better results than those who did not take part in the board game intervention.
    The results also show that, in the studies analyzed to date, board games in the language and literacy areas, while implemented, did not include a scientific evaluation (i.e. comparing control with intervention groups, or pre- and post-intervention performance) of their impact on children.
    Designing and implementing board games along with scientific procedures to evaluate their efficacy, therefore, are “urgent tasks to develop in the next few years,” Dr. Balladares, who was previously at UCL, argues.
    And this, now, is the next project they are investigating.
    Dr. Balladares concludes: “Future studies should be designed to explore the effects that these games could have on other cognitive and developmental skills.
    “An interesting space for the development of intervention and assessment of board games should open up in the next few years, given the complexity of games and the need to design more and better games for educational purposes.”

  • Machine learning takes materials modeling into new era

    The arrangement of electrons in matter, known as the electronic structure, plays a crucial role both in fundamental research and in applied fields such as drug design and energy storage. However, the lack of a simulation technique that offers both high fidelity and scalability across different time and length scales has long been a roadblock for the progress of these technologies. Researchers from the Center for Advanced Systems Understanding (CASUS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Görlitz, Germany, and Sandia National Laboratories in Albuquerque, New Mexico, USA, have now pioneered a machine learning-based simulation method (npj Computational Materials) that supersedes traditional electronic structure simulation techniques. Their Materials Learning Algorithms (MALA) software stack enables access to previously unattainable length scales.
    Electrons are elementary particles of fundamental importance. Their quantum mechanical interactions with one another and with atomic nuclei give rise to a multitude of phenomena observed in chemistry and materials science. Understanding and controlling the electronic structure of matter provides insights into the reactivity of molecules, the structure and energy transport within planets, and the mechanisms of material failure.
    Scientific challenges are increasingly being addressed through computational modeling and simulation, leveraging the capabilities of high-performance computing. However, a significant obstacle to achieving realistic simulations with quantum precision is the lack of a predictive modeling technique that combines high accuracy with scalability across different length and time scales. Classical atomistic simulation methods can handle large and complex systems, but their omission of quantum electronic structure restricts their applicability. Conversely, simulation methods which do not rely on assumptions such as empirical modeling and parameter fitting (first principles methods) provide high fidelity but are computationally demanding. For instance, density functional theory (DFT), a widely used first principles method, exhibits cubic scaling with system size, thus restricting its predictive capabilities to small scales.
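    To make the scaling argument concrete (a back-of-the-envelope illustration, not a result from the paper; the near-linear cost of a local learned model is an assumption based on the locality described below): if the cost of a DFT calculation grows with the cube of the number of atoms N, while a model that only evaluates local atomic environments grows roughly linearly with N, then doubling the system size gives

        t_DFT(2N) / t_DFT(N) ≈ (2N)^3 / N^3 = 8,    whereas    t_local(2N) / t_local(N) ≈ 2,

    i.e., an eightfold cost increase for DFT but only a doubling for the local approach, a gap that widens quickly as systems grow.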
    Hybrid approach based on deep learning
    The team of researchers now presented a novel simulation method called the Materials Learning Algorithms (MALA) software stack. In computer science, a software stack is a collection of algorithms and software components that are combined to create a software application for solving a particular problem. Lenz Fiedler, a Ph.D. student and key developer of MALA at CASUS, explains, “MALA integrates machine learning with physics-based approaches to predict the electronic structure of materials. It employs a hybrid approach, utilizing an established machine learning method called deep learning to accurately predict local quantities, complemented by physics algorithms for computing global quantities of interest.”
    The MALA software stack takes the arrangement of atoms in space as input and generates fingerprints known as bispectrum components, which encode the spatial arrangement of atoms around a Cartesian grid point. The machine learning model in MALA is trained to predict the electronic structure based on this atomic neighborhood. A significant advantage of MALA is its machine learning model’s ability to be independent of the system size, allowing it to be trained on data from small systems and deployed at any scale.
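    As a rough illustration of that workflow (a sketch under stated assumptions, not the actual MALA code or API), a per-grid-point model can be written as a small neural network that maps each grid point’s atomic-environment fingerprint to a local quantity such as the electronic density there, with global quantities then obtained by integrating over the grid. The descriptor length, network shape, and all names below are invented for the example.

        # Sketch only: a size-independent, per-grid-point predictor in the spirit of the
        # approach described above. Descriptor length, network size, and names are invented.
        import numpy as np
        import torch
        import torch.nn as nn

        N_DESCRIPTORS = 91          # length of the fingerprint vector per grid point (assumed)

        class LocalDensityNet(nn.Module):
            """Maps one grid point's atomic-environment fingerprint to a local density value."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(N_DESCRIPTORS, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1),
                )

            def forward(self, descriptors):                 # (n_grid_points, N_DESCRIPTORS)
                return self.net(descriptors).squeeze(-1)

        def predict_density(model, descriptors, grid_volume_per_point):
            """Per-grid-point predictions; a global quantity (total electron count) follows
            by integrating the predicted density over the grid."""
            with torch.no_grad():
                density = model(torch.as_tensor(descriptors, dtype=torch.float32))
            n_electrons = float(density.sum()) * grid_volume_per_point
            return density.numpy(), n_electrons

        # Because the model only ever sees one local environment at a time, the same network
        # trained on a small cell can be evaluated on an arbitrarily large grid.
        model = LocalDensityNet()
        fake_descriptors = np.random.rand(10_000, N_DESCRIPTORS)   # stand-in for bispectrum components
        density, n_elec = predict_density(model, fake_descriptors, grid_volume_per_point=1e-3)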
    In their publication, the team of researchers showcased the remarkable effectiveness of this strategy. They achieved a speedup of over 1,000 times for smaller system sizes, consisting of up to a few thousand atoms, compared to conventional algorithms. Furthermore, the team demonstrated MALA’s capability to accurately perform electronic structure calculations at a large scale, involving over 100,000 atoms. Notably, this accomplishment was achieved with modest computational effort, revealing the limitations of conventional DFT codes.
    Attila Cangi, the Acting Department Head of Matter under Extreme Conditions at CASUS, explains: “As the system size increases and more atoms are involved, DFT calculations become impractical, whereas MALA’s speed advantage continues to grow. The key breakthrough of MALA lies in its capability to operate on local atomic environments, enabling accurate numerical predictions that are minimally affected by system size. This groundbreaking achievement opens up computational possibilities that were once considered unattainable.”
    Boost for applied research expected
    Cangi aims to push the boundaries of electronic structure calculations by leveraging machine learning: “We anticipate that MALA will spark a transformation in electronic structure calculations, as we now have a method to simulate significantly larger systems at an unprecedented speed. In the future, researchers will be able to address a broad range of societal challenges based on a significantly improved baseline, including developing new vaccines and novel materials for energy storage, conducting large-scale simulations of semiconductor devices, studying material defects, and exploring chemical reactions for converting the atmospheric greenhouse gas carbon dioxide into climate-friendly minerals.”
    Furthermore, MALA’s approach is particularly suited for high-performance computing (HPC). As the system size grows, MALA enables independent processing on the computational grid it utilizes, effectively leveraging HPC resources, particularly graphical processing units. Siva Rajamanickam, a staff scientist and expert in parallel computing at the Sandia National Laboratories, explains, “MALA’s algorithm for electronic structure calculations maps well to modern HPC systems with distributed accelerators. The capability to decompose work and execute in parallel different grid points across different accelerators makes MALA an ideal match for scalable machine learning on HPC resources, leading to unparalleled speed and efficiency in electronic structure calculations.”
    Apart from the developing partners HZDR and Sandia National Laboratories, MALA is already employed by institutions and companies such as the Georgia Institute of Technology, the North Carolina A&T State University, Sambanova Systems Inc., and Nvidia Corp.

  • New Zealand kids spending one-third of after-school time on screens

    Regulations are urgently needed to protect children from harm in the unregulated online world, researchers at the University of Otago, New Zealand, say.
    The call comes as the researchers publish the results of their study into the after-school habits of 12-year-olds. Their research, published today in the New Zealand Medical Journal, finds children are spending a third of their after-school time on screens, including more than half their time after 8pm.
    Senior researcher Dr Moira Smith from the University’s Department of Public Health says this is considerably more than the current guidelines, which recommend less than two hours of screen time per day (outside school time) for school-aged children and adolescents.
    The results are from the innovative Kids’Cam project, with the 108 children involved wearing cameras that captured images every seven seconds, offering a unique insight into their everyday lives in 2014 and 2015.
    Children were mostly playing games and watching programmes. For ten per cent of the time the children were using more than one screen.
    Screen use harms children’s health and wellbeing.

    “It is associated with obesity, poor mental wellbeing, poor sleep and mental functioning and lack of physical activity,” Dr Smith says. “It also affects children’s ability to concentrate and regulate their behaviour and emotions.”
    Screen use is now a regular part of children’s everyday lives and is likely to have increased since the Kids’Cam data was collected.
    “Screen use rose rapidly during the COVID-19 pandemic, and children in 2023 are frequently spending time online, particularly on smartphones. According to the latest media use survey, YouTube and Netflix are the most popular websites for watching programmes, with one in three children under 14 using social media, most commonly TikTok, which is rated R13.”
    She says children are being exposed to ads for vaping, alcohol, gambling and junk food, and experiencing sexism, racism and bullying while online.
    “Cyberbullying is particularly high among children in Aotearoa, with one in four parents reporting their child has been subjected to bullying while online.”
    Dr Smith says current New Zealand legislation is outdated and fails to adequately deal with the online world children are being exposed to.
    “While screen use has many benefits, children need to be protected from harm in this largely unregulated space.”
    She says the Government is to be applauded for proposing more regulation of social media in its recent consultation document from the Department of Internal Affairs (DIA), which notes concern about children accessing inappropriate content while online.
    The Otago researchers are currently studying the online worlds of children in Aotearoa using screen capture technology, with the results expected to be published soon.

  • AI finds a way to people’s hearts (literally!)

    AI (artificial intelligence) may sound like a cold robotic system, but Osaka Metropolitan University scientists have shown that it can deliver heartwarming — or, more to the point, “heart-warning” — support. They unveiled an innovative use of AI that classifies cardiac functions and pinpoints valvular heart disease with unprecedented accuracy, demonstrating continued progress in merging the fields of medicine and technology to advance patient care. The results will be published in The Lancet Digital Health.
    Valvular heart disease, one cause of heart failure, is often diagnosed using echocardiography. This technique, however, requires specialized skills, so there is a corresponding shortage of qualified technicians. Meanwhile, chest radiography is one of the most common tests to identify diseases, primarily of the lungs. Even though the heart is also visible in chest radiographs, little was known heretofore about the ability of chest radiographs to detect cardiac function or disease. Chest radiographs, or chest X-rays, are performed in many hospitals and very little time is required to conduct them, making them highly accessible and reproducible. Accordingly, the research team led by Dr. Daiju Ueda, from the Department of Diagnostic and Interventional Radiology at the Graduate School of Medicine of Osaka Metropolitan University, reckoned that if cardiac function and disease could be determined from chest radiographs, this test could serve as a supplement to echocardiography.
    Dr. Ueda’s team successfully developed a model that utilizes AI to accurately classify cardiac functions and valvular heart diseases from chest radiographs. Since AI trained on a single dataset faces potential bias, leading to low accuracy, the team aimed for multi-institutional data. Accordingly, a total of 22,551 chest radiographs associated with 22,551 echocardiograms were collected from 16,946 patients at four facilities between 2013 and 2021. With the chest radiographs set as input data and the echocardiograms set as output data, the AI model was trained to learn features connecting both datasets.
    The AI model was able to precisely categorize six selected types of valvular heart disease, with the Area Under the Curve, or AUC, ranging from 0.83 to 0.92. (The AUC is a rating index of an AI model’s capability, ranging from 0 to 1, with values closer to 1 indicating better performance.) The AUC was 0.92 at a 40% cut-off for detecting left ventricular ejection fraction, an important measure for monitoring cardiac function.
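    For readers unfamiliar with the metric, the snippet below shows how an AUC figure of this kind is typically computed. It is a generic illustration with made-up labels and scores, not the study’s code or data.

        # Generic illustration of computing an AUC (not the study's code; data are random placeholders).
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Hypothetical binary ground truth: 1 = valvular disease present on echocardiography.
        y_true = rng.integers(0, 2, size=500)
        # Hypothetical model scores for the positive class, derived from chest radiographs.
        y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=500), 0, 1)

        auc = roc_auc_score(y_true, y_score)   # 1.0 = perfect ranking, 0.5 = chance level
        print(f"AUC = {auc:.2f}")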
    “It took us a very long time to get to these results, but I believe this is significant research,” stated Dr. Ueda. “In addition to improving the efficiency of doctors’ diagnoses, the system might also be used in areas where there are no specialists, in night-time emergencies, and for patients who have difficulty undergoing echocardiography.”

  • Physicists generate the first snapshots of fermion pairs

    When your laptop or smartphone heats up, it’s due to energy that’s lost in translation. The same goes for power lines that transmit electricity between cities. In fact, around 10 percent of the generated energy is lost in the transmission of electricity. That’s because the electrons that carry electric charge do so as free agents, bumping and grazing against other electrons as they move collectively through power cords and transmission lines. All this jostling generates friction, and, ultimately, heat.
    But when electrons pair up, they can rise above the fray and glide through a material without friction. This “superconducting” behavior occurs in a range of materials, though at ultracold temperatures. If these materials can be made to superconduct closer to room temperature, they could pave the way for zero-loss devices, such as heat-free laptops and phones, and ultraefficient power lines. But first, scientists will have to understand how electrons pair up in the first place.
    Now, new snapshots of particles pairing up in a cloud of atoms can provide clues to how electrons pair up in a superconducting material. The snapshots were taken by MIT physicists and are the first images that directly capture the pairing of fermions — a major class of particles that includes electrons, as well as protons, neutrons, and certain types of atoms.
    In this case, the MIT team worked with fermions in the form of potassium-40 atoms, and under conditions that simulate the behavior of electrons in certain superconducting materials. They developed a technique to image a supercooled cloud of potassium-40 atoms, which allowed them to observe the particles pairing up, even when separated by a small distance. They could also pick out interesting patterns and behaviors, such as the way pairs formed checkerboards, which were disturbed by lonely singles passing by.
    The observations, reported today in Science, can serve as a visual blueprint for how electrons may pair up in superconducting materials. The results may also help to describe how neutrons pair up to form an intensely dense and churning superfluid within neutron stars.
    “Fermion pairing is at the basis of superconductivity and many phenomena in nuclear physics,” says study author Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT. “But no one had seen this pairing in situ. So it was just breathtaking to then finally see these images onscreen, faithfully.”
    The study’s co-authors include Thomas Hartke, Botond Oreg, Carter Turnbaugh, and Ningyuan Jia, all members of MIT’s Department of Physics, the MIT-Harvard Center for Ultracold Atoms, and the Research Laboratory of Electronics.

    A decent view
    To directly observe electrons pair up is an impossible task. They are simply too small and too fast to capture with existing imaging techniques. To understand their behavior, physicists like Zwierlein have looked to analogous systems of atoms. Both electrons and certain atoms, despite their difference in size, are similar in that they are fermions — particles that exhibit a property known as “half-integer spin.” When fermions of opposite spin interact, they can pair up, as electrons do in superconductors, and as certain atoms do in a cloud of gas.
    Zwierlein’s group has been studying the behavior of potassium-40 atoms, which are fermions that can be prepared in one of two spin states. When a potassium atom of one spin interacts with an atom of another spin, they can form a pair, similar to superconducting electrons. But under normal, room-temperature conditions, the atoms interact in a blur that is difficult to capture.
    To get a decent view of their behavior, Zwierlein and his colleagues study the particles as a very dilute gas of about 1,000 atoms, which they place under ultracold, nanokelvin conditions that slow the atoms to a crawl. The researchers also contain the gas within an optical lattice, or a grid of laser light that the atoms can hop within, and that the researchers can use as a map to pinpoint the atoms’ precise locations.
    In their new study, the team made enhancements to their existing technique for imaging fermions that enabled them to momentarily freeze the atoms in place, then take snapshots separately of potassium-40 atoms with one particular spin or the other. The researchers could then overlay an image of one atom type over the other, and look to see where the two types paired up, and how.

    “It was bloody difficult to get to a point where we could actually take these images,” Zwierlein says. “You can imagine at first getting big fat holes in your imaging, your atoms running away, nothing is working. We’ve had terribly complicated problems to solve in the lab through the years, and the students had great stamina, and finally, to be able to see these images was absolutely elating.”
    Pair dance
    What the team saw was pairing behavior among the atoms that was predicted by the Hubbard model — a widely held theory believed to hold the key to the behavior of electrons in high-temperature superconductors, materials that exhibit superconductivity at relatively high (though still very cold) temperatures. Predictions of how electrons pair up in these materials have been tested through this model, but never directly observed until now.
    The team created and imaged different clouds of atoms thousands of times and translated each image into a digitized version resembling a grid. Each grid showed the location of atoms of both types (depicted as red versus blue in their paper). From these maps, they were able to see squares in the grid with either a lone red or blue atom, and squares where both a red and blue atom paired up locally (depicted as white), as well as empty squares that contained neither a red nor a blue atom (black).
    Individual images already show many local pairs, with red and blue atoms in close proximity. By analyzing sets of hundreds of images, the team could show that atoms indeed show up in pairs, at times linking up in a tight pair within one square, and at other times forming looser pairs, separated by one or several grid spacings. This physical separation, or “nonlocal pairing,” was predicted by the Hubbard model but never directly observed.
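    The kind of bookkeeping this involves can be sketched in a few lines (a hypothetical illustration, not the group’s analysis pipeline; the lattice size, filling, and occupancy maps below are invented): given one binary occupancy grid per spin state, on-site pairs are sites where both maps are occupied, and nearest-neighbour pairs are up-spin atoms with a down-spin atom one lattice spacing away.

        # Hypothetical sketch of pair counting on spin-resolved lattice snapshots
        # (not the MIT group's analysis code; lattice size, filling, and data are invented).
        import numpy as np

        rng = np.random.default_rng(1)
        L = 20                                    # lattice size (assumed)
        spin_up = rng.random((L, L)) < 0.25       # occupancy map for one spin state
        spin_down = rng.random((L, L)) < 0.25     # occupancy map for the other spin state

        # On-site ("local") pairs: both spin states found on the same lattice site.
        local_pairs = int(np.sum(spin_up & spin_down))

        # Nonlocal pairs: an up-spin atom with a down-spin atom on a nearest-neighbour site
        # (np.roll wraps around the edges, i.e. periodic boundaries, which is fine for a sketch).
        neighbour_down = (np.roll(spin_down, 1, axis=0) | np.roll(spin_down, -1, axis=0) |
                          np.roll(spin_down, 1, axis=1) | np.roll(spin_down, -1, axis=1))
        nonlocal_pairs = int(np.sum(spin_up & neighbour_down & ~spin_down))

        print(f"on-site pairs: {local_pairs}, nearest-neighbour pairs: {nonlocal_pairs}")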
    The researchers also observed that collections of pairs seemed to form a broader, checkerboard pattern, and that this pattern wobbled in and out of formation as one partner of a pair ventured outside its square and momentarily distorted the checkerboard of other pairings. This phenomenon, known as a “polaron,” was also predicted but never seen directly.
    “In this dynamic soup, the particles are constantly hopping on top of each other, moving away, but never dancing too far from each other,” Zwierlein notes.
    The pairing behavior between these atoms must also occur in superconducting electrons, and Zwierlein says the team’s new snapshots will help to inform scientists’ understanding of high-temperature superconductors, and perhaps provide insight into how these materials might be tuned to higher, more practical temperatures.
    “If you normalize our gas of atoms to the density of electrons in a metal, we think this pairing behavior should occur far above room temperature,” Zwierlein offers. “That gives a lot of hope and confidence that such pairing phenomena can in principle occur at elevated temperatures, and there’s no a priori limit to why there shouldn’t be a room-temperature superconductor one day.”
    This research was supported, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and the Vannevar Bush Faculty Fellowship.

  • New design rule for high-entropy superionic solid-state conductors

    Solid electrolytes with high lithium-ion conductivity can be designed for millimeter-thick battery electrodes by increasing the complexity of their composite superionic crystals, report researchers from Tokyo Tech. This new design rule enables the synthesis of high-entropy active materials while preserving their superionic conduction.
    As the world transitions towards a greener and more sustainable energy economy, reliance on lithium (Li)-ion batteries is expected to rise. Scientists from across the globe are working towards designing smaller yet efficient batteries that can keep up with the ever-increasing demand for energy storage. In recent years, all-solid-state lithium batteries (ASSLBs) have captured research interest due to their unique use of solid electrolytes instead of conventional liquid ones. Solid electrolytes not only make the battery safer from leakage and fire-related hazards, but also provide superior energy and power characteristics.
    However, their stiffness results in poor wetting of the cathode surface and an inhomogeneous supply of Li ions to the cathode. This, in turn, leads to a loss of capacity in the solid-state battery. The issue becomes more pronounced in thick cathodes, such as millimeter-thick electrodes, which are a more advantageous configuration for realizing inexpensive, high-energy-density battery packages than conventional electrodes with a typical thickness of less than 0.1 mm.
    Fortunately, a recent study published in Science found a way to overcome this problem. The paper, authored by a team of researchers led by Prof. Ryoji Kanno from Tokyo Institute of Technology (Tokyo Tech), describes a new strategy to produce solid electrolytes with enhanced Li-ion conductivity. Their work establishes a design rule for synthesizing high-entropy crystals of lithium superionic conductors via the multi-substitution approach. “Many studies have shown that inorganic ionic conductors tend to show better ion conductivity after multi-element substitution, probably because of the flattened potential barrier of Li-ion migration, which is essential for better ion conductivity,” points out Prof. Kanno. This was where they started their research.
    For the design of their new material, the team took inspiration from the chemical compositions of two well-known Li-based solid electrolytes: argyrodite-type (Li6PS5Cl) and LGPS-type (Li10GeP2S12) superionic crystals. They modified the LGPS-type Li9.54Si1.74P1.44S11.7Cl0.3 via multi-substitution and synthesized a series of crystals with composition Li9.54[Si1−δMδ]1.74P1.44S11.1Br0.3O0.6 (M = Ge, Sn; 0 ≤ δ ≤ 1).
    The researchers used a crystal with M = Ge and δ = 0.4 as a catholyte in ASSLBs with 1-millimeter-thick and 0.8-millimeter-thick cathodes. The former and the latter exhibited discharge capacities of 26.4 mAh cm−2 at 25 °C and 17.3 mAh cm−2 at −10 °C, respectively; these area-specific capacities are 1.8 and 5.3 times larger than those reported for previous state-of-the-art ASSLBs. Theoretical calculations suggested that the enhanced conductivity of the solid electrolyte could be a result of the flattening of the energy barrier for ion migration, caused by a small degree of chemical substitution in the above-mentioned crystal.
    This study provides a new way of preparing high-entropy solid electrolytes for millimeter-thick electrodes while preserving their superionic conduction pathways. “In effect, the proposed design rule lays a solid groundwork for exploring new superionic conductors with superior charge-discharge performance, even at room temperature,” concludes Prof. Kanno.

  • Number cruncher calculates whether whales are acting weirdly

    We humans can be a scary acquaintance for whales in the wild. This includes marine biologists tagging them with measuring devices to understand them better. These experiences can make whales behave erratically for a while. Such behaviour can affect research quality and highlights an animal ethics dilemma. Now, University of Copenhagen researchers have figured out how to solve the problems with math.
    Maybe you have tried taking a howling pooch or cranky cat to the vet. Regardless of your noblest intentions, your pet’s experience may have been equally unpleasant. Animals react to the unknown in their own way. The case is no different for cetaceans like narwhals and bowhead whales when they encounter human-generated noises such as ship noise or mining blasts in the North Atlantic — or when they are caught by well-meaning marine biologists who just want to get to know them better.
    When biologists ‘tag’ whales with measuring devices, the animals react by behaving unusually — abnormally. For example, for a while after being tagged, they may perform many atypical shallow dives and quick jerks. Such behaviour is misleading when the goal is to study the animal’s normal and natural behaviour.
    The problem is getting help from an unusual corner.
    “Biologists seek to understand animals as natural beings, but their reactions turn into unnatural behaviour that creates noise in the dataset. Because of this, a lot of data from just after whales are tagged ends up getting discarded. In this study, we have proposed a mathematical approach using statistical methods that can determine exactly how much data to keep,” says PhD student Lars Reiter from the Department of Mathematics.
    Valuable for humans and animals alike
    With two statistical calculations, the researcher has found a way to estimate when whales like narwhals and bowhead whales will return to their natural behaviour after being tagged. It is a method that can also be used to study how animals respond to other types of disturbances.

    “This research is extremely valuable to us as marine biologists who are interested in the behaviour and well-being of whales. It provides us with a standardised approach by which to distinguish between natural behaviour and affected behaviour in whales. Thus far, we’ve made individual estimates that are more or less spot on,” says marine biologist Outi Tervo from the Greenland Institute of Natural Resources, who collaborated with the mathematicians on the study.
    The statistical method allows researchers to avoid discarding too much or too little data. If too much data is kept, it can interfere with the research results, and if too much is discarded, it comes at a cost to both the animals and humans.
    “It really matters in terms of research, but also financially. And not least, it means something for animal welfare. If we throw away data unnecessarily, more whales will eventually have to go through the experience for us to conduct this research, which is ultimately meant to benefit the animals,” says Outi Tervo.
    Idea came from a parliamentary election
    Whale behaviour does not go from abnormal to normal with a flick of its tail. Their behaviour normalizes gradually, typically over a day — and in a few cases over a longer period of time. During this transition, a whale’s behaviour manifests itself on both sides of an area designated as normal whale behaviour. So how do scientists figure out where to make the cut?

    “The idea came to me while I was standing in the voting booth during parliamentary elections. Borrowing from the logic of the electoral system, you can consider it as if the whales — or these data points which show the whale’s behaviour — vote on whether they are in or out of their normal range,” explains Lars Reiter.
    By recording 1 positive “vote” when the behaviour is within the normal range, and 1 negative “vote” when outside, the scientists can add up all the votes and find the moment at which the number of votes goes from predominantly negative to positive.
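    A minimal sketch of that voting scheme (an illustration of the description above, not the published algorithm; the normal range, the cut-off rule, and the data are invented) might look as follows.

        # Hypothetical sketch of the "voting" cut-off (not the published implementation;
        # the normal range, the data, and the cut-off rule are invented for illustration).
        import numpy as np

        rng = np.random.default_rng(2)
        # Fake movement measurements: disturbed (high) just after tagging, then settling down.
        measurements = np.concatenate([rng.normal(3.0, 1.0, 300),    # abnormal behaviour
                                       rng.normal(1.0, 1.0, 700)])   # back within the normal range
        normal_low, normal_high = 0.0, 2.0        # assumed "normal" range for this measure

        # +1 vote when a measurement lies inside the normal range, -1 when it lies outside.
        votes = np.where((measurements >= normal_low) & (measurements <= normal_high), 1, -1)

        # The running vote total falls while votes are mostly negative and rises once they turn
        # mostly positive; its minimum marks one natural place to put the cut-off.
        cutoff = int(np.argmin(np.cumsum(votes)))
        print(f"discard the first {cutoff} samples; keep the remaining {len(votes) - cutoff}")

    Placing the cut-off at the minimum of the running vote total is one simple way to find where the votes flip from predominantly negative to predominantly positive; the published method may differ in detail.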
    The researchers use two approaches to determine normal whale behaviour. In part, they look at the whale’s diving pattern, as well as its acceleration and fine motor skills.
    How to calculate the behaviour of animals statistically
    Sometimes a whale hunts in the deep, while at other times it cruises quietly at the surface. The activity that a whale is engaged in is crucial for understanding its normal energy level. Lars Reiter’s method takes this into account as something new:
    “Where previous research focused on the mean behaviour, we instead situate a whale in an activity based on its movements — where it is assessed based on a normal value for acceleration that matches the specific activity being engaged in. We do this by using what are known as quantiles, instead of averages, because they allow us to focus on behavioural extremes. For example, hunting and resting are opposing extremes in terms of energy levels,” explains Lars Reiter.
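    The quantile idea can be illustrated in the same spirit (hypothetical code, not the study’s; the activity labels, quantile levels, and baseline data are assumptions): the normal range for a movement measure is defined per activity from the quantiles of baseline data for that activity, rather than from one overall average.

        # Hypothetical sketch: activity-specific normal ranges from quantiles rather than means
        # (activity labels, quantile levels, and baseline data are invented).
        import numpy as np

        rng = np.random.default_rng(3)
        # Invented baseline jerk magnitudes, recorded well after tagging, for each activity.
        baseline = {
            "hunting": rng.normal(3.0, 0.8, 2000),      # high-energy extreme
            "resting": rng.normal(0.5, 0.2, 2000),      # low-energy extreme
            "travelling": rng.normal(1.5, 0.5, 2000),
        }

        # Normal range per activity = the central 90% of baseline values (5th to 95th quantile).
        normal_range = {activity: np.quantile(values, [0.05, 0.95])
                        for activity, values in baseline.items()}

        def is_normal(jerk, activity):
            """Vote 'normal' only against the range of the activity the whale is engaged in."""
            low, high = normal_range[activity]
            return low <= jerk <= high

        print(normal_range["hunting"], is_normal(2.0, "resting"))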
    When the focus is on the whale’s diving profile, on the other hand, you look at the pattern formed by the whale’s overall activities. By combining depth and time, one can assess whether the distribution of different dive types is natural.
    Wiser about the animals’ hardships and better at avoiding them
    According to the marine biologist, the data-based approach represented by the statistical method also means that researchers can now develop better, more gentle ways of tagging.
    “Based on this study, we already know that the amount of time we spend putting the equipment on is an important factor for how much the animals are affected afterwards. Therefore, we can set up some time limits — where we stop and set the whale free if it takes more than X number of minutes allowed,” says Outi Tervo.
    A shift away from individual estimates to a mathematical standard could also mean better assessments from the veterinary oversight that tag-using research projects are required to go through.
    “The method will make it so that ethical approval from a veterinary inspection is more data-based and precise. So, there is no doubt that this research is a step forward for animal welfare,” says the marine biologist.
    Extra info: An important instrument for a future with less ice and more people
    The natural Arctic habitat of narwhals and bowhead whales is changing due to climate change. Annual ice shrinkage and increasing human activity is taking place in areas that whales once had all to themselves. The researchers’ method can become an important instrument and contribute to a greater understanding of the consequences.
    “It allows us to study how whales are impacted by various human activities. They can be external sources of noise that we can situate in time and location, such as a blast or a ship passing by. Or sounds and activities that we emit ourselves. Lars’ algorithm lets us get a clear picture of how it all affects the animals,” says Outi Tervo.
    Increased activity will lead to more ocean noise, which is of concern to marine biologists with regards to how it will affect large marine animals like narwhal, which are incredibly sensitive to sound. Co-author and supervisor Professor Susanne Ditlevsen believes that the studies and new method will become more important in the years ahead.
    “Climate change is leading to increased anthropogenic activity in Arctic whale habitats. Melting ice means that areas which were once impassable can now be reached by humans. We would like to assess whether it scares and disturbs the animals, but it is not clear how. The new methods can be used to assess at what distance from the animals’ habitat various activities should take place,” says Susanne Ditlevsen.
    Facts: Statistical method with two mathematical calculations and one cut-off point.
    The statistical method can generally be understood as calculations on two types of tagging data (acceleration and depth) and a way of adding them up that finds the optimal cut-off point.
    1. Acceleration tells about the energy level and whale movements (“jerks”). The indicators for natural behaviour are divided according to whale activity, so that, for example, a high energy level is recorded as natural in connection with hunting, but not in connection with rest.
    2. The whale’s diving profile is measured in depth and time spent on a dive. Data from the first 40 hours after tagging show a pattern of different types of dives (e.g., U-dives, where the whale stays at depth for some time, or V-dives, where the whale resurfaces quickly). The pattern is compared with normal values measured after the 40 hours.
    3. The cut-off point for when the whale is back in normal behaviour is found by counting the individual measurements as “voting for or against” normal behaviour. As such, the researchers find the optimal place to divide the research data into natural and influenced behaviour.
    About the study
    The study is part of a larger research collaboration between the Greenland Institute of Natural Resources and the University of Copenhagen’s Department of Mathematics that focuses on the Arctic’s large marine mammals.
    The researchers include Lars Reiter Nielsen and Susanne Ditlevsen from the University of Copenhagen, Outi M. Tervo and Mads Peter Heide-Jørgensen from the Greenland Institute of Natural Resources, and Susanna B. Blackwell from Greeneridge Sciences, Inc., Santa Barbara, USA.