More stories

  •

    Study shows how math, science identity in students affects college, career outcomes

    If you ask someone whether they are a math or science person, they may quickly tell you yes or no. It turns out that how people answer that question in ninth grade, and even earlier, can tell you not only which subjects they prefer in school, but also how likely they are to go on to study STEM subjects in college and work in those fields as adults. The results of a new study from the University of Kansas suggest the importance of fostering positive attitudes toward math and science early in students’ lives to address gender and socioeconomic gaps in STEM.
    KU researchers analyzed a nationwide data set that asked students in 2009 whether they considered themselves a math and/or science person in ninth grade. The survey followed up with those students in 11th grade to ask the same question, then again three years after graduation to see who had enrolled in science, technology, engineering and math (STEM) majors and who expected to be in a related career at age 30. The results not only underscore the influence of student attitudes on academic outcomes; they also suggest that efforts should focus on cultivating positive attitudes earlier in students’ careers, before they reach college, where most such efforts currently happen.
    Rafael Quintana, assistant professor of educational psychology, and Argun Saatcioglu, professor of educational policy and sociology, both at KU, conducted a study in which they analyzed data from the High School Longitudinal Study of 2009. The data set includes responses from more than 21,000 students from about 940 schools across the United States. The study was published in the journal Socius: Sociological Research for a Dynamic World.
    Results showed that the odds of enrolling in a STEM major were 1.78 times larger for students with a science identity in ninth grade, and 1.66 times larger for those with a math identity, than for those who did not identify with the subjects. The odds of expecting a career in STEM were 1.69 and 1.6 times larger for those with high science and math identities, respectively.
    Those numbers are illustrative of how having positive experiences with math and science early can be influential both in higher education and later in life, the researchers said.
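    An odds ratio is easy to misread as a ratio of probabilities. As a rough illustration of what figures like 1.78 mean in practice, the sketch below converts a baseline enrollment rate through the reported odds ratios; the 20% baseline is an assumed figure for illustration, not a number from the study.

```python
# Illustrative only: the 20% baseline STEM-enrollment rate is hypothetical,
# not a figure reported by the KU study. The odds ratios are from the article.

def apply_odds_ratio(baseline_prob, odds_ratio):
    """Return the probability after scaling the baseline odds by odds_ratio."""
    odds = baseline_prob / (1 - baseline_prob)   # probability -> odds
    new_odds = odds * odds_ratio                 # apply the reported odds ratio
    return new_odds / (1 + new_odds)             # odds -> probability

baseline = 0.20  # assumed rate for students without the identity
for label, oratio in [("science identity", 1.78), ("math identity", 1.66)]:
    p = apply_odds_ratio(baseline, oratio)
    print(f"{label}: {baseline:.0%} baseline -> {p:.1%}")
```

    Under this assumed 20% baseline, the reported odds ratios correspond to enrollment probabilities of roughly 31% (science identity) and 29% (math identity), a smaller jump than "1.78 times more likely" might suggest.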
    “What do we mean when we say education has long-lasting effects? That’s something we want to think about longitudinally,” Quintana said. “Those early experiences get ‘under the skin,’ as they are related to later outcomes independently of how these attitudes developed later. What this suggests is one, the importance of identity beliefs for career-related decisions, and two, that early experiences can have long-lasting, potentially irreversible effects.”
    The data also showed that, controlling for all other variables, the odds of expecting a career in a STEM field were about 50% lower for women than for men, and that there was a significant interaction between science identity in school and gender when predicting STEM occupation. In other words, identifying with science in ninth grade was more consequential for men, who were more likely to go on to a career in the sciences. Research has long noted a gender gap and socioeconomic inequalities in STEM, but most efforts have focused on addressing them among college students. While those efforts are worthwhile, Quintana said, the study results suggest it is important to take measures to address math and science inequities earlier in life as well.
    Schools can play a long-term role in helping students believe they can have a career in STEM and visualize such a possibility. By providing equitable access to math and science programs, they can also provide chances to those who may not otherwise get them, the researchers said.
    “We want schools to matter and have a consequential effect,” Saatcioglu said. “If you can get kids thinking they are a math or science person through positive experiences, that can have long-term effects. If you can get students to feel that way, it can be beneficial. The key in this study was that Rafael was able to isolate the long-term effects of attitudes from ninth grade.”
    The attitudes students hold in early high school are key, as they have a cascading effect.
    “For example, individuals’ self-perceptions can affect the courses they take, the effort and time they spend on specific subjects and the interests and aspirations they develop,” the authors wrote. “These attitudes and behaviors can shape individuals’ career trajectories independently of their future identity beliefs. This ramification of causal effects is what generates the cascading and potentially irreversible consequences of early-life experiences.”
    Quintana, who uses longitudinal data analysis to study problems in education and human development, said he also hopes to revisit the data in the future to see where those in the data set are now, and how many are still working in STEM fields. Such analysis could also be applied to understand other early educational experiences such as bullying and how they influence later choices, attitudes and career pathways.

  •

    Climate change could turn some blue lakes to green or brown

    Some picturesque blue lakes may not be so blue in the future, thanks to climate change.

    In the first global tally of lake color, researchers estimate that roughly one-third of Earth’s lakes are blue. But, should average summer air temperatures rise by a few degrees, some of those crystal waters could turn a murky green or brown, the team reports in the Sept. 28 Geophysical Research Letters.

    The changing hues could alter how people use those waters and offer clues about the stability of lake ecosystems. Lake color depends in part on what’s in the water, but factors such as water depth and surrounding land use also matter. Compared with blue lakes, green or brown lakes have more algae, sediment and organic matter, says Xiao Yang, a hydrologist at Southern Methodist University in Dallas.

    Yang and colleagues used satellite photos from 2013 to 2020 to analyze the color of more than 85,000 lakes around the world. Because storms and seasons can temporarily affect a lake’s color, the researchers focused on the most frequent color observed for each lake over the seven-year period. The researchers also created an interactive online map that can be used to explore the colors of these lakes.
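    The aggregation step described above, taking the most frequent color observed for each lake, can be sketched in a few lines. The lake IDs and color labels below are invented placeholders; the real analysis classified colors from satellite imagery spanning 2013 to 2020.

```python
from collections import Counter

# Hypothetical observations: (lake_id, color_label) pairs, e.g. one per
# usable satellite image. Real data would cover many images per lake.
observations = [
    ("lake_a", "blue"), ("lake_a", "green"), ("lake_a", "blue"),
    ("lake_b", "brown"), ("lake_b", "brown"), ("lake_b", "green"),
]

def dominant_colors(obs):
    """Map each lake to its most frequently observed color (the modal color)."""
    per_lake = {}
    for lake, color in obs:
        per_lake.setdefault(lake, Counter())[color] += 1
    return {lake: counts.most_common(1)[0][0] for lake, counts in per_lake.items()}

print(dominant_colors(observations))  # {'lake_a': 'blue', 'lake_b': 'brown'}
```

    Using the mode rather than an average makes the classification robust to short-lived color changes from storms or seasonal turnover, which is the rationale the researchers give for focusing on the most frequent color.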

    The approach is “super cool,” says Dina Leech, an aquatic ecologist at Longwood University in Farmville, Va., who was not involved with the study. These satellite data are “just so powerful.”

    The scientists then looked at local climates during that time to see how they may be linked to lake color around the world. For many small or remote water bodies, records of temperature and precipitation don’t exist, so the researchers instead relied on climate “hindcasts” calculated for every spot on the globe, which are pieced together from relatively sparse records.

    Lakes in places with average summer air temperatures that were below 19° Celsius were more likely to be blue than lakes with warmer summers, the researchers found. But up to 14 percent of the blue lakes they studied are near that threshold. If average summer temperatures increase another 3 degrees Celsius — an amount that scientists think is plausible by the end of the century — those 3,800 lakes could turn green or brown (SN: 8/9/21). That’s because warmer water helps algae bloom more, which changes the properties of the water, giving it a green-brown tint, Yang says.

    Extrapolating beyond this sample of lakes is a bit tricky. “We don’t even know how many lakes there are in the world,” says study coauthor Catherine O’Reilly, an aquatic ecologist at Illinois State University in Normal. Many lakes are too small to reliably detect via satellite, but by some estimates, tens of thousands of larger lakes could lose their blue hue.

    If some lakes do become less blue, people will probably lose some of the resources they have come to value, O’Reilly says. Lakes are often used for drinking water, food or recreation. If the water is more clogged with algae, it could be unappealing for play or more costly to clean for drinking.

    But the color changes wouldn’t necessarily mean that the lakes are any less healthy. “[Humans] don’t value lots of algae in a lake, but if you’re a certain type of fish species, you might be like ‘this is great,’” O’Reilly says.

    Lake color can hint at the stability of a lake’s ecosystem, with shifting shades indicating changing conditions for the critters living in the water. One benefit of the new study is that it gives scientists a baseline for assessing how climate change is affecting Earth’s freshwater resources. Continued monitoring of lakes could help scientists detect future changes.

    “[The study] sets a marker that we can compare future results to,” says Mike Pace, an aquatic ecologist at the University of Virginia in Charlottesville, who was not involved with the study. “That’s, to me, the great power of this study.”

  •

    BESSY II: Localization of d-electrons in transition metals determined

    Transition metals and non-ferrous metals such as copper, nickel and cobalt are not only suitable as materials in engineering and technology, but also for a wide range of applications in electrochemistry and catalysis. Their chemical and physical properties are related to the occupation of the outer d-orbital shells around the atomic nuclei. The energetic levels of the electrons as well as their localisation or delocalisation can be studied at the X-ray source BESSY II, which offers powerful synchrotron radiation.
    Copper, Nickel, Cobalt
    The Uppsala-Berlin Joint Lab (UBjL) team, led by Prof. Alexander Föhlisch and Prof. Nils Mårtensson, has now published new results on copper, nickel and cobalt samples. They confirmed known findings for copper, whose d-electrons are atomically localised, and for nickel, in which localised electrons coexist with delocalised ones. For cobalt, which is used in batteries and as an alloy in fuel cells, however, previous findings were contradictory because the measurement accuracy was not sufficient to make clear statements.
    Spectroscopy combined with highly sensitive detectors
    At BESSY II, the Uppsala-Berlin Joint Lab has set up an instrument that enables measurements with the necessary precision. To determine electronic localisation or delocalisation, Auger photoelectron coincidence spectroscopy (APECS) is used. APECS relies on the newly developed “Angle resolved Time of Flight” (ArTOF) electron spectrometers, whose detection efficiency exceeds that of standard hemispherical analysers by orders of magnitude. Equipped with two ArTOF electron spectrometers, the CoESCA@UE52-PGM end station, supervised by UBjL scientist Dr. Danilo Kühn, is unique worldwide.
    Analysing (catalytical) materials
    In the case of the element cobalt, the measurements now revealed that the d-electrons of cobalt can be regarded as highly delocalised. “This is an important step for a quantitative determination of electronic localisation on a variety of materials, catalysts and (electro)chemical processes,” Föhlisch points out.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  •

    The road to future AI is paved with trust

    Artificial intelligence (AI) plays a growing role in our everyday lives, and many researchers believe that what we have seen so far is only the beginning. However, AI must be trustworthy in all situations. Linköping University is coordinating TAILOR, an EU project that has drawn up a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future.
    “The development of artificial intelligence is in its infancy. When we look back in 50 years at what we are doing today, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now,” says Fredrik Heintz, professor of artificial intelligence at LiU and coordinator of the TAILOR project.
    TAILOR is one of six research networks set up by the EU to strengthen research capacity and develop the AI of the future. The foundation of trustworthy AI is being laid by TAILOR, by drawing up a framework, guidelines and a specification of the needs of the AI research community. “TAILOR” is an abbreviation of Foundations of Trustworthy AI — integrating, learning, optimisation and reasoning.
    The roadmap now presented by TAILOR is the first step on the way to standardisation, where the idea is that decision-makers and research funding bodies can gain insight into what is required to develop trustworthy AI. Fredrik Heintz believes that it is a good idea to show that many research problems must be solved before this can be achieved.
    The researchers have defined three criteria for trustworthy AI: it must conform to laws and regulations, it must satisfy several ethical principles, and its implementation must be robust and safe. Fredrik Heintz points out that these criteria pose major challenges, in particular the implementation of the ethical principles.
    “Take justice, for example. Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember — the definition of justice has been debated by philosophers and scholars for hundreds of years,” says Fredrik Heintz.
    The project will focus on large comprehensive research questions, and will attempt to find standards that all who work with AI can adopt. But Fredrik Heintz is convinced that we can only achieve this if basic research into AI is given priority.
    “People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people,” says Fredrik Heintz.
    Many of the legal proposals within the EU and its member states are drafted by legal specialists. But Fredrik Heintz believes that they lack expert knowledge of AI, which is a problem.
    “Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz.
    The complete roadmap is available at: Strategic Research and Innovation Roadmap of trustworthy AI
    Story Source:
    Materials provided by Linköping University. Original written by Anders Törneholm. Note: Content may be edited for style and length.

  •

    Computers calling time on isolation

    Across the world, many people infected with Covid-19 have been made to completely isolate from others in order to avoid passing on the infection. Some countries still recommend minimum isolation periods for as long as 10 days from when patients start to develop Covid-19 symptoms.
    Professor Shingo Iwami, affiliated with Kyoto University’s Mathematical Biology Laboratory at the Institute for the Advanced Study of Human Biology (WPI-ASHBi), says, “Although a long time for isolation reduces the overall risk of patients passing on the infection, there will always be patients who recover early and have to accept several days of redundant isolation while no longer posing an infection risk. We would like to calculate a way to reduce this unnecessary disruption in people’s lives as well as the broader losses for the economy.”
    Writing in the journal Nature Communications, an international team of scientists, led by Iwami, has reported a simulation of the potential risks and benefits of ending an individual’s isolation early using antigen tests instead of isolating patients for a fixed time. They call for more sensitive and regular antigen testing to help reduce isolation periods for patients recovering from Covid-19.
    The team decided to base their model on antigen rather than PCR testing, trading sensitivity for short turn-around time, low cost, and practicality. Iwami explains that although antigen tests do have a risk of generating “false-negatives” and fail to detect individuals who could still be infectious, there are clear benefits to getting results within an hour rather than waiting a day.
    Their model accounts for the sensitivity of antigen tests as well as factors like the amount of virus in a patient that makes them infectious. These are then balanced against the acceptable risk of missing unrecovered and potentially infectious patients, by letting them out of isolation early.
    Using their model, the team compared different scenarios to identify the best strategy. For example, the model projects that a recovering patient released after two consecutive daily negative antigen tests would spend, on average, 3.9 days in redundant isolation after recovery. But under these conditions, 1 in 40 patients would still pose an infection risk when released.
    More conservative approaches might increase the burden on patients by requiring more than two consecutive negative antigen test results.
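    The trade-off the team quantifies can be sketched as a small Monte Carlo simulation. Everything numeric below (the test sensitivity, the distribution of infectious periods) is an invented placeholder, not a parameter from the paper; the sketch only illustrates the release-after-consecutive-negatives logic.

```python
import random

random.seed(42)

# Toy parameters -- illustrative guesses, NOT values from the study.
TEST_SENSITIVITY = 0.85      # chance a daily test detects a still-infectious patient
MEAN_INFECTIOUS_DAYS = 8     # mean of an assumed exponential infectious period

def simulate_patient(required_negatives=2):
    """Release after `required_negatives` consecutive daily negative tests.
    Returns (redundant_isolation_days, released_while_still_infectious)."""
    infectious_until = random.expovariate(1 / MEAN_INFECTIOUS_DAYS)
    day, consecutive = 0, 0
    while consecutive < required_negatives:
        day += 1
        infectious = day < infectious_until
        positive = infectious and random.random() < TEST_SENSITIVITY
        consecutive = 0 if positive else consecutive + 1
    redundant = max(0.0, day - infectious_until)
    return redundant, day < infectious_until

results = [simulate_patient() for _ in range(10_000)]
avg_redundant = sum(r for r, _ in results) / len(results)
risk = sum(1 for _, inf in results if inf) / len(results)
print(f"avg redundant isolation: {avg_redundant:.1f} days; "
      f"released while infectious: {risk:.1%}")
```

    Raising `required_negatives` in this sketch lengthens redundant isolation but lowers the fraction released while still infectious, which is exactly the risk-versus-burden balance the researchers describe.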
    Iwami says, “The epidemic has still not completely subsided, and we are living with a lot of uncertainty with regard to new variants of the virus. Antigen tests could help, but there is also a real need for worldwide systematic guidelines that simultaneously reduce risks and burdens. We hope this simulator will help doctors and policy makers meet those demands.”
    Story Source:
    Materials provided by Kyoto University. Note: Content may be edited for style and length.

  •

    Collective effort needed to help children thrive following exposure to online risks

    Helping children become more ‘digitally resilient’ needs to be a collective effort if they are to learn how to “thrive online,” according to new research led by the University of East Anglia.
    Digital resilience is the capability to learn how to recognise, manage and recover from online risks — such as bullying and inappropriate content — and has the potential to buffer how these experiences may impact young people’s wellbeing. Until now, research has not examined how digital resilience can be built and shown by children beyond focusing on the individual child.
    This new study argues that activating digital resilience needs to be undertaken as a “collective endeavour,” involving the child, their parents/carers within home environments, youth workers, teachers, and schools at a community level, along with governments, policymakers, and internet corporations at a societal level.
    It finds that digital resilience operates across these different levels, which are critical to help children learn how to recognise, manage, recover and, depending on the support available, grow following experiences of online risks.
    Importantly, digital resilience across these levels and areas are not mutually exclusive but reinforce and operate on each other. As a result, say the researchers, collective responsibility must be at the heart of work in this area.
    The study focused on digital resilience among pre-teens — those aged 8 to 12 years old, who are transitioning into early adolescence and seeking more independence at home, school, within society and, increasingly, through online experiences.

  •

    Biomarkers used to track benefits of anti-aging therapies can be misleading, suggests nematode study

    We all grow old and die, but we still don’t know why. Diet, exercise and stress all affect our lifespan, but the underlying processes that drive ageing remain a mystery. Often, we measure age by counting our years since birth, and yet our cells know nothing of chronological time. Our organs and tissues may age more rapidly or slowly than we’d expect from counting the number of orbits we take around the sun.
    For this reason, many scientists are seeking to develop methods to measure the “biological age” of our cells, which can differ from our chronological age. In theory, such biomarkers of ageing could provide a measure of health that could revolutionize how we practice medicine. Individuals could use a biomarker of ageing to track their biological age over time, measure the effects of diet, exercise and drugs, and predict whether those interventions will extend lifespan or improve quality of life. Medicines could be designed and identified based on their effect on biological age. In other words, we could start to treat ageing itself.
    However, no accurate and highly predictive test for biological age has been validated to date. In part, this is because we still don’t know what causes ageing and so can’t measure it. Definitive progress in the field will require validating biomarkers throughout a patient’s lifetime, an impractical feat given human life expectancy.
    To understand the irreducible components of ageing, and how these can be measured and tested, researchers turn to laboratory animals. Unlike humans, the nematode C. elegans lives for an average of two weeks, making it easier to collect behavioural and lifespan data that would otherwise require centuries.
    C. elegans worms begin adulthood vigorously exploring their environment. Over time, they slow and stop crawling, a behavioural stage known as vigorous movement cessation (VMC). VMC is a biomarker of ageing and a proxy for nematode health. Studies of genetically identical nematodes have shown that it is a powerful predictor of a worm’s lifespan, but at the same time, interventions designed to alter ageing can disproportionately affect VMC compared to lifespan, and vice versa. Researchers at the Centre for Genomic Regulation (CRG) in Barcelona seek to understand why this happens and what it means for the ageing process in humans.
    A team led by Dr. Nicholas Stroustrup, Group Leader at the CRG’s Systems Biology research programme, has developed the ‘Lifespan Machine’, a device that can follow the life and death of tens of thousands of nematodes at once. The worms live in a petri dish under the watchful eye of a scanner that monitors their entire lives. By imaging the nematodes once per hour for months, the device gathers data at unprecedented statistical resolution and scale.

  •

    Improving hospital stays and outcomes for older patients with dementia through AI

    By using artificial intelligence, Houston Methodist researchers are able to predict hospitalization outcomes of geriatric patients with dementia on the first or second day of hospital admission. This early assessment of outcomes means more timely interventions, better care coordination, more judicious resource allocation, focused care management and timely treatment for these more vulnerable, high-risk patients.
    Because geriatric patients with dementia have longer hospital stays and incur higher health care costs than other patients, the team sought to solve this problem by identifying modifiable risk factors and developing an artificial intelligence model that improves patient outcomes, enhances their quality of life and reduces their hospital readmission risk, as well as reducing hospitalization costs once the model is put into practice.
    The study, appearing online Sept. 29 in Alzheimer’s & Dementia: Translational Research and Clinical Interventions, a journal of the Alzheimer’s Association, looked at the hospital records of 8,407 geriatric patients with dementia over 10 years within Houston Methodist’s system of eight hospitals, identifying risk factors for poor outcomes among subgroups of patients with different types of dementia that stem from diseases such as Alzheimer’s, Parkinson’s, vascular dementia and Huntington’s, among others. From this data, the researchers developed a machine learning model to quickly recognize the predictive risk factors and their ranked importance for undesirable hospitalization outcomes early in the course of these patients’ hospital stays.
    With an accuracy of 95.6%, their model outperformed all other prevalent methods of risk assessment for these multiple types of dementia. The researchers add that none of the other current methods has applied AI to comprehensively predict hospitalization outcomes of elderly patients with dementia in this way, nor do they identify specific risk factors that can be modified by additional clinical procedures or precautions to reduce the risks.
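    The idea of ranking risk factors by importance can be illustrated with a deliberately simple sketch: comparing how much more prevalent each factor is among patients with poor outcomes than among those with good outcomes. The factor names below come from the article's list of top risk factors, but the prevalence numbers are invented, and the actual study used a trained machine learning model on real hospital records, not this prevalence comparison.

```python
# Toy illustration of ranking risk factors by association with poor outcomes.
# Prevalence figures are made up for illustration; the Houston Methodist study
# derived ranked importance from a machine learning model, not this heuristic.

# (factor, prevalence among poor outcomes, prevalence among good outcomes)
factors = [
    ("encephalopathy",          0.42, 0.08),
    ("pressure ulcers",         0.25, 0.06),
    ("urinary tract infection", 0.30, 0.15),
    ("falls",                   0.18, 0.10),
]

def rank_by_risk_gap(rows):
    """Sort factors by prevalence gap (poor-outcome minus good-outcome group)."""
    return sorted(rows, key=lambda r: r[1] - r[2], reverse=True)

for name, poor, good in rank_by_risk_gap(factors):
    print(f"{name:24s} gap = {poor - good:+.2f}")
```

    A ranking like this is what lets clinicians focus first on the modifiable factors with the largest association with poor outcomes, which is the intervention strategy the researchers describe.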
    “The study showed that if we can identify geriatric patients with dementia as soon as they are hospitalized and recognize the significant risk factors, then we can implement some suitable interventions right away,” said Eugene C. Lai, M.D., Ph.D., the Robert W. Hervey Distinguished Endowed Chair for Parkinson’s Research and Treatment in the Stanley H. Appel Department of Neurology. “By mitigating and correcting the modifiable risk factors for undesirable outcomes immediately, we are able to improve outcomes and shorten their hospital stays.”
    Lai, a neurologist, has worked for many years with these patients and wanted to look at ways to better understand how they’re managed and their behavior when hospitalized, so clinicians could improve care and quality of life for them. He approached Stephen T.C. Wong, Ph.D., P.E., a bioinformatics expert and Director of the T. T. and W. F. Chao Center for BRAIN at Houston Methodist, with this idea, because he had previously collaborated with Wong and knew his team had access to the large clinical data warehouse of Houston Methodist patients and the ability to use AI to analyze big data.
    Risk factors for each type of dementia were identified, including those amenable to interventions. Top identified hospitalization outcome risk factors included encephalopathy, number of medical problems at admission, pressure ulcers, urinary tract infections, falls, admission source, age, race and anemia, with several overlaps in multi-dementia groups.
    Ultimately, the researchers aim to implement mitigation measures to guide clinical interventions to reduce these negative outcomes. Wong says the emerging strategy of applying powerful AI predictions to trigger the implementation of “smart” clinical paths in hospitals is novel and will not only improve clinical outcomes and patient experiences, but also reduce hospitalization costs.
    “Our next steps will be to implement the validated AI model into a mobile app for the ICU and main hospital staff to alert them to geriatric patients with dementia who are at high risk of poor hospitalization outcomes and to guide them on interventional steps to reduce such risks,” said Wong, the paper’s corresponding author and the John S. Dunn Presidential Distinguished Chair in Biomedical Engineering with the Houston Methodist Research Institute. “We will work with hospital IT to integrate this app seamlessly into EPIC as part of a system-wide implementation for routine clinical use.”
    He said this will follow the same smart clinical pathway strategy they have been working on to integrate two other novel AI apps his team developed into the EPIC system for routine clinical use to guide interventions that reduce the risk of patient falls with injuries and better assess breast cancer risk to reduce unnecessary biopsies and overdiagnoses.
    Wong and Lai’s collaborators on this study were Xin Wang, Chika F. Ezeana, Lin Wang, Mamta Puppala, Yunjie He, Xiaohui Yu, Zheng Yin and Hong Zhao, all with the T.T. & W.F. Chao Center for BRAIN at the Houston Methodist Academic Institute, and Yan-Siang Huang with the Far Eastern Memorial Hospital in Taiwan.
    This study was supported by grants from the National Institutes of Health (R01AG057635 and R01AG069082), the T.T. and W.F. Chao Foundation, John S. Dunn Research Foundation, Houston Methodist Cornerstone Award and the Paul Richard Jeanneret Research Fund.