More stories

  • New transit station in Japan significantly reduced cumulative health expenditures

    Osaka's population is both shrinking and aging, a combination that is driving up health expenditures. Dr. Haruka Kato, a junior associate professor at Osaka Metropolitan University, teamed up with the Future Co-creation Laboratory at Japan System Techniques Co., Ltd. to conduct a natural experiment on how a new train station might affect healthcare expenditures.
    JR-Sojiji Station opened in March 2018 in a suburban city on the West Japan Railway line connecting Osaka and Kyoto. The researchers used a causal impact algorithm to analyze the medical expenditure data gathered from the time series medical dataset REZULT provided by Japan System Techniques.
    Their results indicate that opening this mass transit station was significantly associated with a decrease in average healthcare expenditures per capita of approximately 99,257.31 Japanese yen (USD 929.99) over four years, with US dollar figures based on March 2018 exchange rates. The 95% confidence interval for the four-year decrease ranged from JPY 62,119.02 ($582.02) to JPY 136,194.37 ($1,276.06). This study’s findings are consistent with previous studies suggesting that increased access to transit might increase physical activity among transit users. The results provide evidence for the effectiveness of opening a mass transit station from the viewpoint of health expenditures.
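    The analysis behind these numbers is essentially a counterfactual comparison: expenditures observed after the station opened are set against what a model trained on the pre-opening period predicts would have happened without it. The sketch below illustrates that idea in a simplified form; the dataframe, column names, and linear-trend counterfactual are illustrative assumptions, not the study's Bayesian causal-impact pipeline or the REZULT data.

    ```python
    # Minimal counterfactual sketch in the spirit of a causal-impact analysis.
    # The dataframe layout, column names, and intervention date are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def cumulative_effect(df: pd.DataFrame, intervention: str):
        """df has a datetime column 'month' and a numeric column 'expenditure'
        (per-capita health expenditure, JPY). Returns the cumulative difference
        between observed post-period values and a pre-period trend forecast."""
        df = df.sort_values("month").reset_index(drop=True)
        cut = pd.Timestamp(intervention)
        pre, post = df[df["month"] < cut], df[df["month"] >= cut]

        # Fit a simple linear time trend on the pre-intervention months.
        t_pre = np.arange(len(pre))
        fit = sm.OLS(pre["expenditure"].to_numpy(), sm.add_constant(t_pre)).fit()

        # Project that trend forward as the "no new station" counterfactual.
        t_post = np.arange(len(pre), len(pre) + len(post))
        counterfactual = fit.predict(sm.add_constant(t_post))

        effect = post["expenditure"].to_numpy() - counterfactual
        cum = effect.sum()
        # Crude 95% interval from the residual variance (illustrative only).
        half_width = 1.96 * np.sqrt(fit.mse_resid * len(post))
        return cum, (cum - half_width, cum + half_width)

    # Hypothetical usage with monthly per-capita expenditures around the opening:
    # effect, ci = cumulative_effect(monthly_df, "2018-03-01")
    ```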
    “From the perspective of evidence-based policymaking, there is a need to assess the social impact of urban designs,” said Dr. Kato. “Our findings are an important achievement because they enable us to assess this impact from the perspective of health care expenditures, as in the case of JR-Sojiji Station.”

  • Artificial intelligence tool detects male-female-related differences in brain structure

    Artificial intelligence (AI) computer programs that process MRI results show differences in how the brains of men and women are organized at a cellular level, a new study shows. These variations were spotted in white matter, tissue primarily located in the human brain’s innermost layer, which fosters communication between regions.
    Men and women are known to experience multiple sclerosis, autism spectrum disorder, migraines, and other brain issues at different rates and with varying symptoms. A detailed understanding of how biological sex impacts the brain is therefore viewed as a way to improve diagnostic tools and treatments. However, while brain size, shape, and weight have been explored, researchers have only a partial picture of the brain’s layout at the cellular level.
    Led by researchers at NYU Langone Health, the new study used an AI technique called machine learning to analyze thousands of MRI brain scans from 471 men and 560 women. Results revealed that the computer programs could accurately distinguish between biological male and female brains by spotting patterns in structure and complexity that were invisible to the human eye. The findings were validated by three different AI models designed to identify biological sex using their relative strengths in either zeroing in on small portions of white matter or analyzing relationships across larger regions of the brain.
    “Our findings provide a clearer picture of how a living, human brain is structured, which may in turn offer new insight into how many psychiatric and neurological disorders develop and why they can present differently in men and women,” said study senior author and neuroradiologist Yvonne Lui, MD.
    Lui, a professor and vice chair for research in the Department of Radiology at NYU Grossman School of Medicine, notes that previous studies of brain microstructure have largely relied on animal models and human tissue samples. In addition, the validity of some of these past findings has been called into question for relying on statistical analyses of “hand-drawn” regions of interest, meaning researchers needed to make many subjective decisions about the shape, size, and location of the regions they chose. Such choices can potentially skew the results, says Lui.
    The new study results, publishing online May 14 in the journal Scientific Reports, avoided that problem by using machine learning to analyze entire groups of images without asking the computer to inspect any specific spot, which helped to remove human biases, the authors say.
    For the research, the team started by feeding the AI programs existing examples of brain scans from healthy men and women, along with the biological sex of each scan. Since these models were designed to use complex statistical and mathematical methods to get “smarter” over time as they accumulated more data, they eventually “learned” to distinguish biological sex on their own. Importantly, the programs were restricted from using overall brain size and shape to make their determinations, says Lui.

    According to the results, all of the models correctly identified the sex of subject scans between 92% and 98% of the time. Several features in particular helped the machines make their determinations, including how easily and in what direction water could move through brain tissue.
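    As an illustration of the general approach (not the NYU team's actual models, which are deep networks tailored to diffusion MRI), the sketch below trains a simple classifier on per-region white-matter features and reports cross-validated accuracy. The data, feature layout, and model choice are assumptions.

    ```python
    # Illustrative sketch only: classify biological sex from regional white-matter
    # diffusion features. Data here are random stand-ins, not real MRI metrics.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_subjects, n_regions = 1031, 48              # 471 men + 560 women; 48 ROIs (assumed)
    X = rng.normal(size=(n_subjects, n_regions))  # stand-in for per-region diffusion metrics
    y = rng.integers(0, 2, size=n_subjects)       # biological sex labels (0/1)

    # Only per-region microstructure features are supplied, mirroring the study's
    # restriction against classifying on overall brain size or shape.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```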
    “These results highlight the importance of diversity when studying diseases that arise in the human brain,” said study co-lead author Junbo Chen, MS, a doctoral candidate at NYU Tandon School of Engineering.
    “If, as has been historically the case, men are used as a standard model for various disorders, researchers may miss out on critical insight,” added study co-lead author Vara Lakshmi Bayanagari, MS, a graduate research assistant at NYU Tandon School of Engineering.
    Bayanagari cautions that while the AI tools could report differences in brain-cell organization, they could not reveal which sex was more likely to have which features. She adds that the study classified sex based on genetic information and only included MRIs from cis-gendered men and women.
    According to the authors, the team next plans to explore the development of sex-related brain structure differences over time to better understand environmental, hormonal, and social factors that could play a role in these changes.
    Funding for the study was provided by the National Institutes of Health grants R01NS119767, R01NS131458, and P41EB017183, as well as by the United States Department of Defense grant W81XWH2010699.
    In addition to Lui, Chen, and Bayanagari, other NYU Langone Health and NYU researchers involved in the study were Sohae Chung, PhD, and Yao Wang, PhD.

  • Using artificial intelligence to speed up and improve the most computationally intensive aspects of plasma physics in fusion

    The intricate dance of atoms fusing and releasing energy has fascinated scientists for decades. Now, human ingenuity and artificial intelligence are coming together at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) to solve one of humankind’s most pressing issues: generating clean, reliable energy from fusing plasma.
    Unlike traditional computer code, machine learning — a type of artificially intelligent software — isn’t simply a list of instructions. Machine learning is software that can analyze data, infer relationships between features, learn from this new knowledge and adapt. PPPL researchers believe this ability to learn and adapt could improve their control over fusion reactions in various ways. This includes perfecting the design of vessels surrounding the super-hot plasma, optimizing heating methods and maintaining stable control of the reaction for increasingly long periods.
    The Lab’s artificial intelligence research is already yielding significant results. In a new paper published in Nature Communications, PPPL researchers explain how they used machine learning to avoid magnetic perturbations, or disruptions, which destabilize fusion plasma.
    “The results are particularly impressive because we were able to achieve them on two different tokamaks using the same code,” said PPPL Staff Research Physicist SangKyeun Kim, the lead author of the paper. A tokamak is a donut-shaped device that uses magnetic fields to hold a plasma.
    “There are instabilities in plasma that can lead to severe damage to the fusion device. We can’t have those in a commercial fusion vessel. Our work advances the field and shows that artificial intelligence could play an important role in managing fusion reactions going forward, avoiding instabilities while allowing the plasma to generate as much fusion energy as possible,” said Egemen Kolemen, associate professor in the department of mechanical and aerospace engineering, jointly appointed with the Andlinger Center for Energy and the Environment and the PPPL.
    Important decisions must be made every millisecond to control a plasma and keep a fusion reaction going. Kolemen’s system can make those decisions far faster than a human and automatically adjust the settings for the fusion vessel so the plasma is properly maintained. The system can predict disruptions, figure out what settings to change and then make those changes all before the instabilities occur.
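    A rough sketch of what such a millisecond control loop can look like is below: a trained model scores the risk of an instability a short horizon ahead, and actuator settings are nudged whenever that risk crosses a threshold. The signals, actuators, thresholds, and stand-in model are illustrative assumptions, not PPPL's actual controller.

    ```python
    # Schematic ML-in-the-loop plasma controller. Every control cycle, a trained
    # model scores instability risk; if the risk is too high, settings are adjusted
    # before the event occurs. All names and numbers here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Actuators:
        beam_power_mw: float       # neutral-beam heating power
        coil_current_ka: float     # 3D field-coil current

    class ToyRiskModel:
        """Stand-in for a trained classifier with a scikit-learn-like predict_proba."""
        def predict_proba(self, X):
            beta_n = X[0][1]
            p = min(max((beta_n - 2.0) / 2.0, 0.0), 1.0)   # toy: risk rises with beta_N
            return [[1.0 - p, p]]

    def control_step(model, measurements, act: Actuators, risk_threshold=0.5) -> Actuators:
        """One control cycle (~1 ms): score instability risk, adjust settings if needed."""
        features = [[measurements["density"], measurements["beta_n"],
                     act.beam_power_mw, act.coil_current_ka]]
        risk = model.predict_proba(features)[0][1]          # P(instability soon)
        if risk > risk_threshold:
            act.beam_power_mw *= 0.95       # back off heating slightly
            act.coil_current_ka *= 1.05     # strengthen the stabilizing 3D field
        return act

    act = control_step(ToyRiskModel(), {"density": 5e19, "beta_n": 3.2}, Actuators(2.0, 1.0))
    print(act)
    ```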
    Kolemen notes that the results are also impressive because, in both cases, the plasma was in a high-confinement mode. Also known as H-mode, this occurs when a magnetically confined plasma is heated enough that the confinement of the plasma suddenly and significantly improves, and the turbulence at the plasma’s edge effectively disappears. H-mode is the hardest mode to stabilize but also the mode that will be necessary for commercial power generation.

    The system was successfully deployed on two tokamaks, DIII-D and KSTAR, which both achieved H-mode without instabilities. This is the first time that researchers achieved this feat in a reactor setting that is relevant to what will be needed to deploy fusion power on a commercial scale.
    Image caption: Machine learning code that detects and eliminates plasma instabilities was deployed in two tokamaks, DIII-D and KSTAR. (Credit: General Atomics and Korea Institute of Fusion Energy)
    PPPL has a significant history of using artificial intelligence to tame instabilities. PPPL Principal Research Physicist William Tang and his team were the first to demonstrate the ability to transfer this process from one tokamak to another in 2019.
    “Our work achieved breakthroughs using artificial intelligence and machine learning together with powerful, modern high-performance computing resources to integrate vast quantities of data in thousandths of a second and develop models for dealing with disruptive physics events well before their onset,” Tang said. “You can’t effectively combat disruptions in more than a few milliseconds. That would be like starting to treat a fatal cancer after it’s already too far along.”
    The work was detailed in an influential paper published in Nature in 2019. Tang and his team continue to work in this area, with an emphasis on eliminating real-time disruptions in tokamaks using machine learning models trained on properly verified and validated observational data.
    A new twist on stellarator design
    PPPL’s artificial intelligence projects for fusion extend beyond tokamaks. PPPL’s Head of Digital Engineering, Michael Churchill, uses machine learning to improve the design of another type of fusion reactor, a stellarator. If tokamaks look like donuts, stellarators could be seen as the crullers of the fusion world with a more complex, twisted design.

    “We need to leverage a lot of different codes when we’re validating the design of a stellarator. So the question becomes, ‘What are the best codes for stellarator design and the best ways to use them?'” Churchill said. “It’s a balancing act between the level of detail in the calculations and how quickly they produce answers.”
    Current simulations for tokamaks and stellarators come close to the real thing but aren’t yet twins. “We know that our simulations are not 100% true to the real world. Many times, we know that there are deficiencies. We think that it captures a lot of the dynamics that you would see on a fusion machine, but there’s quite a bit that we don’t.”
    Churchill said ideally, you want a digital twin: a system with a feedback loop between simulated digital models and real-world data captured in experiments. “In a useful digital twin, that physical data could be used and leveraged to update the digital model in order to better predict what future performance would be like.”
    Unsurprisingly, mimicking reality requires a lot of very sophisticated code. The challenge is that the more complicated the code, the longer it typically takes to run. For example, a commonly used code called X-Point Included Gyrokinetic Code (XGC) can only run on advanced supercomputers, and even then, it doesn’t run quickly. “You’re not going to run XGC every time you run a fusion experiment unless you have a dedicated exascale supercomputer. We’ve probably run it on 30 to 50 plasma discharges [of the thousands we have run],” Churchill said.
    That’s why Churchill uses artificial intelligence to accelerate different codes and the optimization process itself. “We would really like to do higher-fidelity calculations but much faster so that we can optimize quickly,” he said.
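    The sketch below shows the general surrogate-modeling pattern Churchill describes: a fast regressor is trained on a limited number of expensive simulation runs and then queried inside an optimization loop in place of the full code. The toy "expensive_simulation" function and model choice are stand-ins, not XGC or any PPPL code.

    ```python
    # Illustrative surrogate-modeling sketch: train a cheap regressor on a modest
    # number of expensive runs, then optimize against the surrogate.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_simulation(params: np.ndarray) -> float:
        """Placeholder for a high-fidelity physics code (hours on a supercomputer)."""
        x, y = params
        return np.sin(3 * x) * np.cos(2 * y) + 0.1 * x * y

    rng = np.random.default_rng(1)
    train_params = rng.uniform(-1, 1, size=(200, 2))            # a few hundred runs
    train_outputs = np.array([expensive_simulation(p) for p in train_params])

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    surrogate.fit(train_params, train_outputs)

    # Optimization step: screen thousands of candidates with the cheap surrogate,
    # then confirm only the best one with the expensive code.
    candidates = rng.uniform(-1, 1, size=(10_000, 2))
    best = candidates[np.argmax(surrogate.predict(candidates))]
    print("surrogate optimum:", best, "checked value:", expensive_simulation(best))
    ```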
    Coding to optimize code
    Similarly, Research Physicist Stefano Munaretto’s team is using artificial intelligence to accelerate a code called HEAT, which was originally developed by the DOE’s Oak Ridge National Laboratory and the University of Tennessee-Knoxville for PPPL’s tokamak NSTX-U.
    HEAT is being updated so that the plasma simulation will be 3D, matching the 3D computer-aided design (CAD) model of the tokamak divertor. Located at the base of the fusion vessel, the divertor extracts heat and ash generated during the reaction. A 3D plasma model should enhance understanding of how different plasma configurations can impact heat fluxes or the movement patterns of heat in the tokamak. Understanding the movement of heat for a specific plasma configuration can provide insights into how heat will likely travel in a future discharge with a similar plasma.
    By optimizing HEAT, the researchers hope to quickly run the complex code between plasma shots, using information about the last shot to decide the next.
    “This would allow us to predict the heat fluxes that will appear in the next shot and to potentially reset the parameters for the next shot so the heat flux isn’t too intense for the divertor,” Munaretto said. “This work could also help us design future fusion power plants.”
    PPPL Associate Research Physicist Doménica Corona Rivera has been deeply involved in the effort to optimize HEAT. The key is narrowing down a wide range of input parameters to just four or five so the code will be streamlined yet highly accurate. “We have to ask, ‘Which of these parameters are meaningful and are going to really be impacting heat?'” said Corona Rivera. Those are the key parameters used to train the machine learning program.
    With support from Churchill and Munaretto, Corona Rivera has already greatly reduced the time it takes to run the heat-flux calculations while keeping the results roughly 90% in sync with those from the original version of HEAT. “It’s instantaneous,” she said.
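    A hedged sketch of that parameter-narrowing step is shown below: permutation importance on a set of simulation samples ranks which inputs actually drive the predicted heat flux, so only the top handful need to be kept as surrogate inputs. The parameter names and data are hypothetical, not the actual HEAT inputs.

    ```python
    # Illustrative sketch of narrowing a wide input space to the few parameters
    # that drive the predicted heat flux. Names and data are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(2)
    names = ["plasma_current", "strike_angle", "heating_power", "density",
             "elongation", "triangularity", "toroidal_field", "gas_puff_rate"]
    X = rng.uniform(size=(500, len(names)))                  # 500 simulated runs
    # Toy ground truth: heat flux dominated by a few of the inputs.
    y = 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 0.05, 500)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    ranked = sorted(zip(names, imp.importances_mean), key=lambda t: -t[1])
    print("top drivers of heat flux:", [n for n, _ in ranked[:4]])
    ```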
    Finding the right conditions for ideal heating
    Researchers are also trying to find the best conditions to heat the ions in the plasma by perfecting a technique known as ion cyclotron radio frequency heating (ICRF). This type of heating focuses on heating up the big particles in the plasma — the ions.
    Plasma has different properties, such as density, pressure, temperature and the intensity of the magnetic field. These properties change how the waves interact with the plasma particles and determine the waves’ paths and areas where the waves will heat the plasma. Quantifying these effects is crucial to controlling the radio frequency heating of the plasma so that researchers can ensure the waves move efficiently through the plasma to heat it in the right areas.
    The problem is that the standard codes used to simulate the plasma and radio wave interactions are very complicated and run too slowly to be used to make real-time decisions.
    “Machine learning brings us great potential here to optimize the code,” said Álvaro Sánchez Villar, an associate research physicist at PPPL. “Basically, we can control the plasma better because we can predict how the plasma is going to evolve, and we can correct it in real-time.”
    The project focuses on trying different kinds of machine learning to speed up a widely used physics code. Sánchez Villar and his team showed multiple accelerated versions of the code for different fusion devices and types of heating. The models can find answers in microseconds instead of minutes with minimal impact on the accuracy of the results. Sánchez Villar and his team were also able to use machine learning to eliminate challenging scenarios with the optimized code.
    Sánchez Villar says the code’s accuracy, “increased robustness” and acceleration make it well suited for integrated modeling, in which many physics codes are used together, and real-time control applications, which are crucial for fusion research.
    Enhancing our understanding of the plasma’s edge
    PPPL Principal Research Physicist Fatima Ebrahimi is the principal investigator on a four-year project for the DOE’s Advanced Scientific Computing Research program, part of the Office of Science, which uses experimental data from various tokamaks, plasma simulation data and artificial intelligence to study the behavior of the plasma’s edge during fusion. The team hopes their findings will reveal the most effective ways to confine a plasma on a commercial-scale tokamak.
    While the project has multiple goals, the aim is clear from a machine learning perspective. “We want to explore how machine learning can help us take advantage of all our data and simulations so we can close the technological gaps and integrate a high-performance plasma into a viable fusion power plant system,” Ebrahimi said.
    There is a wealth of experimental data gathered from tokamaks worldwide while the devices operated in a state free from large-scale instabilities at the plasma’s edge known as edge-localized modes (ELMs). Such momentary, explosive ELMs need to be avoided because they can damage the inner components of a tokamak, draw impurities from the tokamak walls into the plasma and make the fusion reaction less efficient. The question is how to achieve an ELM-free state in a commercial-scale tokamak, which will be much larger and run much hotter than today’s experimental tokamaks.
    Ebrahimi and her team will combine the experimental results with information from plasma simulations that have already been validated against experimental data to create a hybrid database. The database will then be used to train machine learning models about plasma management, which can then be used to update the simulation.
    “There is some back and forth between the training and the simulation,” Ebrahimi explained. By running a high-fidelity simulation of the machine learning model on supercomputers, the researchers can then hypothesize about scenarios beyond those covered by the existing data. This could provide valuable insights into the best ways to manage the plasma’s edge on a commercial scale.
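    That back-and-forth can be pictured as a simple active-learning loop: a model trained on the hybrid database proposes the scenario it is least certain about, a high-fidelity simulation labels it, and the database grows. The sketch below is a generic illustration under those assumptions; the simulator, features, and selection rule are stand-ins, not the project's actual workflow.

    ```python
    # Generic training/simulation loop: train, propose an uncertain scenario,
    # label it with a (placeholder) validated simulation, and retrain.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def run_validated_simulation(scenario: np.ndarray) -> int:
        """Placeholder for a validated edge-plasma simulation (1 = ELM-free)."""
        return int(scenario[0] + 0.5 * scenario[1] > 1.0)

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(300, 2))                           # hybrid database inputs
    y = np.array([run_validated_simulation(s) for s in X])   # labels (here: simulated)

    for _ in range(3):                                       # a few train/simulate cycles
        model = GradientBoostingClassifier(random_state=0).fit(X, y)
        pool = rng.uniform(size=(2000, 2))
        proba = model.predict_proba(pool)[:, 1]
        pick = pool[np.argmin(np.abs(proba - 0.5))]          # most uncertain scenario
        X = np.vstack([X, pick])
        y = np.append(y, run_validated_simulation(pick))
    ```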
    This research was conducted with the following DOE grants: DE-SC0020372, DE-SC0024527, DE-AC02-09CH11466, DE-AC52-07NA27344, DE-AC05-00OR22725, DE-FG02-99ER54531, DE-SC0022270, DE-SC0022272, DE-SC0019352 and DE-FC02-04ER54698. This research was also supported by the research and design program of KSTAR Experimental Collaboration and Fusion Plasma Research (EN2401-15) through the Korea Institute of Fusion Energy.
    This story includes contributions by John Greenwald.

  • Speedy, secure, sustainable — that’s the future of telecom

    Advanced information processing technologies offer greener telecommunications and strong data security for millions, a study led by University of Maryland (UMD) researchers revealed.
    A new device that can process information using a small amount of light could enable energy-efficient and secure communications. Work led by You Zhou, an assistant professor in UMD’s Department of Materials Science and Engineering (MSE), in collaboration with researchers at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, was published today in the journal Nature Photonics.
    Optical switches, the devices responsible for sending information via telephone signals, rely on light as a transmission medium and on electricity as a processing tool, requiring extra energy to interpret the data. A new alternative engineered by Zhou uses only light to power a full transmission, which could improve speed and energy efficiency for telecommunications and computation platforms.
    Early tests of this technology have shown significant energy improvements. While conventional optical switches require between 10 and 100 femtojoules to enable a communication transmission, Zhou’s device consumes about one hundred times less energy, roughly one tenth of a femtojoule to one femtojoule. Building a prototype that enables information processing using small amounts of light, via a material’s property known as “non-linear response,” paved the way for new opportunities in his research group.
    “Achieving strong non-linearity was unexpected, which opened a new direction that we were not previously exploring: quantum communications,” said Zhou.
    To build the device, Zhou used the Quantum Material Press (QPress) at the Center for Functional Nanomaterials (CFN), a DOE Office of Science user facility at Brookhaven Lab that offers free access to world-class equipment for scientists conducting open research. The QPress is an automated tool for synthesizing quantum materials with layers as thin as a single atom.
    “We have been collaborating with Zhou’s group for several years. They are one of the earliest adopters of our QPress modules, which include an exfoliator, cataloger, and stacker,” said co-author Suji Park, a staff scientist in the Electronic Nanomaterials Group at CFN. “Specifically, we have provided high-quality exfoliated flakes tailored to their requests, and we worked together closely to optimize the exfoliation conditions for their materials. This partnership has significantly enhanced their sample fabrication process.”
    Next up, Zhou’s research team aims to push the device’s energy consumption down to the smallest possible amount of electromagnetic energy, a main challenge in enabling so-called quantum communications, which offer a promising alternative for data security.

    In the wake of rising cyberattacks, building sophisticated protection against hackers has attracted growing scientific interest. Data transmitted over conventional communication channels can be read and copied without leaving a trace, a vulnerability behind thousands of breaches affecting some 350 million users last year, according to a recent Statista report.
    Quantum communications, on the other hand, offer a promising alternative as they encode the information using light, which cannot be intercepted without altering its quantum state. Zhou’s method to improve materials’ nonlinearity is a step closer to enabling those technologies.
    This study was supported by the DOE Office of Science and the National Science Foundation.
    Editor’s Note: This news release is being jointly issued by the University of Maryland and Brookhaven National Laboratory.

  • Artificial intelligence tool to improve heart failure care

    UVA Health researchers have developed a powerful new risk assessment tool for predicting outcomes in heart failure patients. The researchers have made the tool publicly available for free to clinicians.
    The new tool improves on existing risk assessment tools for heart failure by harnessing the power of machine learning (ML) and artificial intelligence (AI) to determine patient-specific risks of developing unfavorable outcomes with heart failure.
    “Heart failure is a progressive condition that affects not only quality of life but quantity as well. All heart failure patients are not the same. Each patient is on a spectrum along the continuum of risk of suffering adverse outcomes,” said researcher Sula Mazimba, MD, a heart failure expert. “Identifying the degree of risk for each patient promises to help clinicians tailor therapies to improve outcomes.”
    About Heart Failure
    Heart failure occurs when the heart is unable to pump enough blood for the body’s needs. This can lead to fatigue, weakness, swollen legs and feet and, ultimately, death. Heart failure is a progressive condition, so it is extremely important for clinicians to be able to identify patients at risk of adverse outcomes.
    Further, heart failure is a growing problem. More than 6 million Americans already have heart failure, and that number is expected to increase to more than 8 million by 2030. The UVA researchers developed their new model, called CARNA, to improve care for these patients. (Finding new ways to improve care for patients across Virginia and beyond is a key component of UVA Health’s first-ever 10-year strategic plan.)
    The researchers developed their model using anonymized data drawn from thousands of patients enrolled in heart failure clinical trials previously funded by the National Institutes of Health’s National Heart, Lung and Blood Institute. Putting the model to the test, they found it outperformed existing predictors for determining how a broad spectrum of patients would fare in areas such as the need for heart surgery or transplant, the risk of rehospitalization and the risk of death.

    The researchers attribute the model’s success to the use of ML/AI and the inclusion of “hemodynamic” clinical data, which describe how blood circulates through the heart, lungs and the rest of the body.
    “This model presents a breakthrough because it ingests complex sets of data and can make decisions even among missing and conflicting factors,” said researcher Josephine Lamp, of the University of Virginia School of Engineering’s Department of Computer Science. “It is really exciting because the model intelligently presents and summarizes risk factors, reducing decision burden so clinicians can quickly make treatment decisions.”
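    To make the idea concrete, the sketch below trains a gradient-boosting risk model that tolerates missing values natively on a handful of hemodynamic-style features. It is an illustrative stand-in, not the CARNA model itself (which is linked later in this story), and the feature names and data are hypothetical.

    ```python
    # Illustrative risk-model sketch: gradient boosting handles missing values
    # (NaNs) without imputation. Data and labels here are simulated placeholders.
    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    features = ["pulmonary_capillary_wedge_pressure", "cardiac_index",
                "right_atrial_pressure", "systolic_bp", "heart_rate"]
    X = rng.normal(size=(2000, len(features)))
    X[rng.uniform(size=X.shape) < 0.15] = np.nan      # ~15% missing, as in real charts
    y = rng.integers(0, 2, size=2000)                 # 1 = adverse outcome (toy labels)

    model = HistGradientBoostingClassifier()          # accepts NaNs natively
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"cross-validated AUC: {auc.mean():.2f}")
    ```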
    By using the model, doctors will be better equipped to personalize care to individual patients, helping them live longer, healthier lives, the researchers hope.
    “The collaborative research environment at the University of Virginia made this work possible by bringing together experts in heart failure, computer science, data science and statistics,” said researcher Kenneth Bilchick, MD, a cardiologist at UVA Health. “Multidisciplinary biomedical research that integrates talented computer scientists like Josephine Lamp with experts in clinical medicine will be critical to helping our patients benefit from AI in the coming years and decades.”
    Findings Published
    The researchers have made their new tool available online for free at https://github.com/jozieLamp/CARNA.
    In addition, they have published the results of their evaluation of CARNA in the American Heart Journal. The research team consisted of Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Nicholas Ashur, Bilchick, Khadijah Breathett, Younghoon Kwon, Song Li, Nishaki Mehta, Edward Rojas Pena, Lu Feng and Mazimba. The researchers have no financial interest in the work.
    The project was based on one of the winning submissions to the National Heart, Lung and Blood Institute’s Big Data Analysis Challenge: Creating New Paradigms for Heart Failure Research. The work was supported by the National Science Foundation Graduate Research Fellowship, grant 842490, and NHLBI grants R56HL159216, K01HL142848 and L30HL148881.
    To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog.

  • An easy pill to swallow — new 3D printing research paves way for personalized medication

    A new technique for 3D printing medication has enabled the printing of multiple drugs in a single tablet, paving the way for personalised pills that can deliver timed doses.
    Researchers from the University of Nottingham’s Centre for Additive Manufacturing have led research alongside the School of Pharmacy that has fabricated personalised medicine using Multi-Material InkJet 3D Printing (MM-IJ3DP). The research has been published in Materials Today Advances.
    The team have developed a cutting-edge method that enables the fabrication of customised pharmaceutical tablets with tailored drug release profiles, ensuring more precise and effective treatment options for patients.
    Using Multi-Material InkJet 3D Printing (MM-IJ3DP), tablets can be printed that release drugs at a controlled rate, determined by the tablet’s design. This is made possible by a novel ink formulation based on molecules that are sensitive to ultraviolet light. When printed, these molecules form a water-soluble structure.
    The drug release rate is controlled by the unique interior structure of the tablet, allowing for timing the dosage release. This method can print multiple drugs in a single tablet, allowing for complex medication regimens to be simplified into a single dose.
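    The timed-dose idea can be pictured with a generic release model: a drug printed deeper inside the water-soluble structure starts releasing only after the material above it has dissolved. The sketch below uses a simple lag-plus-first-order release curve for intuition only; it is not the release model from the Nottingham study, and the numbers are invented.

    ```python
    # Toy illustration of timed doses from one tablet: two drugs at different
    # depths release with different lag times and rates. Generic model, made-up numbers.
    import numpy as np

    def cumulative_release(t_hours, lag_h, rate_per_h):
        """Fraction released vs time: nothing until the layer above dissolves
        (lag), then first-order release."""
        t = np.asarray(t_hours, dtype=float)
        return np.where(t < lag_h, 0.0, 1.0 - np.exp(-rate_per_h * (t - lag_h)))

    t = np.linspace(0, 12, 49)                                   # 12 hours, 15-min steps
    drug_a = cumulative_release(t, lag_h=0.0, rate_per_h=1.2)    # outer layer, fast
    drug_b = cumulative_release(t, lag_h=4.0, rate_per_h=0.6)    # inner core, delayed
    print(f"at 6 h: drug A {drug_a[24]:.0%} released, drug B {drug_b[24]:.0%} released")
    ```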
    Dr Yinfeng He, Assistant Professor in the Faculty of Engineering’s Centre for Additive Manufacturing, led the research. He said: “This is an exciting step forwards in the development of personalised medication. This breakthrough not only highlights the potential of 3D printing in revolutionizing drug delivery but also opens up new avenues for the development of next-generation personalized medicines.”
    “While promising, the technology faces challenges, including the need for more formulations that support a wider range of materials. The ongoing research aims to refine these aspects, enhancing the feasibility of MM-IJ3DP for widespread application.” Professor Ricky Wildman added.
    This technology will be particularly beneficial in creating medication that needs to release drugs at specific times, making it ideal for treating diseases where timing and dosage accuracy are crucial. The ability to print 56 pills in a single batch demonstrates the scalability of this technology, providing a strong potential for the production of personalised medicines.
    Professor Felicity Rose at the University of Nottingham’s School of Pharmacy was one of the co-authors on the research. She said: “The future of prescribed medication lies in a personalised approach, and we know that up to 50% of people in the UK alone don’t take their medicines correctly, and this has an impact on poorer health outcomes, with conditions not being controlled or properly treated. A single pill approach would simplify taking multiple medications at different times and this research is an exciting step towards that.”

  • Century of statistical ecology reviewed

    Crunching numbers isn’t exactly how Neil Gilbert, a postdoctoral researcher at Michigan State University, envisioned a career in ecology.
    “I think it’s a little funny that I’m doing this statistical ecology work because I was always OK at math, but never particularly enjoyed it,” he explained. “As an undergrad, I thought, I’ll be an ecologist — that means that I can be outside, looking at birds, that sort of thing.”
    “As it turns out,” he chuckled, “ecology is a very quantitative discipline.”
    Now, working in the Zipkin Quantitative Ecology lab, Gilbert is the lead author on a new article in a special collection of the journal Ecology that reviews the past century of statistical ecology.
    Statistical ecology, or the study of ecological systems using mathematical equations, probability and empirical data, has grown over the last century. As increasingly large datasets and complex questions took center stage in ecological research, new tools and approaches were needed to properly address them.
    To better understand how statistical ecology changed over the last century, Gilbert and his fellow authors examined a selection of 36 highly cited papers on statistical ecology — all published in Ecology since its inception in 1920.
    The team’s paper examines work on statistical models across a range of ecological scales from individuals to populations, communities, ecosystems and beyond. The team also reviewed publications providing practical guidance on applying models. Gilbert noted that because, “many practicing ecologists lack extensive quantitative training,” such publications are key to shaping studies.

    Ecology is an advantageous place for such papers because it is one of “the first internationally important journals in the field. It has played an outsized role in publishing important work,” said lab leader Elise Zipkin, a Red Cedar Distinguished Associate Professor in the Department of Integrative Biology.
    “It has a reputation of publishing some of the most influential papers on the development and application of analytical techniques from the very beginning of modern ecological research.”
    The team found a persistent evolution of models and concepts in the field, especially over the past few decades, driven by refinements in techniques and exponential increases in computational power.
    “Statistical ecology has exploded in the last 20 to 30 years because of advances in both data availability and the continued improvement of high-performance computing clusters,” Gilbert explained.
    Included among the 36 reviewed papers were a landmark 1945 study by Lee R. Dice on predicting the co-occurrence of species in space — Ecology’s most highly cited paper of all time — and an influential 2002 paper led by Darryl MacKenzie on occupancy models. Ecologists use these models to identify the range and distribution of species in an environment.
    MacKenzie’s work on species detection and sampling “played an outsized role in the study of species distributions,” says Zipkin. MacKenzie’s paper, which has been cited more than 5,400 times, spawned various software packages that are now widely used by ecologists, she explained.
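    For readers unfamiliar with occupancy models, the sketch below simulates survey data and fits the basic single-season model: each site is occupied with probability psi, and an occupied site is detected on any one survey with probability p, so an all-zero detection history can mean either true absence or repeated misses. The simulated data and simple optimizer are illustrative only, not the MacKenzie paper's software.

    ```python
    # Minimal single-season occupancy model: simulate detection histories, then
    # recover occupancy (psi) and detection (p) probabilities by maximum likelihood.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(5)
    n_sites, n_surveys, true_psi, true_p = 200, 4, 0.6, 0.3
    z = rng.binomial(1, true_psi, n_sites)                        # latent occupancy state
    detections = rng.binomial(1, true_p * z[:, None], (n_sites, n_surveys))

    def neg_log_lik(params):
        psi, p = expit(params)                                    # keep probabilities in (0, 1)
        d = detections.sum(axis=1)
        k = n_surveys
        # Per-site likelihood: occupied and detected d times out of k surveys,
        # plus (if never detected) the possibility the site was simply unoccupied.
        lik = psi * p**d * (1 - p)**(k - d) + (d == 0) * (1 - psi)
        return -np.log(lik).sum()

    fit = minimize(neg_log_lik, x0=[0.0, 0.0])
    print("estimated psi, p:", np.round(expit(fit.x), 2))
    ```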

  • Coming out to a chatbot?

    Today, there are dozens of large language model (LLM) chatbots aimed at mental health care — addressing everything from loneliness among seniors to anxiety and depression in teens.
    But the efficacy of these apps is unclear. Even more unclear is how well these apps work in supporting specific, marginalized groups like LGBTQ+ communities.
    A team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences, Emory University, Vanderbilt University and the University of California, Irvine found that while large language models can offer fast, on-demand support, they frequently fail to grasp the specific challenges that many members of the LGBTQ+ community face.
    That failure could lead the chatbot to give at best unhelpful and at worst dangerous advice.
    The paper is being presented this week at the ACM (Association for Computing Machinery) Conference on Human Factors in Computing Systems in Honolulu, Hawai’i.
    The researchers interviewed 31 participants — 18 identifying as LGBTQ+ and 13 as non-LGBTQ+ — about their usage of LLM-based chatbots for mental health support and how the chatbots supported their individual needs.
    On one hand, many participants reported that the chatbots offered a sense of solidarity and a safe space to explore and express their identities. Some used the chatbots for practice coming out to friends and family, others to practice asking someone out for the first time.

    But many of the participants also noted the programs’ shortfalls.
    One participant wrote, “I don’t think I remember any time that it gave me a solution. It will just be like empathetic. Or maybe, if I would tell it that I’m upset with someone being homophobic. It will suggest, maybe talking to that person. But most of the time it just be like, ‘I’m sorry that happened to you.'”
    “The boilerplate nature of the chatbots’ responses highlights their failure to recognize the complex and nuanced LGBTQ+ identities and experiences, making the chatbots’ suggestions feel emotionally disengaged,” said Zilin Ma, a PhD student at SEAS and co-first author of the paper.
    Because these chatbots tend to be sycophantic, said Ma, they’re actually very bad at simulating hostility, which makes them ill-suited to practice potentially fraught conversations like coming out.
    They also gave some participants staggeringly bad advice — telling one person to quit their job after experiencing workplace homophobia, without considering the financial or personal consequences.
    Ma, who is in the lab of Krzysztof Gajos, the Gordon McKay Professor of Computer Science, stressed that while there are ways to improve these programs, it is not a panacea.

    “There are ways we could improve these limitations by fine tuning the LLMs for contexts relevant to LGBTQ+ users or implementing context-sensitive guardrails or regularly updating feedback loops, but we wonder if this tendency to implement technology at every aspect of social problem is the right approach,” said Ma. “We can optimize all these LLMs all we want but there are aspects of LGBTQ+ mental health that cannot be solved with LLM chatbots — such as discrimination, bullying, the stress of coming out or the lack of representation. For that, we need a holistic support system for LGBTQ+ people.”
    One area where LLM chatbots could be useful is in the training of human counselors or online community moderators.
    “Rather than having teens in crisis talk to the chatbot directly, you could use the chatbot to train counselors,” said Ma. “Then you have a real human to talk to, but it empowers the counselors with technology, which is a socio-technical solution which I think works well in this case.”
    “Research in public health suggests that interventions that directly target the affected individuals — like the chatbots for improving individual well-being — risk leaving the most vulnerable people behind,” said Gajos. “It is harder but potentially more impactful to change the communities themselves through training counselors or online community moderators.”
    The research was co-authored by Yiyang Mei, Yinru Long, Zhaoyuan “Nick” Su and Gajos.