More stories


    Researchers harness the power of artificial intelligence to match patients with the most effective antidepressant for their unique needs

    Researchers in George Mason University’s College of Public Health have leveraged the power of artificial intelligence (AI) analytical models to match a patient’s medical history to the most effective antidepressant, helping patients find symptom relief sooner. The free website, MeAgainMeds.com, provides evidence-based recommendations, allowing clinicians and patients to find the optimal antidepressant the first time.
    “Many people with depression must try multiple antidepressants before finding the right one that alleviates their symptoms. Our website reduces the number of medications that patients are asked to try. The system recommends to the patient what has worked for at least 100 other patients with the exact same relevant medical history,” said Farrokh Alemi, principal investigator and professor of health informatics at George Mason University’s College of Public Health.
    AI simplifies the otherwise complex task of making thousands of guidelines easily accessible to patients and clinicians. The guidelines the researchers created are intricate because of the amount of clinical information that is relevant to prescribing an antidepressant; AI makes navigating them straightforward.
    With AI at its core, MeAgainMeds.com analyzes clinician or patient responses to a few anonymous medical history questions to determine which oral antidepressant would best meet the specific needs. The website does not ask for any personal identifiable information and it does not prescribe medication changes. Patients are advised to visit their primary health care provider for any changes in medication.
    In 2018, the Centers for Disease Control and Prevention reported that more than 13% of adults use antidepressants, and that number has only increased since the COVID-19 pandemic began in 2020. This website could help millions of people find relief more quickly.
    Alemi and his team analyzed 3,678,082 patients who took 10,221,145 antidepressants. The oral antidepressants analyzed were amitriptyline, bupropion, citalopram, desvenlafaxine, doxepin, duloxetine, escitalopram, fluoxetine, mirtazapine, nortriptyline, paroxetine, sertraline, trazodone, and venlafaxine. From the data, they created 16,770 subgroups of at least 100 cases, using reactions to prior antidepressants, current medication, history of physical illness, history of mental illness, key procedures, and other information. The subgroups and remission rates drive the AI to produce an evidence-based medication recommendation.
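    The matching logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the team’s actual system: the history features, drug names, and record format below are hypothetical.

```python
from collections import defaultdict

MIN_CASES = 100  # minimum subgroup size, as used in the study

def build_subgroups(records):
    """Group patient records by their relevant medical history and
    tally remissions per antidepressant within each subgroup.
    Each record is (history, drug, remitted)."""
    groups = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # history -> drug -> [remissions, cases]
    for history, drug, remitted in records:
        stats = groups[history][drug]
        stats[1] += 1
        if remitted:
            stats[0] += 1
    return groups

def recommend(groups, history):
    """Recommend the drug with the highest remission rate among drugs
    tried by at least MIN_CASES patients with the same history."""
    best_drug, best_rate = None, -1.0
    for drug, (remissions, cases) in groups.get(history, {}).items():
        if cases >= MIN_CASES:
            rate = remissions / cases
            if rate > best_rate:
                best_drug, best_rate = drug, rate
    return best_drug, best_rate
```

    In this sketch a drug is only recommended if enough matching patients have tried it, mirroring the 100-case floor the study imposed on its subgroups.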
    “By matching patients to the subgroups, clinicians can prescribe the medication that works best for people with similar medical history,” said Alemi. The researchers and website recommend that patients who use the site take the information to their clinicians, who will ultimately decide whether to prescribe the recommended medicine.
    Alemi and his team tested a prototype version of the site in 2023, which they advertised on social media. At that time, 1,500 patients used the website. Their goal is to improve the website and expand its user base. The initial research was funded by the Commonwealth of Virginia and by the Robert Wood Johnson Foundation.
    The researchers’ most recent paper in a series on response to antidepressants analyzed 2,467 subgroups of patients who had received psychotherapy. “Effectiveness of Antidepressants in Combination with Psychotherapy” was published online in The Journal of Mental Health Policy and Economics in March 2024. Additional authors include Tulay G. Soylu of Temple University, and Mary Cannon and Conor McCandless of the Royal College of Surgeons in Dublin, Ireland.


    AI saving humans from the emotional toll of monitoring hate speech

    A team of researchers at the University of Waterloo has developed a new machine-learning method that detects hate speech on social media platforms with 88 per cent accuracy, saving employees hundreds of hours of emotionally damaging work.
    The method, dubbed the Multi-Modal Discussion Transformer (mDT), can understand the relationship between text and images as well as put comments in greater context, unlike previous hate speech detection methods. This is particularly helpful in reducing false positives, in which comments are incorrectly flagged as hate speech because they contain culturally sensitive language.
    “We really hope this technology can help reduce the emotional cost of having humans sift through hate speech manually,” said Liam Hebert, a Waterloo computer science PhD student and the first author of the study. “We believe that by taking a community-centred approach in our applications of AI, we can help create safer online spaces for all.”
    Researchers have been building models to analyze the meaning of human conversations for many years, but these models have historically struggled to understand nuanced conversations or contextual statements. Previous models have only been able to identify hate speech with as much as 74 per cent accuracy, below what the Waterloo research was able to accomplish.
    “Context is very important when understanding hate speech,” Hebert said. “For example, the comment ‘That’s gross!’ might be innocuous by itself, but its meaning changes dramatically if it’s in response to a photo of pizza with pineapple versus a person from a marginalized group.
    “Understanding that distinction is easy for humans, but training a model to understand the contextual connections in a discussion, including considering the images and other multimedia elements within them, is actually a very hard problem.”
    Unlike previous efforts, the Waterloo team built and trained their model on a dataset consisting not only of isolated hateful comments but also the context for those comments. The model was trained on 8,266 Reddit discussions with 18,359 labelled comments from 850 communities.
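    The role context plays can be illustrated with a deliberately simplified scoring rule. This toy blend of a comment’s own toxicity score with its discussion’s mean score is only a stand-in for mDT’s transformer-based fusion; the weights and score values below are invented for illustration.

```python
def contextual_score(self_score, context_scores, w_self=0.6, w_ctx=0.4):
    """Blend a comment's standalone toxicity score with the mean score
    of its surrounding discussion -- a toy stand-in for mDT's
    transformer-based context fusion."""
    if not context_scores:
        return self_score
    ctx = sum(context_scores) / len(context_scores)
    return w_self * self_score + w_ctx * ctx

# "That's gross!" is ambiguous in isolation...
ambiguous = 0.5
# ...but context shifts it: a post targeting a person vs. pineapple pizza.
targeting_person = contextual_score(ambiguous, [0.9])   # pushed toward hateful
about_pizza = contextual_score(ambiguous, [0.05])       # pushed toward innocuous
```

    With a flagging threshold of, say, 0.6, the same comment is flagged in one discussion and left alone in the other, which is exactly the distinction Hebert describes.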
    “More than three billion people use social media every day,” Hebert said. “The impact of these social media platforms has reached unprecedented levels. There’s a huge need to detect hate speech on a large scale to build spaces where everyone is respected and safe.”
    The research, “Multi-Modal Discussion Transformer: Integrating Text, Images and Graph Transformers to Detect Hate Speech on Social Media,” was recently published in the proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence.


    Social media use and sleep duration connected to brain activity in teens

    A new study to be presented at the SLEEP 2024 annual meeting found a distinct relationship between sleep duration, social media usage, and brain activation across brain regions that are key for executive control and reward processing.
    Results show a correlation between shorter sleep duration and greater social media usage in teens. The analysis points to involvement of areas within the frontolimbic brain regions, such as the inferior and middle frontal gyri, in these relationships. The inferior frontal gyrus, key in inhibitory control, may play a crucial role in how adolescents regulate their engagement with rewarding stimuli such as social media. The middle frontal gyrus, involved in executive functions and critical in assessing and responding to rewards, is essential in managing decisions related to the balancing of immediate rewards from social media with other priorities like sleep. These results suggest a nuanced interaction between specific brain regions during adolescence and their influence on behavior and sleep in the context of digital media usage.
    “As these young brains undergo significant changes, our findings suggest that poor sleep and high social media engagement could potentially alter neural reward sensitivity,” said Orsolya Kiss, who has a doctorate in cognitive psychology and is a research scientist at SRI International in Menlo Park, California. “This intricate interplay shows that both digital engagement and sleep quality significantly influence brain activity, with clear implications for adolescent brain development.”
    This study involved data from 6,516 adolescents, ages 10-14 years, from the Adolescent Brain Cognitive Development Study. Sleep duration was assessed from the Munich Chronotype questionnaire, and recreational social media use through the Youth Screen Time Survey. Brain activities were analyzed from functional MRI scans during the monetary incentive delay task, targeting regions associated with reward processing. The study used three different sets of models and switched predictors and outcomes each time. Results were adjusted for age, COVID-19 pandemic timing, and socio-demographic characteristics.
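    The strategy of fitting models with predictors and outcomes swapped can be illustrated with two elementary tools. The numbers below are invented; the point is that correlation is symmetric while regression slopes are not, which is one reason to run the association in both directions.

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(x, y):
    """Pearson correlation -- symmetric in x and y."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ols_slope(x, y):
    """Slope of y ~ x by ordinary least squares -- NOT symmetric."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# Invented example values for six teens: nightly sleep vs. daily social media.
sleep_hours = [9.5, 9.0, 8.5, 8.0, 7.5, 7.0]
social_hours = [0.5, 1.0, 1.5, 2.5, 3.0, 4.0]
```

    Here shorter sleep goes with heavier social media use, so both slopes are negative, but their magnitudes differ; the product of the two slopes equals the squared correlation.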
    Kiss noted that these results provide new insights into how two significant aspects of modern adolescent life — social media usage and sleep duration — interact and impact brain development.
    “Understanding the specific brain regions involved in these interactions helps us identify potential risks and benefits associated with digital engagement and sleep habits,” Kiss said. “This knowledge is especially important as it could guide the development of more precise, evidence-based interventions aimed at promoting healthier habits.”
    The American Academy of Sleep Medicine recommends that teenagers 13 to 18 years of age should sleep 8 to 10 hours on a regular basis. The AASM also encourages adolescents to disconnect from all electronic devices at least 30 minutes to an hour before bedtime.
    This study was supported by grants from the National Institutes of Health. The research abstract was published recently in an online supplement of the journal Sleep and will be presented Sunday, June 2, and Wednesday, June 5, during SLEEP 2024 in Houston. SLEEP is the annual meeting of the Associated Professional Sleep Societies, a joint venture of the American Academy of Sleep Medicine and the Sleep Research Society.


    The AI paradox: Building creativity to protect against AI

    Cultivating creativity in schools is vital for a future driven by artificial intelligence (AI). But while teachers embrace creativity as an essential 21st century skill, a lack of valid and reliable creativity tests means schools struggle to assess student achievement.
    Now, a new machine-learning model developed by the University of South Australia is providing teachers with access to high-quality, fit-for-purpose creativity tests that can score assessments in a fraction of the time and at a fraction of the cost.
    Applied to the current empirical creativity test, the Test of Creative Thinking – Drawing Production (TCT-DP), the new algorithm marks a test in a single millisecond, compared with the roughly 15 minutes a human marker needs.
    The development could save teachers thousands of hours in an already overloaded schedule.
    Lead researcher, UniSA’s Prof David Cropley, says the algorithm is a game-changing innovation for schools.
    “Creativity is an essential skill for the next generation, particularly because it is a skill that cannot be automated,” Prof Cropley says.
    “But because there is a lack of affordable and efficient tools to measure creativity in schools, students are either not being tested, or are being graded subjectively, which is inconsistent and unreliable.

    “The TCT-DP test has long been acknowledged as the premier tool to assess creativity in school aged children, but as it is expensive, slow, and labour-intensive, it’s out of reach for most schools.
    “Our algorithm changes this. Not only is the cost of running the algorithm reduced by a factor of more than 20, but the results are fast and incredibly accurate.
    “For example, a manually scored test for a school with 1000 students would cost approximately $25,000 and require about 10 weeks to return results; with UniSA’s algorithm, the same testing could be conducted for approximately $1000, with results delivered in 1-2 days.
    “This puts the test within direct reach of schools and teachers, giving them the means to assess creativity accurately and cheaply.”
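    The quoted figures are straightforward to sanity-check. The per-test marking time and the two total costs below come from the article; the implied $25-per-test manual rate is an inference, not a stated figure.

```python
students = 1000
minutes_per_test = 15            # typical human marking time per TCT-DP test
manual_cost = 25_000             # quoted cost for manual scoring, USD (~$25 per test)
algo_cost = 1_000                # quoted cost with the algorithm, USD

manual_marking_hours = students * minutes_per_test / 60   # person-hours of marking alone
cost_reduction = manual_cost / algo_cost                  # factor quoted as "more than 20"
algo_marking_seconds = students * 0.001                   # at 1 ms per test
```

    Marking alone comes to 250 person-hours, consistent with a roughly 10-week turnaround once logistics are added, while the cost drops by a factor of 25.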
    Co-researcher, UniSA’s Dr Rebecca Marrone says the capacity to test and measure creativity has additional benefits for students who are sometimes overlooked.
    “Testing for creativity opens up an avenue beyond more traditional intelligence testing,” Dr Marrone says.
    “Testing for creativity helps identify students who may have abilities that do not show up on traditional approaches to testing in school. For example, a child who does poorly on traditional IQ tests, but is highly creative, could easily slip through the cracks.
    “Developing creativity also protects children on the lower end of the achievement spectrum by training them in a skill that is not vulnerable to automation, which can help buffer them against the effects of digital transformation.”
    The algorithm is currently being developed as a desktop app for teachers to use in the classroom. Ahead of this, classroom teachers interested in using the TCT-DP are invited to contact the UniSA team to discuss their needs.


    High groundwater depletion risk in South Korea in 2080s

    Groundwater is the water found beneath the Earth’s surface. It forms when precipitation such as rain and snow seeps into the soil, replenishing rivers and lakes, and it supplies much of our drinking water. However, a recent study has alarmed the scientific community by predicting that approximately three million people in currently untapped areas of Korea could face groundwater depletion by 2080.
    A research team, led by Professor Jonghun Kam from the Division of Environmental Science and Engineering and Dr. Chang-Kyun Park from the Institute of Environmental and Energy Technology (currently working for LG Energy Solution) at Pohang University of Science and Technology (POSTECH), used an advanced statistical method to analyze surface and deep groundwater level data from 2009 to 2020, revealing critical spatiotemporal patterns in groundwater levels. Their findings were published in the international journal “Science of the Total Environment.”
    Groundwater is crucial for ecosystems and socioeconomic development, particularly in mountainous regions where water systems are limited. However, recent social and economic activities along with urban development have led to significant groundwater overuse. Additionally, rising land temperatures are altering regional water flows and supplies, necessitating water policies that consider both natural and human impacts to effectively address climate change.
    In a recent study, researchers used an advanced statistical method called “cyclostationary empirical orthogonal function analysis (CSEOF)” to analyze water level data from nearly 200 surface and deep groundwater stations in the southern Korean Peninsula from 2009 to 2020. This analysis helped them identify important spatiotemporal patterns in groundwater levels.
    The first and second principal components revealed that water level patterns mirrored recurring seasonal changes and droughts. Shallow groundwater is more sensitive to the seasonality of precipitation than to drought occurrence, while deep groundwater is more sensitive to drought occurrence than to seasonality. This indicates that both shallow and deep groundwater are crucial for meeting community water needs and mitigating drought effects.
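    The decomposition behind these findings can be illustrated with ordinary EOF analysis, the simpler cousin of the CSEOF method the team used. The sketch below extracts the leading spatial pattern from a synthetic stations-by-time matrix via power iteration; the station amplitudes and seasonal signal are invented for illustration.

```python
import math
import random

def leading_eof(data, iters=300):
    """Leading spatial pattern (EOF) of a stations x time matrix,
    found by power iteration on the spatial covariance matrix."""
    n, T = len(data), len(data[0])
    # remove each station's time mean to get anomalies
    anom = [[x - sum(row) / T for x in row] for row in data]
    cov = [[sum(anom[i][t] * anom[j][t] for t in range(T)) / T
            for j in range(n)] for i in range(n)]
    v = [1.0] * n                      # deterministic start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Synthetic records: five stations sharing one seasonal cycle with
# different amplitudes, plus a little noise (assumed data).
random.seed(42)
amps = [1.0, 2.0, 3.0, 4.0, 5.0]
months = 144                           # twelve years of monthly levels
data = [[a * math.sin(2 * math.pi * t / 12) + 0.01 * random.gauss(0, 1)
         for t in range(months)] for a in amps]
pattern = leading_eof(data)
```

    Because the seasonal cycle dominates, the recovered pattern’s loadings track each station’s amplitude, which is how the study’s principal components separate seasonality from drought and long-term decline.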
    The third principal component highlighted a decline in groundwater levels in the western Korean Peninsula since 2009. The researchers projected that if this decline in deep groundwater continues, at least three million people in untapped or newly developed areas, primarily in the southwestern part of the peninsula, could face unprecedentedly low groundwater levels as a new normal (defined as groundwater depletion) by 2080. If the research team’s predictions are correct, the impact would be particularly severe in drought-prone, untapped areas where groundwater is heavily relied upon.
    Professor Jonghun Kam of POSTECH stated, “By leveraging long-term, multi-layer groundwater level data on Korea and advanced statistical techniques, we successfully analyzed the changing patterns of deep- and shallow-level groundwater levels and predicted the risk of groundwater depletion.” He added, “An integrated national development plan is essential, one that considers not only regional development plans but also balanced water resource management plans.”


    The thinnest lens on Earth, enabled by excitons

    Lenses are used to bend and focus light. Normal lenses rely on their curved shape to achieve this effect, but physicists from the University of Amsterdam and Stanford University have made a flat lens, only three atoms thick, which relies on quantum effects. This type of lens could be used in future augmented reality glasses.
    When you imagine a lens, you probably picture a piece of curved glass. This type of lens works because light is refracted (bent) when it enters the glass, and again when it exits, allowing us to make things appear larger or closer than they actually are. We have used curved lenses for more than two millennia, allowing us to study the movements of distant planets and stars, to reveal tiny microorganisms, and to improve our vision.
    Ludovico Guarneri, Thomas Bauer, and Jorik van de Groep of the University of Amsterdam, together with colleagues from Stanford University in California, took a different approach. Using a single layer of a unique material called tungsten disulphide (WS2 for short), they constructed a flat lens that is half a millimetre wide, but just 0.0000006 millimetres, or 0.6 nanometres, thick. This makes it the thinnest lens on Earth!
    Rather than relying on a curved shape, the lens is made of concentric rings of WS2 with gaps in between. This is called a ‘Fresnel lens’ or ‘zone plate lens’, and it focuses light using diffraction rather than refraction. The size of, and distance between, the rings (compared to the wavelength of the light hitting it) determine the lens’s focal length. The design used here focuses red light 1 mm from the lens.
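    The link between ring geometry and focal length can be checked with the standard zone plate formula. The 1 mm focal length and 0.5 mm lens width come from the article; the 650 nm wavelength for red light is an assumption.

```python
import math

wavelength = 650e-9   # assumed wavelength for red light, m (not stated in the article)
focal_len = 1e-3      # focal length quoted above: 1 mm

def zone_radius(n):
    """Outer radius of the n-th Fresnel zone of a zone plate:
    r_n = sqrt(n * wavelength * f + (n * wavelength / 2) ** 2)."""
    return math.sqrt(n * wavelength * focal_len + (n * wavelength / 2) ** 2)

r1_micron = zone_radius(1) * 1e6       # innermost zone radius in micrometres

# How many zones fit inside the quoted 0.5 mm lens (0.25 mm radius)?
n_max = 0
while zone_radius(n_max + 1) <= 0.25e-3:
    n_max += 1
```

    Under these assumptions the innermost ring has a radius of about 25 micrometres and on the order of a hundred zones fit within the half-millimetre lens, consistent with the dimensions quoted above.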
    Quantum enhancement
    A unique feature of this lens is that its focussing efficiency relies on quantum effects within WS2. These effects allow the material to efficiently absorb and re-emit light at specific wavelengths, giving the lens the built-in ability to work better for these wavelengths.
    This quantum enhancement works as follows. First, WS2 absorbs light by sending an electron to a higher energy level. Due to the ultra-thin structure of the material, the negatively charged electron and the positively charged ‘hole’ it leaves behind in the atomic lattice stay bound together by the electrostatic attraction between them, forming what is known as an ‘exciton’. These excitons quickly disappear again by the electron and hole merging together and sending out light. This re-emitted light contributes to the lens’s efficiency.

    The scientists detected a clear peak in lens efficiency for the specific wavelengths of light sent out by the excitons. While the effect is already observed at room temperature, the lenses are even more efficient when cooled down. This is because excitons do their work better at lower temperatures.
    Augmented reality
    Another one of the lens’s unique features is that, while some of the light passing through it makes a bright focal point, most light passes through unaffected. While this may sound like a disadvantage, it actually opens new doors for use in technology of the future. “The lens can be used in applications where the view through the lens should not be disturbed, but a small part of the light can be tapped to collect information. This makes it perfect for wearable glasses such as for augmented reality,” explains Jorik van de Groep, one of the authors of the paper.
    The researchers are now setting their sights on designing and testing more complex and multifunctional optical coatings whose function (such as focussing light) can be adjusted electrically. “Excitons are very sensitive to the charge density in the material, and therefore we can change the refractive index of the material by applying a voltage,” says Van de Groep. The future of excitonic materials is bright!


    Generative AI to protect image privacy

    Image privacy could be protected with the use of generative artificial intelligence. Researchers from Japan, China and Finland created a system, named “generative content replacement,” which replaces parts of images that might threaten confidentiality with visually similar but AI-generated alternatives. In tests, 60% of viewers couldn’t tell which images had been altered. The researchers intend for this system to provide a more visually cohesive option for image censoring, which helps to preserve the narrative of the image while protecting privacy. This research was presented at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems, held in Honolulu, Hawaii, in the U.S., in May 2024.
    With just a few text prompts, generative AI can offer a quick fix for a tricky school essay, a new business strategy or endless meme fodder. The advent of generative AI into daily life has been swift, and the potential scale of its role and influence are still being grappled with. Fears over its impact on future job security, online safety and creative originality have led to strikes from Hollywood writers, court cases over faked photos and heated discussions about authenticity.
    However, a team of researchers has proposed using a sometimes controversial feature of generative AI — its ability to manipulate images — as a way to solve privacy issues.
    “We found that the existing image privacy protection techniques are not necessarily able to hide information while maintaining image aesthetics. Resulting images can sometimes appear unnatural or jarring. We considered this a demotivating factor for people who might otherwise consider applying privacy protection,” explained Associate Professor Koji Yatani from the Graduate School of Engineering at the University of Tokyo. “So, we decided to explore how we can achieve both — that is, robust privacy protection and image useability — at the same time by incorporating the latest generative AI technology.”
    The researchers created a computer system which they named generative content replacement (GCR). This tool identifies what might constitute a privacy threat and automatically replaces it with a realistic but artificially created substitute. For example, personal information on a ticket stub could be replaced with illegible letters, or a private building exchanged for a fake building or other landscape features.
    “There are a number of commonly used image protection methods, such as blurring, color filling or just removing the affected part of the image. Compared to these, our results show that generative content replacement can better maintain the story of the original images and achieve higher visual harmony,” said Yatani. “We found that participants couldn’t detect GCR in 60% of images.”
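    The idea of category-preserving replacement, as opposed to blurring or removal, can be sketched abstractly. Everything below is hypothetical: the real GCR system detects sensitive image regions and synthesizes replacements with generative models, whereas this toy merely swaps labelled scene elements for same-category stand-ins.

```python
# Hypothetical categories of privacy-sensitive content and plausible
# same-category stand-ins (placeholders for generated imagery).
REPLACEMENTS = {
    "face": "synthetic-face",
    "license_plate": "synthetic-plate",
    "ticket_text": "illegible-text",
}

def generative_content_replacement(scene, sensitive_types):
    """Swap each sensitive element for a realistic same-category
    substitute while leaving the rest of the scene untouched."""
    return [(kind,
             REPLACEMENTS.get(kind, "generic-fill") if kind in sensitive_types else content)
            for kind, content in scene]

scene = [("sky", "blue"), ("face", "alice"), ("ticket_text", "seat 12A")]
safe = generative_content_replacement(scene, {"face", "ticket_text"})
```

    The scene keeps its overall composition, which is the property that makes GCR harder to detect than blurring or blanking the same regions.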
    For now, the GCR system requires a lot of computation resources, so it won’t be available on any personal devices just yet. The tested system was fully automatic, but the team has since developed a new interface to allow users to customize images, giving more control over the final outcome.
    Although some may be concerned about the risks of this type of realistic image alteration, where the lines between original and altered imagery become more ambiguous, the team is positive about its advantages. “For public users, we believe that the greatest benefit of this research is providing a new option for image privacy protection,” said Yatani. “GCR offers a novel method for protecting against privacy threats, while maintaining visual coherence for storytelling purposes and enabling people to more safely share their content.”


    Researchers apply quantum computing methods to protein structure prediction

    Researchers from Cleveland Clinic and IBM recently published findings in the Journal of Chemical Theory and Computation that could lay the groundwork for applying quantum computing methods to protein structure prediction. This publication is the first peer-reviewed quantum computing paper from the Cleveland Clinic-IBM Discovery Accelerator partnership.
    For decades, researchers have leveraged computational approaches to predict protein structures. A protein folds itself into a structure that determines how it functions and binds to other molecules in the body. These structures determine many aspects of human health and disease.
    By accurately predicting the structure of a protein, researchers can better understand how diseases spread and thus how to develop effective therapies. Cleveland Clinic postdoctoral fellow Bryan Raubenolt, Ph.D., and IBM researcher Hakan Doga, Ph.D., spearheaded a team to discover how quantum computing can improve current methods.
    In recent years, machine learning techniques have made significant progress in protein structure prediction. These methods are reliant on training data (a database of experimentally determined protein structures) to make predictions. This means that they are constrained by how many proteins they have been taught to recognize. This can lead to lower levels of accuracy when the programs/algorithms encounter a protein that is mutated or very different from those on which they were trained, which is common with genetic disorders.
    The alternative method is to simulate the physics of protein folding. Simulations allow researchers to look at a given protein’s various possible shapes and find the most stable one. The most stable shape is critical for drug design.
    The challenge is that these simulations are nearly impossible on a classical computer, beyond a certain protein size. In a way, increasing the size of the target protein is comparable to increasing the dimensions of a Rubik’s cube. For a small protein with 100 amino acids, a classical computer would need the time equal to the age of the universe to exhaustively search all the possible outcomes, says Dr. Raubenolt.
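    The combinatorial explosion Dr. Raubenolt describes is easy to demonstrate with a toy lattice model. Counting self-avoiding walks on a 2D square lattice, a crude stand-in for backbone conformations, shows the count multiplying by roughly 2.7 per added residue; extrapolated to a 100-residue chain, that is well over 10^40 shapes, far beyond exhaustive classical search.

```python
def count_conformations(n):
    """Number of self-avoiding walks of n steps on a 2D square lattice,
    a toy model of the backbone shapes of an n-residue chain."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in moves:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:            # chain cannot cross itself
                total += extend(nxt, visited | {nxt}, remaining - 1)
        return total

    return extend((0, 0), {(0, 0)}, n)

counts = [count_conformations(n) for n in range(1, 9)]  # 4, 12, 36, 100, ...
```

    Real proteins live in three dimensions with continuous angles, so the true search space grows even faster, which is what motivates handing the hardest part of the search to a quantum algorithm.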
    To help overcome these limitations, the research team applied a mix of quantum and classical computing methods. This framework could allow quantum algorithms to address the areas that are challenging for state-of-the-art classical computing, including protein size, intrinsic disorder, mutations and the physics involved in protein folding. The framework was validated by accurately predicting the folding of a small fragment of a Zika virus protein on a quantum computer, compared to state-of-the-art classical methods.

    The quantum-classical hybrid framework’s initial results outperformed both a classical physics-based method and AlphaFold2. Although AlphaFold2 is designed to work best with larger proteins, the comparison nonetheless demonstrates this framework’s ability to create accurate models without directly relying on substantial training data.
    The researchers used a quantum algorithm to first model the lowest energy conformation for the fragment’s backbone, which is typically the most computationally demanding step of the calculation. Classical approaches were then used to convert the results obtained from the quantum computer, reconstruct the protein with its sidechains, and perform final refinement of the structure with classical molecular mechanics force fields. The project shows one of the ways that problems can be deconstructed into parts, with quantum computing methods addressing some parts and classical computing others, for increased accuracy.
    “One of the most unique things about this project is the number of disciplines involved,” says Dr. Raubenolt. “Our team’s expertise ranges from computational biology and chemistry, structural biology, software and automation engineering, to experimental atomic and nuclear physics, mathematics, and of course quantum computing and algorithm design. It took the knowledge from each of these areas to create a computational framework that can mimic one of the most important processes for human life.”
    The team’s combination of classical and quantum computing methods is an essential step for advancing our understanding of protein structures, and how they impact our ability to treat and prevent disease. The team plans to continue developing and optimizing quantum algorithms that can predict the structure of larger and more sophisticated proteins.
    “This work is an important step forward in exploring where quantum computing capabilities could show strengths in protein structure prediction,” says Dr. Doga. “Our goal is to design quantum algorithms that can predict protein structures as realistically as possible.”