More stories

  • New internet addiction spectrum: Where are you on the scale?

    Young people (24 years and younger) spend an average of six hours a day online, primarily using their smartphones, according to research from the University of Surrey. Older people (those over 24) spend 4.6 hours a day online.
    Surrey’s study, which involved 796 participants, introduces a new internet addiction spectrum, categorising internet users into five groups:
    Casual Users (14.86%): This group mainly goes online for specific tasks and logs off without lingering. They show no signs of addiction and are generally older, with an average age of 33.4 years. They are the least interested in exploring new apps.
    Initial Users (22.86%): These individuals often find themselves online longer than they initially planned and are somewhat neglectful of household chores, but don’t consider themselves addicted. They are moderately interested in apps and have an average age of 26.1 years.
    Experimenters (21.98%): This group feels uneasy or anxious when not connected to the internet. Once they go online, they feel better. Experimenters are more willing to try out new apps and technology, and their average age is between 22.8 and 24.3 years.
    Addicts-in-Denial (17.96%): These users display addictive behaviours like forming new relationships online and neglecting real-world responsibilities to be online. However, they won’t admit to feeling uneasy when they’re not connected. They are also quite confident in using mobile technology.
    Addicts (22.36%): This group openly acknowledges their internet addiction and recognises its negative impact on their lives. They are the most confident in using new apps and technology. Their time online is significantly greater than that of the Casual Users.

  • Engineering study employs deep learning to explain extreme events

    Identifying the underlying cause of extreme events such as floods, heavy downpours or tornadoes is immensely difficult, and it can take scientists decades of concerted effort to arrive at feasible physical explanations.
    Extreme events cause significant deviation from expected behavior and can dictate the overall outcome for a number of scientific problems and practical situations. For example, practical scenarios where a fundamental understanding of extreme events can be of vital importance include rogue waves in the ocean that could endanger ships and offshore structures or increasingly frequent “1,000-year rains,” such as the life-threatening deluge in April that deposited 20 inches of rainfall within a seven-hour period in the Fort Lauderdale area.
    At the core of uncovering such extreme events is the physics of fluids — specifically turbulent flows, which exhibit a wide range of interesting behavior in time and space. In fluid dynamics, a turbulent flow is an irregular flow in which eddies, swirls and flow instabilities occur. Because of their random, irregular nature, turbulent flows are notoriously difficult to understand or to describe with equations.
    Researchers from Florida Atlantic University’s College of Engineering and Computer Science leveraged a computer-vision deep learning technique and adapted it for nonlinear analysis of extreme events in wall-bounded turbulent flows, which are pervasive in numerous physics and engineering applications and affect wind and hydrokinetic energy, among other areas.
    The study focused on recognizing and regulating organized structures within wall-bounded turbulent flows, using a variety of machine learning techniques to overcome the nonlinear nature of the phenomenon.
    Results, published in the journal Physical Review Fluids, demonstrate that the technique the researchers employed can be invaluable for accurately identifying the sources of extreme events in a completely data-driven manner. The framework they formulated is sufficiently general to be extendable to other scientific domains, where the underlying spatial dynamics governing the evolution of critical phenomena may not be known beforehand.
    Using a convolutional neural network (CNN), a neural network architecture that specializes in uncovering spatial relationships, the researchers trained a network to estimate the relative intensity of ejection structures within turbulent flow simulations without any a priori knowledge of the underlying flow dynamics.
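    As a rough illustration of the kind of model involved (this is not the authors’ architecture; the layer sizes, input shape and variable names below are assumptions made for the sketch), a small convolutional network can be set up to regress a single intensity value from a 2D snapshot of a flow field:

```python
# Minimal sketch: a CNN that maps a 2D flow-field snapshot (e.g. one plane of
# a velocity component) to a single scalar, standing in for the relative
# intensity of ejection structures. Shapes, sizes and labels are illustrative.
import torch
import torch.nn as nn

class EjectionIntensityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel: one velocity component
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # collapse spatial dimensions
        )
        self.head = nn.Linear(32, 1)                     # scalar intensity estimate

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

# Training-loop sketch on synthetic data standing in for simulation snapshots.
model = EjectionIntensityCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

snapshots = torch.randn(64, 1, 64, 64)   # batch of 64x64 flow-field snapshots (placeholder data)
intensity = torch.randn(64, 1)           # target intensities (placeholder labels)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(snapshots), intensity)
    loss.backward()
    optimizer.step()
```
    With real data, the snapshots and intensity labels would of course come from the wall-bounded turbulence simulations described above rather than random tensors.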

  • Is AI in the eye of the beholder?

    Someone’s prior beliefs about an artificial intelligence agent, like a chatbot, have a significant effect on their interactions with that agent and their perception of its trustworthiness, empathy, and effectiveness, according to a new study.
    Researchers from MIT and Arizona State University found that priming users — by telling them that a conversational AI agent for mental health support was either empathetic, neutral, or manipulative — influenced their perception of the chatbot and shaped how they communicated with it, even though they were speaking to the exact same chatbot.
    Most users who were told the AI agent was caring believed that it was, and they also gave it higher performance ratings than those who believed it was manipulative. At the same time, less than half of the users who were told the agent had manipulative motives thought the chatbot was actually malicious, indicating that people may try to “see the good” in AI the same way they do in their fellow humans.
    The study revealed a feedback loop between users’ mental models, or their perception of an AI agent, and that agent’s responses. The sentiment of user-AI conversations became more positive over time if the user believed the AI was empathetic, while the opposite was true for users who thought it was nefarious.
    “From this study, we see that to some extent, the AI is the AI of the beholder,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group of the MIT Media Lab and co-lead author of a paper describing this study. “When we describe to users what an AI agent is, it does not just change their mental model, it also changes their behavior. And since the AI responds to the user, when the person changes their behavior, that changes the AI, as well.”
    Pataranutaporn is joined by co-lead author and fellow MIT graduate student Ruby Liu; Ed Finn, associate professor in the Center for Science and Imagination at Arizona State University; and senior author Pattie Maes, professor of media technology and head of the Fluid Interfaces group at MIT.
    The study, published in Nature Machine Intelligence, highlights the importance of studying how AI is presented to society, since the media and popular culture strongly influence our mental models. The authors also raise a cautionary flag, since the same types of priming statements in this study could be used to deceive people about an AI’s motives or capabilities.

  • A more effective experimental design for engineering a cell into a new state

    A strategy for cellular reprogramming involves using targeted genetic interventions to engineer a cell into a new state. The technique holds great promise in immunotherapy, for instance, where researchers could reprogram a patient’s T-cells so they are more potent cancer killers. Someday, the approach could also help identify life-saving cancer treatments or regenerative therapies that repair disease-ravaged organs.
    But the human body has about 20,000 genes, and a genetic perturbation could target a combination of genes or any of the more than 1,000 transcription factors that regulate them. Because the search space is vast and genetic experiments are costly, scientists often struggle to find the ideal perturbation for their particular application.
    Researchers from MIT and Harvard University developed a new computational approach that can efficiently identify optimal genetic perturbations based on a much smaller number of experiments than traditional methods.
    Their algorithmic technique leverages the cause-and-effect relationship between factors in a complex system, such as genome regulation, to prioritize the best intervention in each round of sequential experiments.
    The researchers conducted a rigorous theoretical analysis to show that their technique did, indeed, identify optimal interventions. With that theoretical framework in place, they applied the algorithms to real biological data designed to mimic a cellular reprogramming experiment, where their approach proved more efficient and effective than traditional methods (a toy sketch of this kind of sequential selection loop appears below).
    “Too often, large-scale experiments are designed empirically. A careful causal framework for sequential experimentation may allow identifying optimal interventions with fewer trials, thereby reducing experimental costs,” says co-senior author Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS) who is also co-director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS) and Institute for Data, Systems and Society (IDSS).
    Joining Uhler on the paper, which appears today in Nature Machine Intelligence, are lead author Jiaqi Zhang, a graduate student and Eric and Wendy Schmidt Center Fellow; co-senior author Themistoklis P. Sapsis, professor of mechanical and ocean engineering at MIT and a member of IDSS; and others at Harvard and MIT.
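    To make the flavor of such sequential experiment selection concrete (an illustrative toy only, not the authors’ algorithm: the causal modeling at the heart of the paper is not represented here, and the candidate pool, acquisition rule and all variable names are assumptions), one select-observe-update loop might look like this:

```python
# Toy sketch of sequential experiment selection for cellular reprogramming:
# each round, pick the candidate perturbation whose estimated outcome is
# closest to a desired target state, run a (simulated) noisy experiment, and
# update the estimate. Purely illustrative; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

n_candidates = 200          # candidate perturbations (e.g. transcription factors)
n_genes = 50                # dimensionality of the measured expression state
n_rounds = 20               # experimental budget

# Unknown ground-truth effect of each perturbation on the expression state.
true_effects = rng.normal(size=(n_candidates, n_genes))
target_state = rng.normal(size=n_genes)          # desired cell state

# Running estimates of each perturbation's effect, and how often it was tried.
est_effects = np.zeros((n_candidates, n_genes))
counts = np.zeros(n_candidates)

for t in range(n_rounds):
    # Acquisition rule: distance of the estimated outcome from the target,
    # with an exploration bonus for perturbations tried few times.
    dist = np.linalg.norm(est_effects - target_state, axis=1)
    bonus = 1.0 / np.sqrt(counts + 1)
    choice = np.argmin(dist - bonus)

    # "Run" the experiment: observe the true effect plus measurement noise.
    outcome = true_effects[choice] + 0.1 * rng.normal(size=n_genes)

    # Update the running-mean estimate for the chosen perturbation.
    counts[choice] += 1
    est_effects[choice] += (outcome - est_effects[choice]) / counts[choice]

best = np.argmin(np.linalg.norm(est_effects - target_state, axis=1))
print(f"best candidate after {n_rounds} rounds: {best}")
```
    In the paper’s setting, the choice of which perturbation to try next is instead guided by the estimated cause-and-effect structure of genome regulation, which is what allows good interventions to be found with far fewer experiments.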

  • Researchers propose a unified, scalable framework to measure agricultural greenhouse gas emissions

    Increased government investment in climate change mitigation is prompting agricultural sectors to find reliable methods for measuring their contribution to climate change. With that in mind, a team led by scientists at the University of Illinois Urbana-Champaign proposed a supercomputing solution to help measure individual farm field-level greenhouse gas emissions.
    Although locally tested in the Midwest, the new approach can be scaled up to national and global levels and help the industry identify the best practices for reducing emissions.
    The new study, directed by natural resources and environmental sciences professor Kaiyu Guan, synthesized more than 25 of the group’s previous studies to quantify greenhouse gas emissions produced by U.S. farmland. The findings — completed in collaboration with partners from the University of Minnesota, Lawrence Berkeley National Laboratory and Project Drawdown, a climate solutions nonprofit organization — are published in the journal Earth Science Reviews.
    “There are many farming practices that can go a long way to reduce greenhouse gas emissions, but the scientific community has struggled to find a consistent method for measuring how well these practices work,” Guan said.
    Guan’s team built a solution based on “agricultural carbon outcomes,” which it defines as the changes in greenhouse gas emissions that result from farmers adopting climate mitigation practices such as cover cropping, precision nitrogen fertilizer management and controlled drainage techniques.
    “We developed what we call a ‘system of systems’ solution, which means we integrated a variety of sensing techniques and combined them with advanced ecosystem models,” said Bin Peng, co-author of the study and a senior research scientist at the U. of I. Institute for Sustainability, Energy and Environment. “For example, we fuse ground-based imaging with satellite imagery and process that data with algorithms to generate information about crop emissions before and after farmers adopt various mitigation practices.”
    “Artificial intelligence also plays a critical role in realizing our ambitious goals to quantify every field’s carbon emission,” said Zhenong Jin, a professor at the University of Minnesota who co-led the study. “Unlike traditional model-data fusion approaches, we used knowledge-guided machine learning, which is a new way to bring together the power of sensing data, domain knowledge and artificial intelligence techniques.”
    The study also details how emissions and agricultural practices data can be cross-checked against economic, policy and carbon market data to find realistic, best-practice greenhouse gas mitigation solutions at local to global scales — especially in economies struggling to farm in an environmentally conscious manner.
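    In the simplest terms, a field’s “agricultural carbon outcome” for a given practice is the difference between its modeled emissions with and without that practice. The sketch below is a toy illustration of that bookkeeping only; the field records, the placeholder emission model and its reduction factors are all assumptions, not part of the team’s “system of systems”:

```python
# Toy illustration of "agricultural carbon outcomes": for each field, compare
# modeled greenhouse gas emissions under business-as-usual management with
# emissions after adopting mitigation practices. The emission model here is a
# placeholder with assumed reduction factors, not the team's framework.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    baseline_n2o: float       # modeled N2O emissions, t CO2e per year (placeholder)
    baseline_soil_co2: float  # modeled soil CO2 flux, t CO2e per year (placeholder)

def modeled_emissions(field: Field, cover_crop: bool, precision_n: bool) -> float:
    """Placeholder emission model: apply illustrative reduction factors."""
    n2o = field.baseline_n2o * (0.85 if precision_n else 1.0)      # assumed 15% cut
    soil = field.baseline_soil_co2 * (0.9 if cover_crop else 1.0)  # assumed 10% cut
    return n2o + soil

def carbon_outcome(field: Field, **practices: bool) -> float:
    """Change in emissions from adopting the given practices (negative = reduction)."""
    before = modeled_emissions(field, cover_crop=False, precision_n=False)
    after = modeled_emissions(field, **practices)
    return after - before

fields = [Field("field_A", baseline_n2o=2.1, baseline_soil_co2=5.0),
          Field("field_B", baseline_n2o=1.4, baseline_soil_co2=3.2)]

for f in fields:
    outcome = carbon_outcome(f, cover_crop=True, precision_n=True)
    print(f"{f.name}: carbon outcome {outcome:+.2f} t CO2e/yr")
```
    In the actual framework, the per-field emission estimates come from ecosystem models constrained by ground-based and satellite sensing rather than fixed factors.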

  • Groundbreaking mathematical proof: New insights into typhoon dynamics unveiled

    In a remarkable breakthrough in the mathematical sciences, Professor Kyudong Choi from the Department of Mathematical Sciences at UNIST has provided a rigorous proof that certain spherical vortices exist in a stable state. The discovery holds significant implications for predicting weather anomalies and advancing weather prediction technologies.
    A vortex is a region of fluid, such as air or water, characterized by intense rotation; typhoons and tornadoes are familiar examples frequently seen in news reports. Professor Choi’s mathematical proof establishes the stability of specific types of vortex structures encountered in real-world fluid flows.
    The study builds upon the foundational Euler equations, formulated by Leonhard Euler in 1757 to describe the flow of an ideal fluid. In 1894, the British mathematician M. J. M. Hill demonstrated mathematically that a ball-shaped vortex could maintain its shape indefinitely while moving along its axis.
    Using variational methods, Professor Choi’s research confirms that Hill’s spherical vortex maximizes kinetic energy under certain conditions. By incorporating functional analysis and the theory of partial differential equations, the study extends previous investigations of two-dimensional fluid flows to three-dimensional fluid dynamics under axial symmetry.
    One notable feature identified by Hill is the presence of strong upward airflow at the front of the spherical vortex — an attribute often observed in phenomena like typhoons and tornadoes. Professor Choi’s findings serve as a starting point for further studies involving measurements related to residual time associated with these ascending air currents.
    “Research on vortex stability has gained international attention,” stated Professor Choi, “and it holds long-term potential for advancements in today’s weather forecasting technology.”
    Supported by funding from the Korea Research Foundation under the Ministry of Science and ICT, as well as UNIST, this study was published ahead of official release on July 24th in the online edition of Communications on Pure and Applied Mathematics.
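    For context, Hill’s spherical vortex has a classical closed form (the standard textbook expression for the 1894 solution, quoted here for orientation rather than taken from Professor Choi’s paper). Writing σ for the distance from the symmetry axis, z for the coordinate along the axis, a for the vortex radius and U for its translation speed, the Stokes stream function in the frame moving with the vortex is:

```latex
% Classical Hill's spherical vortex (standard textbook form, not from the paper).
% Co-moving frame: far from the vortex the fluid streams past at speed U along -z.
\[
\psi(\sigma, z) =
\begin{cases}
\dfrac{3U}{4a^{2}}\,\sigma^{2}\bigl(a^{2} - \sigma^{2} - z^{2}\bigr), & \sigma^{2} + z^{2} \le a^{2},\\[1.5ex]
-\dfrac{U}{2}\,\sigma^{2}\left(1 - \dfrac{a^{3}}{(\sigma^{2} + z^{2})^{3/2}}\right), & \sigma^{2} + z^{2} > a^{2},
\end{cases}
\qquad
\omega_{\theta} = \frac{15U}{2a^{2}}\,\sigma \quad \text{inside the sphere.}
\]
```
    Both branches vanish on the sphere σ² + z² = a², so the spherical boundary is a stream surface in this frame; this is the sense in which the vortex keeps its shape while translating.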

  • Can ChatGPT help us form personal narratives?

    Research has shown that personal narratives — the stories we tell ourselves about our lives — can play a critical role in identity and help us make sense of the past and present. Research has also shown that by helping people reinterpret narratives, therapists can guide patients toward healthier thoughts and behaviors.
    Now, researchers from the Positive Psychology Center at the University of Pennsylvania have tested the ability of ChatGPT-4 to generate individualized personal narratives based on stream-of-consciousness thoughts and demographic details from participants, and showed that people found the language model’s responses accurate.
    In a new study in The Journal of Positive Psychology, Abigail Blyler and Martin Seligman found that 25 of the 26 participants rated the AI-generated responses as completely or mostly accurate, 19 rated the narratives as very or somewhat surprising, and 19 indicated they learned something new about themselves. Seligman, the Zellerbach Family Professor of Psychology, is the director of the Positive Psychology Center, and Blyler is his research manager.
    “This is a rare moment in the history of scientific psychology: Artificial intelligence now promises much more effective psychotherapy and coaching,” Seligman says.
    For each participant, the researchers fed ChatGPT-4 recorded stream-of-consciousness thoughts, which Blyler likened to diary entries with thoughts as simple as “I’m hungry” or “I’m tired.” In a second study published concurrently in The Journal of Positive Psychology, they fed five narratives rated “completely accurate” into ChatGPT-4, asked for specific interventions, and found that the chatbot generated highly plausible coaching strategies and interventions.
    “Since coaching and therapy typically involve a great deal of initial time spent fleshing out such an identity, deriving this automatically from 50 thoughts represents a major savings,” the authors write.

  • Making elbow room: Giant molecular rotors operate in solid crystal

    Solid materials are generally known to be rigid and unmoving, but scientists are turning this idea on its head by exploring ways to incorporate moving parts into solids. This can enable the development of exotic new materials such as amphidynamic crystals — crystals which contain both rigid and mobile components — whose properties can be altered by controlling molecular rotation within the material.
    A major challenge to achieving motion in crystals — and in solids in general — is the tightly packed nature of their structure, which restricts dynamic motion to molecules of a limited size. However, a team led by Associate Professor Mingoo Jin of the Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) at Hokkaido University has set a size record for such dynamic motion, demonstrating the largest molecular rotor shown to be operational in the solid state.
    A molecular rotor consists of a central rotating molecule connected by axle molecules to stationary stator molecules, similar to the way that a wheel and axle are connected to a car frame. Such systems have been reported previously, but the crystalline material in this study features an operational rotor consisting of the molecule pentiptycene, which is nearly 40% larger in diameter than previous solid-state rotors, marking a significant advancement.
    To enable rotation of such a large molecule, it was necessary to create enough free space within the solid. The team synthesized concave, umbrella-like metal complexes that could shield the rotor molecule from unwanted interactions with other molecules in the crystal. They were able to create sufficient space to accommodate the giant rotor by attaching an especially large, bulky molecule to the metal atom of the stator.
    “I got the idea from an egg, which makes a large space and protects its inside with a circular hardcover,” said Jin. “To bring this feature to a molecule, I envisioned encapsulating the rotator space by using bulky concave shaped stators.”
    A comparison of experimental and simulated nuclear magnetic resonance spectra of the crystal suggested that the giant molecular rotor rotates in 90-degree intervals at a frequency in the range of 100-400 kHz.
    This work expands what is possible for molecular motion in the solid state. It provides a blueprint for exploring new avenues in the development of amphidynamic crystals, and could lead to the development of new functional materials with unique properties.
    “The pentiptycene rotators utilized in this work have several pocket sites,” commented Jin. “This structural feature allows the inclusion of many types of guest compounds including luminophores, which could enable development of highly functional, sophisticated optical or luminescent solid-state materials.”