More stories

  • Spotting climate misinformation with AI requires expertly trained models

    Conversational AI chatbots are making climate misinformation sound more credible and harder to distinguish from real science. In response, climate experts are using some of the same tools to detect fake information online.

    But when it comes to classifying false or misleading climate claims, general-purpose large language models, or LLMs, such as Meta’s Llama and OpenAI’s GPT-4, lag behind models trained specifically on expert-curated climate data, scientists reported in March at the AAAI Conference on Artificial Intelligence in Philadelphia. Climate groups that want to use off-the-shelf LLMs in chatbots and content moderation tools to flag climate misinformation should carefully consider which models they use and bring in relevant experts to guide the training process, the findings suggest.
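    The idea of grounding a classifier in an expert-curated label set can be sketched in miniature. The toy below is purely illustrative, not the researchers' method: the taxonomy labels and keywords are hypothetical placeholders standing in for the kind of expert-curated categories the study describes.

```python
# Toy sketch of expert-guided claim classification.
# The labels and keywords are hypothetical, for illustration only;
# a real system would use a model fine-tuned on expert-labeled claims.

TAXONOMY = {
    "denial_of_warming": {"cooling", "pause", "not warming"},
    "denial_of_human_cause": {"natural cycles", "the sun", "volcanoes"},
    "consistent_with_science": {"greenhouse", "emissions", "consensus"},
}

def classify_claim(text: str) -> str:
    """Return the taxonomy label whose keywords best match the claim."""
    lowered = text.lower()
    scores = {
        label: sum(keyword in lowered for keyword in keywords)
        for label, keywords in TAXONOMY.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to "unverified" when no expert category matches at all.
    return best if scores[best] > 0 else "unverified"

print(classify_claim("The warming pause shows the planet is cooling"))
```

    Even in this crude form, the design point survives: the categories come from domain experts, and the classifier only ever assigns labels from that curated set rather than free-associating.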

  • Generative AI is an energy hog. Is the tech worth the environmental cost?

    It might seem like magic. Type a request into ChatGPT, click a button and — presto! — here’s a five-paragraph analysis of Shakespeare’s Hamlet and, as an added bonus, it’s written in iambic pentameter. Or tell DALL-E about the chimeric animal from your dream, and out comes an image of a gecko-wolf-starfish hybrid. If you’re feeling down, call up the digital “ghost” of your deceased grandmother and receive some comfort (SN: 6/15/24, p. 10).

    Despite how it may appear, none of this materializes out of thin air. Every interaction with a chatbot or other generative AI system funnels through wires and cables to a data center — a warehouse full of server stacks that pass these prompts through the billions (and potentially trillions) of parameters that dictate how a generative model responds.
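    The scale of that computation can be sketched with a back-of-envelope estimate. Every number below is an assumption chosen for illustration, not a measured figure for any real model or data center; the only grounded piece is the common rule of thumb that transformer inference costs roughly 2 floating-point operations per parameter per generated token.

```python
# Illustrative back-of-envelope estimate of energy per chatbot reply.
# All constants are assumptions, not measurements.

PARAMS = 70e9            # assumed model size: 70 billion parameters
TOKENS_PER_REPLY = 500   # assumed length of one generated response
FLOPS_PER_JOULE = 1e12   # assumed accelerator efficiency (~1 TFLOP/s per watt)

# Rule of thumb: ~2 FLOPs per parameter per generated token.
flops_per_reply = 2 * PARAMS * TOKENS_PER_REPLY
energy_joules = flops_per_reply / FLOPS_PER_JOULE
energy_wh = energy_joules / 3600  # convert joules to watt-hours

print(f"{energy_joules:.0f} J (~{energy_wh:.3f} Wh) per reply")
```

    Under these toy assumptions a single reply lands in the tens of joules; the point of the sketch is how quickly that multiplies across billions of daily queries, before even counting the cooling and networking overhead of the data center itself.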