Spotting climate misinformation with AI requires expertly trained models
Conversational AI chatbots are making climate misinformation sound more credible, which makes falsehoods harder to distinguish from real science. In response, climate experts are using some of the same tools to detect fake information online.
But when it comes to classifying false or misleading climate claims, general-purpose large language models, or LLMs — such as Meta’s Llama and OpenAI’s GPT-4 — lag behind models specifically trained on expert-curated climate data, scientists reported in March at the AAAI Conference on Artificial Intelligence in Philadelphia. Climate groups wishing to use commonly available LLMs in chatbots and content moderation tools to check climate misinformation need to carefully consider the models they use and bring in relevant experts to guide the training process, the findings show.