Factual accuracy of AI-generated content: new donation-funded project launched

Large language models (LLMs) based on artificial intelligence (AI) – ChatGPT, for example – interact with users in the form of dialogue and provide seemingly trustworthy responses. But how accurate is the information in these responses? In its new non-client, donation-funded project, the Oeko-Institut is examining the specific risks, as well as the opportunities, created by these new AI-based large language models.

Countering false information

AI-based large language models are trained on vast amounts of text data. In the specific case of ChatGPT, the primary objective during its development was apparently to provide an answer to every question; the quality of the responses is less important. As a result, the answers may sound plausible but lack any basis in fact. This makes it difficult for non-experts to separate fact from fiction. Disinformation campaigns, particularly in the climate context, are already on the rise and may be facilitated by artificial intelligence tools.

“There is a real danger here that false information could negatively impact society’s acceptance of climate policy measures and mechanisms,” says Carl-Otto Gensch, Head of the Oeko-Institut’s Sustainable Products and Material Flows Division, who leads the project. The new project will therefore start by looking at the factual accuracy of the responses. The aim is to develop a methodology for reviewing answers generated by AI-based large language models on key environmental and climate topics and checking their factual accuracy.

Easy access to information

The project will also analyse how the factual accuracy of the answers evolves over a given period of time and to what extent it depends on clear and effective phrasing in the prompts. A further question is whether access to environmentally relevant information is improved through the use of AI-based large language models. And how are these models already being used to process and share knowledge?

Following on from the analysis, the project team will then develop a set of policy recommendations. What form should a regulatory framework take in order to counter the potential risks and leverage the opportunities created by AI-based large language models?

Donation-funded projects support non-client, independent research at the Oeko-Institut. In these projects, our researchers investigate the key bases for the sustainable transformation of our society and make policy recommendations for socially and environmentally just transitions.