[Illustration: several lines from different directions tangle in a person's head. Illustration by Dumitru Ochievschi]

Scientists, along with the general public, have been amazed at the rapid advancement and public explosion of Large Language Model (LLM)-based AI tools over the past year. Many scientists have also experimented with these new tools, and as a result, AI is now involved at every level of the scientific process. Whether in research; data extraction, interpretation, or visualization; text production; or the creation of other multimedia content: AI is fundamentally changing the way we generate and share knowledge.

In the long term, the impact of this relatively new development is likely to be as transformative as the digitization of scholarly publishing and the media landscape in the 1990s and 2000s. AI tools will undoubtedly increase the productivity of science, and research output will continue to accelerate. Yet there are also significant reasons for concern. A key question is what this will mean for external science communication: the exchange of information and opinions between science and the public. The change there will be at least as profound as in internal science communication, that is, the professional exchange between experts. What will this mean for us as a society?

How AI Will Transform Science Communication

First, a variety of new value chains for scientific content are emerging: With LLM-based applications, specialist publications can be summarized in a matter of seconds into “generally understandable” news items. These, in turn, can be translated into any number of languages and then tailored with other tools to specific target groups, be they 12-year-old girls interested in nature conservation or relatives of patients with seasonal affective disorder, to name just two examples.


Additional tools allow these texts to be converted into audio versions, similar to audiobooks or podcasts. Finally, other AI applications may be used to create accompanying moving image content, infographics, or even entire short-form videos, such as those widely used on TikTok, YouTube, and Instagram.

Any scientific topic can be delivered in the language, output format, tone, and depth of content that are useful and engaging to the target audience. There is enormous potential here, including for greater educational equity and opportunity. The creators of such value chains will not only be scientists who want to bring their own research to the world through as many channels as possible (or who are simply trying to meet the requirements of their funders). The communications departments of research institutions, funding foundations, and the specialist media will also play an important role.

Another positive aspect of flourishing external science communication may be the emergence of new participatory practices, similar to citizen science approaches: Lay people can work with scientists to ensure that texts and infographics are better tailored to the information needs, prior knowledge, and media use habits of other lay people. AI tools may also be useful here, for example by helping to prepare topics that are of interest only to small audiences and therefore rarely appear in traditional media.

How will traditional media evolve? Newspaper and magazine publishers, for example, will no longer only offer access to their (archived) content through a simple full-text search and subsequent output of more or less relevant individual articles. Instead, they will be able to provide (paying) users with various input and output channels that allow highly individualized interaction with their content, including personalized dialogues.

Preventing Harms

It is not, however, only the traditional players in science communication who will use AI to bring science topics to the people in a more targeted way and, above all, on a larger scale than ever before. In principle, anyone can use intelligent tools to intervene at the various nodes of the burgeoning publication process. All in all, we are therefore facing a deafening cacophony of science communication, and much of it will not be positive.

False information hallucinated by AI, as well as misinterpretations and fake content deliberately spread by conspiracy believers, are likely to shape numerous discourses. LLM-based text and image generation programs open up vast opportunities for malicious actors to promote their own agendas, for example through fake primary publications that mimic the appearance and style of peer-reviewed articles and serve as “original sources” for ideologically motivated explanations of the world, or through deepfakes that “prove” some deluded conspiracy theory.

Internal science communication also faces challenges. Just a few months after AI programs such as ChatGPT and Midjourney became accessible to the world, one of Europe’s major research funding organizations, the German Research Foundation (DFG), faced the task of clarifying how, and at which points in the scientific process, the use of these new tools should and should not be permitted: Should AI be allowed to co-author when scientists submit project ideas or publish results? Nothing less than the progress of scientific knowledge is at stake in this primary communication, which serves the professional exchange between specialists. The DFG’s preliminary answer: In principle, researchers may use AI tools, for example, to help them analyze data and write scientific publications or proposals. However, the exact type of use and the specific tools must be disclosed, and ChatGPT or Gemini may not be named as a co-author. These are reasonable guidelines because they create transparency. The opposite, a blanket ban, would neither make sense nor be enforceable.

Clever prompt engineering and innovative AI-assisted extraction tools will allow each of us to distill the content that best suits our personal interests from reputable primary scientific literature—although here, too, the question of quality assurance arises, as does the need for media literacy. Think of personalized treatment plans designed by an AI for patients from the latest global medical literature.

Finally, science journalism will face further challenges, including economic ones, as a result of the foreseeable flood of new knowledge content. External science communication produced according to professional journalistic criteria, and independent external observation of science, remain valuable assets for democratic opinion-forming that we as a society cannot do without, just as we cannot do without trustworthy independent media.

Of course, there are also opportunities here: Data journalists are by no means the only ones who will benefit from AI in their daily work. LLM-based tools provide editorial teams with new methods that enable research at a higher level than before. For example, the automated monitoring of entire specialist scientific fields or the AI-assisted compilation of data from diverse, heterogeneous sources and their evaluation will be able to increase the quality and depth of journalistic work.

However, the genuine added value of journalistic science reporting (critical assessment, commentary, and contextualization) will not be provided by generative AI, at least for the time being. So far, this remains the domain of reflection by human experts.

This is precisely where an opportunity to ensure journalistic quality arises: AI companies, which constantly need fresh primary content from science and the media to train their models, could contribute to the refinancing of quality journalism through a compulsory levy. Since this is unlikely to happen voluntarily, and since it must also be decided who should benefit in the highly complex media ecosystem of the AI age, legislators would have to step in.


Read more stories by Carsten Könneker.