Generative artificial intelligence (AI) technologies are here to stay, yet these tools remain far from intelligent or independent without human intervention. In the media sphere, where AI’s impact is particularly pronounced, journalists and media organizations continue holding their ground to ensure the public’s free access to quality information.
At this year’s Prague Media Point, nearly 100 journalists, media students, academics, and newsroom executives discussed the theme of volatility induced by AI in the media. The panels and workshops examined how AI has complicated existing divides and problems in the media landscape and what can be done to address them.
Prague Media Point Coordinator Marek Přeček said this year’s topic on AI “has kind of run to the full in its capacities”, and that the discussion on its regional impact “should be enriched with a more comparative perspective.”
“And as ever, we wanted to thus show not only different perceptions and stances to the issue, but also the way how people have seized it professionally, what’s worked for them in that regard, so that there’s more of an idea what’s actually possible to do with AI in journalism at present,” Přeček added.
Moving targets in AI and media
For Pierre Romera Zhang, Chief Technology Officer of the International Consortium of Investigative Journalists (ICIJ), AI may have shaken journalism, but its reverberations have transformed the profession. Two years since the introduction of OpenAI’s ChatGPT, “it is not yet ready to take our jobs”, Zhang remarked.
“We are still secure… While AI is capable of tremendous accomplishment, it is still not able to work without human intervention.”
Some journalists and academics at Prague Media Point agreed with this observation, but some voiced concerns about how AI induces transformative shifts and widens existing imbalances. Marius Dragomir of the Media and Journalism Research Center believes that regarding AI’s impact on the media, “there are many moving targets to analyze and research.”
“When you look at that picture, I think the answer to this question is that definitely there will be more imbalance unless more regulation, tools, and mechanisms [will be implemented] to check the transparency of these companies. [And] something that we have to discuss is the role of the tech companies in the news media ecosystem.”
The panel on media diversity also addressed how AI is seen as a “technology of whiteness” and whether this could change. For Media Diversity Institute (MDI) executive director Milica Pesić, there are enough studies showing that AI tools can be as “discriminatory, racist, homophobic, and misogynist as the people who trained them.” At the same time, MDI community strategist Yazan Abu Al Rous believes “we can improve” such technologies, but it “usually takes time, energy, and the will at the beginning to make that change.”
“If you engage [the trainers], they’ll say, ‘Yes, we’re trying [and] working.’ But that doesn’t really reflect. At the same time, with the changes happening, everyone will soon have their own AI that they can train. But then, that AI will also hold our biases and discriminations. Our own stereotypes will be put into that.”
Is upskilling journalists on AI education enough?
In the debates, investigations, and projects on AI’s impact on media so far, civil society organizations and academics have encouraged journalists and operators in media institutions to reframe their thinking about technology. Prague Centre for Media Skills project director Adriana Dergam described a committee discussion in a Czech government council that led to a survey asking, “What do journalists know about technology [and] algorithms?”, “How does it affect their lives and professional performance?”, “What do they know about algorithms in the newsroom?”, and “What do the people in IT, marketing, and sales know about that?”
“They didn’t have much idea about, for example, digital self-management. I’m talking about [digital] hygiene, legal aspects, and the threats and pressures female journalists receive,” Dergam shared in the penultimate panel on protecting the European information and media space.
“We then saw that when [asked] about algorithms and media and newsrooms, people—including those who were buying the technology—had a very low idea and awareness about responsibility and accountability on what they were buying. We found that the situation was quite worrisome. So, what we started to do is to advocate increased education and upskilling of journalists [so they could] really understand technology and its risks.”
Meanwhile, Auckland University of Technology media studies professor Verica Rupar calls for “much-needed” interdisciplinary collaborations, not just between academics, civil society organizations, and policymakers but also between journalists and computer scientists.
“Without that knowledge [of computer science and algorithms], it wouldn’t be possible to retain all great things journalism has, as a profession with an expertise in fast processing of information,” Rupar said in the panel on media diversity.
Her co-panelist Al Rous also emphasized what he called a “post-social media” situation, in which gaps in the information space, opened as trust in traditional media institutions has diminished, are being filled by content creators without journalistic training.
“We’ve developed a [two-part] theory of change, where [the first part is] we need to work with journalists to learn more from content creators about the best practices and tips to gain an audience base. The second part is that we need to work with content creators and social media influencers to educate and inform them about important social issues,” Al Rous explained.
Speakers in the panel on adapting media/journalism education also gave their takes on whether the recent discussions on AI have left media organizations and journalists underestimating the agency and critical engagement of young people regarding new technology. Science journalist Pavel Kasik said future journalists may not come from journalism schools or be trained by traditional media, but they “might be on YouTube or TikTok, or something we don’t know yet, [and] will play a role of responsibly informing someone.” For Czech News Agency deputy editor-in-chief Zdenek Veit, students nowadays use free versions of AI tools, and “this may be limiting” their exploration of such technologies. He encourages universities and institutions to consider providing access to paywalled AI tools.
“It is always bad to underestimate anyone because people can always surprise you,” Veit said, adding, “Universities should be aware of the age that we live in now.”
Protecting the information space
In the session on protecting the information space, Council for European Public Space founding director Matthias Pfeffer opined that “the media is more and more ruled by AI”, pointing in particular to how algorithms drive the creation of information “that is critical for democracies, for societies, for ourselves, [and] for our lives.” Such content is then distributed through social media.
“The problem is the big tech companies that we already have are combined with AI—and now with populist parties and anti-democratic powers in society. I really hope that one of the roles of journalism in the future will not only be to use these tools but to make clear that these tools are very dangerous weapons.”
Pfeffer proposed building capacities and “infrastructure for trustworthy news and information” in terms of legislation that would become “one cornerstone for a new, public sphere that could be a European one.”
Vincent Berthier from Reporters Without Borders also called for a “new status quo” in which platforms and tech companies are held accountable “for the harm they are doing.” He suggested the Journalism Trust Initiative as a solution.
“It’s not about checking if the content is true or not because this is the beginning of the end when you start to do that. But it’s checking if the way you produce this content is compatible with ethical journalistic standards. And we call platforms and big tech to promote this kind of content produced by this producer, if you prefer, when a user or citizen is looking for accurate information.”
Adriana Dergam and her organization, the Prague Centre for Media Skills, advocate for “human-centered digitalization”, adding that regulating yet-to-exist technologies ex ante could be risky and threaten rights and freedoms.
In the end, Pfeffer called for an “editorial society” where people have media literacy skills. “But on the other hand, we need to make sure that information, because it is part of critical infrastructure for democracy, is like clean water. The state must take care that the infrastructure is there. We need to build infrastructures that allow new forms of journalism and new forms of business models and public service, at its best.”
Tags: #JournalismAI, AI journalism, AI žurnalistika, Prague Media Point