A popular warning goes like this: Artificial intelligence (AI) is a threat to many jobs in newsrooms, and to the credibility of journalism itself. It will replace human beings and produce fake news, including deepfakes of such quality that audiences can no longer distinguish fact from fiction.
At the CTeC Asia conference of KAS Media Programme Asia, leading experts discussed the topic of AI in the media and raised the aforementioned questions. But most of the 25 speakers came to more optimistic conclusions. Mathias Doepfner, Chairman and CEO of Axel Springer, Europe’s largest media publishing company, told an international audience of young journalists and experienced editors from many Asian countries in no uncertain terms: ‘AI will improve journalism.’

Doepfner’s advice? Journalists and newsrooms should be prepared to embrace technological advances in order to catapult the industry into the future. ‘We have to embrace progress. We have to embrace opportunities and take advantage of the tools not only to survive, but also to do better. We should be at the forefront of progress,’ Doepfner said in his virtual keynote. ‘We can delegate all the boring stuff of [journalism], the less exciting, less distinguishing elements of our business, to machines,’ he stated, ‘and we can focus on the more complicated, more exciting part […] of that business, excel on that level and make journalism more impactful to society.’

The CEO and former journalist underscored that it is unlikely that reporters will lose their jobs to mechanical counterparts. ‘It is obvious that AI and language models can be very manipulative and have the potential to overrule and undermine democracy,’ he said. ‘[But] everything that deals with opinion, commentary, should be done by original brains and minds.’ The most effective newsrooms will be those that manage ‘to integrate tech and journalism creativity,’ Doepfner claimed.
This optimistic view is supported by researchers. More and more newsrooms around the world are experimenting with AI across their news operations because it presents opportunities for growth, explained Professor Charlie Beckett of the London School of Economics, founding director of Polis at LSE. He cited a Polis survey of journalists and news managers from more than 100 news organisations across almost 50 countries, in which 85% said they had experimented with AI in their newsrooms. ‘It struck me about five years ago that this weird thing called machine learning, natural language processing, artificial intelligence, could be the next big thing. And I think in the last 12 months, with the rise of generative AI, I certainly feel that this is […] almost like a tsunami,’ Beckett said. Most of the newsrooms Polis surveyed believe AI will be useful for text summarisation and generation, content personalisation and automation, and for increasing newsroom efficiency, for example by deploying chatbots to conduct preliminary interviews and gauge public sentiment on issues, all of which are crucial to improving the experience of news consumers. Resources that would have gone to banal and repetitive tasks such as transcribing, Beckett explained, could be devoted to more ‘human’ aspects of journalism such as investigations, specialist news topics, human interest stories, and real-world reporting.
In Malaysia, Star Media Group currently uses AI to generate news voiceovers from scripts that journalists provide. This cuts the processing time from 20 to 30 minutes down to 2 or 3 minutes. ‘It is 10 times faster. So our editors can do more things, such as cover more events,’ Michelle Tam, an editor from Star Media Group, explained at the conference.
However, not everyone agreed with this sentiment. A number of speakers emphasised the importance of ensuring that a human journalist retains a critical role in news production. ‘The reporter should be the one who picks the subject, writes or edits the script, supervises translation, and does the voiceover,’ Dimitri Bruyas, Head of English News at Taiwan’s TVBS, stated, adding that AI does not perceive time and ethical issues the way reporters do.
Don Lejano, a channel editor at the Philippine newspaper Inquirer, said that editors at the paper are also allowed to use AI to come up with SEO-friendly headlines. ‘Basically, our position is that we allow limited use of AI in news production, given that the guidelines we have set are adhered to,’ Lejano said.
A panel titled ‘Harnessing AI for Misinformation Detection’, moderated by DataLEADS founder and CEO Syed Nazakat, brought together Dr. Scott Hale, Director of Research at Meedan; Irene Jay Liu, Regional Director of IFPIM; Kuek Ser Kuang Keng, founder of DataN; and Dr. Taberez A. Neyazi, Assistant Professor and Director of DigiCamp & Propaganda in Asia at the National University of Singapore. The discussion centred on the multifaceted interplay between technology and the spread of misinformation, and on the complexities surrounding AI’s potential to both exacerbate and alleviate the growing issue of misinformation.
At another panel discussion, disinformation researchers and professors from Japan, Hong Kong, and Singapore presented the findings of their studies on how China and Russia use ‘sharp power’—informal and unofficial tactics used to sway public opinion favourably towards authoritarian regimes—on certain issues.
Professor Maiko Ichihara of Hitotsubashi University (Japan) further examined the work of Russian and Chinese trolls on recent events such as the Russia-Ukraine war and the release of treated Fukushima nuclear waste water into the ocean. She found that while the plan to release treated nuclear waste water galvanised support for local Japanese merchants, some members of the Japanese public were influenced to be sympathetic to the Russian invasion. This point was echoed by fellow Japanese academic Professor Tetsuro Kobayashi of Waseda University, whose separate study found that ‘undemocratic narratives projected from authoritarian states have significant persuasive effects on the Japanese public.’
Ultimately, AI can drive new business ideas in the media, predicted Heike Weigelt, CIO of Germany’s Funke Media Group. Artificial intelligence and large language models can help newsrooms generate new revenue streams and even retain audiences, the manager of Germany’s third-largest publisher said in a keynote address. One way to do this is by using AI to generate an electronic newspaper with content curated for each reader. ‘Personalisation has to be more creative than your bubble. You have to work out the right balance between content that fits the interest and still add curated content that you want to transport.’