Sun. Jul 13th, 2025

Leading with AI: How Sri Lanka’s Editors Are Preparing Newsrooms for the Future

As artificial intelligence continues to reshape global industries, Sri Lanka’s media sector took its first unified step into this new frontier. In a landmark initiative, the Lanka Education and Research Network (LEARN), in collaboration with the Sri Lanka Press Institute (SLPI), hosted the workshop “AI Leadership in the Newsroom” — a first-of-its-kind session designed specifically for newsroom editors from Sinhala, Tamil, and English press outlets.

The session opened with remarks from Mr. Kumar Lopez, CEO of the SLPI, who underscored the urgent need for newsrooms to embrace AI to remain relevant, efficient, and innovative. He emphasised that as the global media landscape evolves rapidly, Sri Lanka’s press must not lag behind but instead lead the way in adopting technology that can enhance both journalistic impact and operational agility.

This introductory workshop didn’t just explore what AI is. It focused on what AI means for journalism — now and in the years to come. From editorial strategy to practical toolkits, the event offered a grounded look at how journalists can use AI to improve reporting, streamline workflows, and uphold truth in an age of automation. The session, led by the CTO of LEARN, Dr. Asitha Bandaranayake, the CEO of LEARN, Prof. Roshan Ragel, and Chief Editor of Arteculate Asia, Mazin Hussain, brought together national expertise to spark one key question: How can Sri Lanka’s media lead the AI conversation, rather than follow it?

An Introduction to AI for Journalists

The workshop began with Dr. Asitha Bandaranayake laying the groundwork for understanding Artificial Intelligence. He started by tracing the evolving definitions of human intelligence, illustrating how the complexity of replicating human thought has challenged both scientists and philosophers over time. This was an entry point into the broader discussion on what AI truly is, not just in theory, but in its growing impact on everyday life. Dr. Bandaranayake then charted the historical trajectory of AI, from its golden years and the expert systems boom of the 1980s to the current resurgence driven by massive datasets and affordable computing power. He highlighted how AI now permeates sectors ranging from autonomous vehicles to smart agriculture, illustrating that AI is no longer an emerging trend but an integrated part of daily life.

This point was echoed by Prof. Roshan Ragel, who explained how tools powered by these technologies can handle handwriting recognition, speech-to-text conversion, image classification, and reading comprehension, and can even generate working code. As computational power has increased exponentially, these tools have grown steadily more capable, with some now demonstrating reading comprehension and language understanding above the average human level. He also noted that while these systems are currently most advanced in English, rapid progress is being made toward accessibility in languages like Sinhala and Tamil. However, he cautioned that developers of these AI systems may soon face a new challenge: running out of high-quality training data, which could cause AI performance to plateau by 2026.

Both speakers emphasised that these technological developments are not theoretical. Citing a global study of 376 media professionals across 51 countries, Dr. Bandaranayake revealed that over 60% of newsrooms have already automated backend processes using AI, and 87% expect to integrate AI into newsroom workflows soon. These insights were supported by findings from the Reuters Institute, Oxford University, and UNESCO, especially around the implications of AI on press freedom and editorial independence.

The Potential Opportunities of Using AI in the Newsroom

Across all presentations, one thing was clear: AI is not just a technical tool; it’s becoming a strategic asset in the modern newsroom. For journalists, editors, and producers alike, it offers powerful new ways to create, edit, and deliver content — all while meeting the demands of shrinking resources and rising audience expectations. 

Dr. Bandaranayake explained that advances in Natural Language Processing (NLP) — the same technology that powers chatbots and voice assistants — now allow news consumers to choose how they engage with content. Articles can be automatically summarised, translated into native languages like Sinhala or Tamil, read aloud, or even delivered in tiered formats that suit different attention spans. This evolution makes journalism more accessible, inclusive, and user-focused.
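To make the idea of tiered formats concrete, here is a toy sketch of how an article might be served at different lengths. It uses naive sentence truncation purely for illustration; a real newsroom pipeline would use an NLP summarisation model, and the function and tier names here are invented:

```python
import re

def tiered_views(article, tiers=(1, 3)):
    """Toy sketch of tiered reading formats: return the first n sentences
    for each tier, plus the full text. Real systems would summarise with
    an NLP model rather than truncate."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    views = {f"tier_{n}": " ".join(sentences[:n]) for n in tiers}
    views["full"] = article
    return views

views = tiered_views("One. Two. Three. Four.")
print(views["tier_1"])  # → One.
print(views["tier_3"])  # → One. Two. Three.
```

The same dictionary of views could then feed a "short / medium / full" toggle in a reader-facing app, which is the kind of attention-span choice described above.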

Building on this, Prof. Roshan Ragel introduced a range of GenAI tools already shaping newsroom practices:

  • ChatGPT, DeepSeek, and Gemini are now capable of brainstorming angles, rewriting articles in different tones, and aiding in headline generation.
  • Google Pinpoint helps investigative journalists transcribe interviews, search archives, and analyse large document sets.
  • Perplexity AI, a fact-based search engine, can surface credible sources, generate reference lists, and provide instant background for breaking stories.
  • JournalistToolbox.ai was highlighted as a go-to portal featuring thousands of curated tools specifically tailored for journalists, along with free tutorials.

He also demonstrated how Generative AI is enabling fast and flexible content creation:

  • Translating stories while retaining tone and cultural context.
  • Generating short-form explainers, social media captions, and visuals for complex topics.
  • Creating video summaries with auto-subtitles and voiceovers — all within minutes.

From a hands-on perspective, Mr. Hussain shared how his personal AI journey has mirrored the industry’s trajectory. What began with using Grammarly for proofreading evolved into Otter.ai for automated interview transcripts — tools that saved time and improved output. With the arrival of ChatGPT, the writing process itself became faster, allowing him to spend less time on structural edits and more on reporting.

He illustrated this transformation through global case studies:

  • Express.de in Germany saw up to 80% higher engagement through AI-generated, A/B-tested headlines.
  • Nikkei in Japan used AI to offer both quick news digests and deep-dive options for readers, giving them control over their level of engagement.
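The Express.de example rests on a standard technique: serving two headline variants to comparable audiences and comparing click-through rates. A minimal sketch of such a comparison using a two-proportion z-test follows; the figures are invented for illustration, not Express.de's actual data:

```python
import math

def headline_ab_test(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates (CTR) of two headline variants.
    Returns (lift, z): the relative CTR improvement of B over A, and
    the two-proportion z-test statistic for that difference."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis of equal CTRs.
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    lift = (p_b - p_a) / p_a
    z = (p_b - p_a) / se
    return lift, z

# Invented example: headline B lifts CTR from 5% to 9%.
lift, z = headline_ab_test(clicks_a=50, views_a=1000, clicks_b=90, views_b=1000)
print(f"lift: {lift:.0%}, z: {z:.2f}")  # lift: 80%, z ≈ 3.51
```

A z-value above roughly 1.96 suggests the difference is unlikely to be chance at the 5% significance level; headline-testing tools automate exactly this kind of check before declaring a winner.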

Ultimately, the presenters agreed that AI isn’t here to replace journalists — it’s here to augment their work. By taking over repetitive tasks and offering real-time creative support, AI allows journalists to focus on what they do best: asking the right questions, telling compelling stories, and upholding editorial integrity. In an era where resources are limited, AI presents a powerful opportunity to do more with less, while reaching audiences in smarter, more personalised ways.

The Ethical Challenges Posed by AI to Journalists

“AI doesn’t replace journalism. It gives you a running start,” remarked Mr. Hussain — but with that head start comes a complex terrain of ethical responsibility. Throughout the workshop, presenters made it clear that while AI brings efficiency and innovation, it also introduces profound challenges that journalism must navigate with care.

One of the most pressing concerns is the rise of misinformation and deepfakes. AI-generated videos, images, and audio are becoming so realistic that they can easily deceive audiences. As Prof. Roshan Ragel demonstrated, tools like TrueMedia.org are now being developed to detect political deepfakes, underscoring the urgency of adapting newsroom verification protocols to keep pace with AI-generated disinformation. Equally concerning are AI hallucinations — fabricated facts presented with the fluency and confidence of truth. Mr. Hussain likened these to anonymous tips: nothing should be trusted without independent verification. These risks are amplified when AI is used by inexperienced journalists or over-relied upon in fast-paced newsrooms.

Bias is another critical issue. As Prof. Ragel explained, AI models mirror the data they’re trained on — and if that data is racially, geographically, or socioeconomically skewed, the output will reflect those biases. Journalists must ask not only what an AI tool produces, but whose interests it serves and who may be left out.

Privacy and intellectual property also took centre stage. Prof. Ragel highlighted high-profile lawsuits over the use of publishers’ content without consent, including The New York Times suing OpenAI and Google being fined in France. He warned that GenAI platforms often scrape public data without users’ knowledge, making it critical for journalists to understand the terms of service, avoid inputting sensitive information, and anonymise case details when using such tools.

He also warned of a Dunning-Kruger effect in AI usage, where users, particularly younger journalists, might become overconfident in outputs they don’t fully understand. Over-reliance on AI, especially without verification, risks publishing errors and undermines the development of core journalistic skills like writing, critical thinking, and editorial judgment.

To counter these threats, the presenters laid out practical safeguards:

  • Set clear editorial policies: Define exactly what AI is permitted to do (e.g., suggest headlines, assist in research) and where its use is prohibited (e.g., manipulating images, writing final drafts).
  • Strengthen verification: Treat every AI output as an unverified source — cross-check and fact-check rigorously.
  • Maintain human accountability: Final responsibility must lie with a journalist, not a machine.
  • Invest in AI literacy: Newsrooms must actively train their teams to understand AI’s strengths, weaknesses, and limitations.
  • Safeguard confidentiality: Avoid uploading sensitive or identifying information into public GenAI tools unless you’re certain of their security.

Mr. Hussain shared how these principles are already in practice at Arteculate, where the team uses AI to enhance workflow but under strict internal guidelines that reinforce human oversight at every stage. These are not abstract ideals — they are everyday safeguards being implemented in real-world newsrooms engaging directly with the future.

Finally, Prof. Ragel reminded attendees that even at a national level, ethical governance of AI is being prioritised. He introduced the Sri Lankan National AI Strategy, which includes pillars focused on ethics, regulatory frameworks, public awareness, and responsible data use. This institutional context reaffirms that the ethical use of AI in journalism is not optional — it’s foundational.

Charting the Path Forward for AI in Sri Lankan Journalism

The AI Leadership in the Newsroom workshop marked more than just an introduction to emerging technology — it signalled the beginning of a larger transformation in how journalism is practised, produced, and protected in Sri Lanka. From foundational concepts to real-world newsroom applications, and from innovative tools to pressing ethical concerns, the event equipped editors with the knowledge and clarity needed to navigate the AI era with confidence.

What emerged was not just a sense of what AI can do, but a deeper understanding of what journalism must do in response. As echoed by all three presenters, the future of journalism is not one where machines replace people, but one where journalists use technology to reclaim time, sharpen focus, and deepen impact. In a media environment facing rapid change, shrinking resources, and growing threats to truth, Sri Lanka’s editors now face a choice: to wait and react, or to lead and shape. This workshop made it clear that with the right policies, critical thinking, and ethical leadership, AI can be an ally in upholding the core values of journalism.

By Arteculate

Arteculate is your guide to the Asian tech industry. We give you unparalleled insights; accurate, local tech news; thoughtful features; and sometimes scathing opinions on where things are headed. Stay tuned for the best of Asia!
