
In 2023, readers of the respected technology outlet CNET were quietly being served articles written not by journalists, but by artificial intelligence. The stories, many covering complex financial advice, were presented without any disclosure that they had been AI-generated. Only after independent journalists and critics began examining the articles did glaring errors come to light: flawed calculations, misleading information and instances of plagiarism. What made the episode more unsettling was that CNET’s newsroom itself had initially failed to catch these errors. For a brief but telling moment, AI had slipped into journalism’s bloodstream without the kind of rigorous oversight that is supposed to safeguard public trust.

The incident at CNET was not an outlier. It was a warning about how easily AI, when deployed without transparency or caution, could undermine journalism from within. As the technology continues to advance, newsrooms face a pivotal choice over whether AI will become a friend to journalism, strengthening its foundations, or a foe that quietly erodes them. Much will depend not on the tools themselves, but on how carefully and responsibly they are used. In some cases, AI has already proven to be a dangerous disruptor of the information ecosystem. In others, it holds the potential to enhance journalistic practices and deepen the quality of public discourse.

The darker side of AI’s influence on journalism has already begun to materialise. AI-powered tools are now capable of producing highly convincing deepfakes that can easily deceive even a trained eye. The technology to fabricate a politician’s speech, invent a protest that never happened and create an entire event from scratch is no longer reserved for sophisticated actors. It has been ‘democratised,’ available at the click of a button.

One need not look far for examples. Fabricated AI-generated images of Donald Trump being arrested went viral in 2023, spreading confusion even among reputable media outlets before being debunked. Similarly, a fake narrative about Olena Zelenska, the first lady of Ukraine, purchasing a luxury car with aid money circulated widely, with AI-generated imagery lending it a veneer of authenticity. These cases are not isolated errors; they reveal a deeper risk. When it becomes impossible for audiences to distinguish the real from the fabricated, trust in media, and in verifiable reality itself, begins to erode.

AI has also made it easier for bad actors to scale disinformation campaigns. Synthetic accounts, automated networks and coordinated troll armies now spread false narratives at speeds and volumes that were unimaginable just a few years ago. Entire online ecosystems can be flooded with persuasive but false information within minutes of a major event. This places enormous pressure on journalists and fact-checkers, who must work against the clock to debunk falsehoods before they solidify in the public imagination. In this way, AI has acted as a destabilising force, exacerbating the already fragile state of the information ecosystem.

Yet to conclude that AI is inherently hostile to journalism would be equally misguided. When used thoughtfully and ethically, AI can revolutionise modern journalism. In the everyday rhythms of a newsroom, it can automate repetitive tasks that would otherwise drain time and resources, such as transcribing interviews, sorting through large datasets and monitoring public records for newsworthy developments.
It can assist investigative journalists in sifting through thousands of documents, spotting patterns that might otherwise go unnoticed. It can help make sense of hundreds of gigabytes of public data, locate obscure records and, in some cases, generate useful predictions. AI can also help newsrooms surface local stories that would not have emerged through traditional editorial processes, by analysing open data or social media chatter in ways no human team could feasibly manage. It can do so, however, only if it is deployed meaningfully within newsroom systems, as part of the underlying workflow rather than as a detached addition.

Moreover, AI can play a crucial role in knowledge management within news organisations. In a field where information is constantly produced, retrieved and updated, intelligent systems that can organise archives, tag material accurately and retrieve relevant information on demand can significantly enhance reporting quality. AI-powered recommendation systems, when used responsibly, can also help newsrooms personalise content for readers, allowing important stories to reach audiences who might otherwise miss them. In all such applications, AI does not replace the journalist; it equips the journalist to do more meaningful work.

This distinction is central to how we at Media Matters for Democracy are approaching the question. Our goal is not to automate journalism, but to empower it. We want to ensure that journalists are supported by AI, not displaced by it. In line with this vision, we are assisting modern newsrooms in designing and deploying their own custom-built enterprise AI platforms, tailored to their specific editorial and operational needs. These platforms are built not only for efficiency but with an emphasis on reliability, transparency and control, particularly in areas like data handling, research, analysis and knowledge management. By creating in-house AI solutions that reflect newsroom values and editorial goals, we seek to minimise the risk of errors and hallucinations such as those seen in the CNET example.

The real divide is not between using AI and rejecting it, but between using AI to empower journalism and allowing it to undermine journalism’s core values. There is no escaping the fact that AI will continue to play a growing role in how news is produced, distributed and consumed. The question is whether journalists, editors and news organisations can shape that role thoughtfully. If left to profit motives alone, the same tools that promise efficiency could very well deepen disinformation and erode trust further. But if integrated carefully, with a commitment to ethical standards and a recognition of AI’s limits, these technologies could also revitalise journalism at a time when it is needed most.

AI is neither a friend nor a foe by default. It is a set of tools, powerful and neutral only in theory, but profoundly shaped by human choices in practice. It has already demonstrated its ability to act as a destabilising force when used irresponsibly, but it has also shown promise as a means of strengthening journalism’s foundations. The future of journalism in the age of AI will not be determined by the technology itself, but by how thoughtfully the profession rises to the challenge it presents.