Impactful research findings on AI generated content and information integrity

Part of UNESCO’s latest report on freedom of expression

UNESCO’s recently released report, World Trends in Freedom of Expression and Media Development Report 2022-2025, reveals a “10% decline in freedom of expression worldwide since 2012 – a level not seen in decades.” The report also sounds the alarm that, over the same period, self-censorship among journalists increased significantly – rising 63%, or roughly 5% per year.

The extensive report is available in full via the UNESCO website. While much of the report covers familiar yet worrying themes, one topic stood out to us as especially timely and unprecedented – “AI generated content and information integrity.” Below is an excerpt from the UNESCO report on the matter.

The abuse of generative AI to create deepfakes that harm human rights is a growing problem with serious implications for freedom of expression and access to information. Women have been silenced, shamed and blackmailed with deepfake non-consensual sexual imagery. Journalists, along with celebrities and political leaders, are being impersonated for financial scams, while AI-personalized “phishing” attacks have become cheap to mount. The abuse of generative AI encompasses new threats to cybersecurity and to election integrity, as well as portending problems for scientific and academic integrity.

In response, a number of countries have applied existing laws or set out new provisions concerning such abuses of human rights, although concerns have also emerged that restrictions are sometimes too broadly worded (thus allowing selective application) and that the prescribed penalties are not always proportional to the offenses.

Another response to generative AI is technological. In 2024, major tech companies agreed on standards, including metadata and labels, for adding “Content Credentials” to AI-generated content. Corporate opacity prevents full assessment of how this is unfolding. Further, even when they are applied, these technical measures are not a panacea since they can be hacked or bypassed (especially in text, audio, and video formats) or be distributed without disclosure labels.

The technology of generative AI also raises further critical concerns for information integrity, as well as for cultural and linguistic diversity. The models are generally trained on datasets heavily skewed toward English-language content and perspectives from the Global North. Their outputs entrench cultural biases and marginalize underrepresented voices and languages. A UNESCO study, among others, shows how, based on their training data, the outputs of these systems often reflect gender and racial stereotypes.

Even when supplemented by “reasoning models,” LLMs pose inherent challenges to information integrity. They can misleadingly combine unrelated information in their outputs, as well as fabricate new plausible, but incorrect, content (described as ‘hallucinations’). Moreover, generative AI outputs often obscure access to original sources, limiting the ability to verify information, trace content origins and copyright holders, and detect pseudo-realistic content that has been fabricated.

While journalists have to continuously strive to win trust as honest seekers of truth, the design and operation of chat interfaces create the false impression that users are engaging with a trustworthy and objective human-like interlocutor. The reality, on the other side of the screen, is a machine programmed with content and values that cannot be trusted.

The power structure within AI markets exacerbates these issues. The infrastructure, data, and resources needed to develop and train AI models remain concentrated among a few dominant companies, further deepening inequalities in the global digital ecosystem, as well as increasing power asymmetries between these companies and the news media.

The sheer market dominance of these firms illustrates the imbalance: Three major technology companies each have a market valuation of around $3 trillion (USD) – a figure rivalling the GDP of the entire African continent.

The deregulatory drive for unfettered AI development, in the name of innovation and international competition, is a growing trend, with belligerent tactics being deployed even against large jurisdictions like the EU.

This push seeks to dismantle guardrails that protect freedom of expression, privacy and information integrity. Rival expansionist giants now run systems with high risk yet little consideration of, or investment into, mitigating negative externalities. As the big players get larger, it becomes harder for alternative models to break into the market.

On the other side of this growing trend, some countries and regions are asserting their sovereignty over both citizens’ data and the informational influence of foreign digital companies. In Africa and in Latin America and the Caribbean, for instance, courts and regulators in several countries have affirmed national jurisdiction despite resistance from the companies.

Not all efforts to rein in big tech, however, have been in favour of freedom of expression. A number of countries have sought to curb online expression on LGBTQI issues, while others have continued to demand that digital services practice censorship, or give access to user data, without compliance with the internationally agreed standards of legality, legitimate purpose, and proportionality that are required for intrusions on human rights.

The Daily Herald

Copyright © 2025 All copyrights on articles and/or content of The Caribbean Herald N.V. dba The Daily Herald are reserved.


Without permission of The Daily Herald no copyrighted content may be used by anyone.
