OSINT Pulse: November 2025 | How AI is reshaping evidence and information

Global research from the World Bank pointed to widening gaps in AI access and talent, raising concerns about who benefits from current advances.

By Dheeshma Puzhakkal
Published on: 30 Nov 2025, 4:15 PM IST


Hyderabad: November brought several important shifts in how AI is shaping information ecosystems, newsroom practices and even everyday consumer interactions.

AFP introduced an open-access training course to help journalists verify AI-generated content, while new data from the European Digital Media Observatory showed a steady rise in synthetic media within fact-checking outputs. Global research from the World Bank pointed to widening gaps in AI access and talent, raising concerns about who benefits from current advances.

Audience trust also remained a major concern, with new findings showing that many readers feel uneasy about AI in journalism. Meanwhile, real-world cases, such as the Swiggy Instamart refund fraud incident, demonstrated how easily AI-generated edits can bypass customer service systems that rely on basic visual proof.

These developments together reflect how quickly AI is influencing verification work, public trust and digital governance, in India and internationally.

1. AFP launches open-access course on verifying AI-generated content

AFP has released an open-access online course designed to help journalists verify AI-generated content — a timely resource as synthetic media becomes increasingly common across platforms.

The one-hour module covers:

- The impact of AI on the information ecosystem

- Verification best practices for newsrooms

- Key types of AI-driven misinformation

- Methods to track, analyse and identify AI-generated material

- A curated list of useful tools and resources

It’s practical, concise and ideal for newsroom onboarding or for upskilling reporters who regularly work with user-generated content.

2. AI-generated disinformation continues to rise, EDMO reports

A new analysis by the European Digital Media Observatory (EDMO) highlights a clear and accelerating shift toward AI-generated disinformation across Europe. With several generative AI tools launched this year, disinformation actors are increasingly using them to produce deceptive images, audio and text at scale.

The report notes that this trend is unlikely to slow, given the consistent patterns observed in previous months and the ease with which synthetic content can now be created.

Key data point from October:

- Out of 1,722 fact-checks published by 32 European organisations, 210 focused on AI-driven disinformation — roughly 12 per cent of all fact-checking output.

Similar patterns are being observed in India, where fact-checkers are also reporting a steady increase in cases involving AI-generated political content, manipulated audio and synthetic narratives.

This confirms that synthetic media is no longer a niche concern; it’s becoming a significant portion of the misinformation ecosystem that verification teams must prepare for.

(Screenshot from EDMO report)

3. World Bank report warns of deepening global divides in AI development and access

The latest World Bank Digital Progress and Trends Report highlights how unevenly AI innovation and benefits are distributed across the world.

AI development remains heavily concentrated in high-income countries, even as China and India rapidly expand their capabilities.

According to the report, wealthy nations account for 85 per cent of AI start-ups, 91 per cent of venture capital funding and 54 per cent of AI publications. It also notes a significant power shift toward industry, with 80 per cent of notable AI models now originating from private labs, reducing academia’s influence over the direction of research.

GenAI adoption is highest in high- and middle-income countries

GenAI adoption (measured through ChatGPT traffic) is strongest in countries with robust digital infrastructure, skilled workforces and large youth populations, with high- and middle-income countries accounting for more than 99 per cent of global usage.

Usage is also demographically skewed toward young, college-educated men. Left unaddressed, the report warns, AI’s economic benefits will remain concentrated among advanced economies, large corporations and high-skilled workers, reinforcing existing inequalities.

Connectivity emerges as a core barrier: without reliable internet or data capacity, users in many developing countries cannot meaningfully participate in the AI ecosystem, whether through uploading datasets, accessing cloud compute or integrating models into local workflows.

Developing nations are at a disadvantage

The report also flags the severe concentration of computing resources, which places developing nations at a structural disadvantage.

Digital talent remains similarly uneven, with China and the US each accounting for 21 per cent of global AI specialists, India 15 per cent, and low-income countries collectively contributing less than 1 per cent. Talent migration from low- and middle-income regions to richer countries continues to exacerbate the divide.

Overall, the findings point to an AI landscape that is not only technologically imbalanced but also structurally unequal, with far-reaching implications for global development.

4. Why Poynter’s AI Ethics Starter Kit matters for newsroom trust

A new study from Trusting News shows that audiences remain deeply uneasy about AI in journalism. Readers described feeling ‘skeptical, uncomfortable and worried’ when they learn that newsrooms use AI — and in many cases, AI disclosures actually reduced trust rather than increasing it. This comes at a time when AI-related mishaps in newsrooms continue to surface, further straining already fragile audience confidence.

Against this backdrop, Poynter’s AI Ethics Starter Kit is an essential tool for news organisations.

It provides a structured framework to help newsrooms define clear guidelines on how AI will — and will not — be used. The kit emphasises accuracy, transparency, human oversight and editorial responsibility, making it easier for organisations to align AI adoption with their mission and values. Its latest updates also address visual journalism and product teams, areas where synthetic media and automation pose particularly high risks.

At a time when public trust in media is extremely delicate, having a transparent, well-defined AI policy is no longer optional. The starter kit helps newsrooms safeguard credibility, avoid unintentional misuse, and communicate clearly with audiences about their AI practices.

(Screenshot from Trusting News report)

5. Swiggy Instamart case shows customer support teams are unprepared for AI-generated refund fraud

A recent Swiggy Instamart incident highlighted how easily customer service systems can be misled by AI-generated images.

A user reportedly used Google’s Gemini Nano to edit a photo of an egg tray, making it appear as if all the eggs were cracked when only one was actually damaged. Since most refund workflows rely on quick visual checks and static photo proof, the support team issued a full refund without realising the image had been altered.

This exposes a growing problem: customer service dashboards and verification processes are not designed to detect AI manipulation.

The incident is an early example of what many businesses will face as generative tools become more accessible. Refund systems built around photographic proof will need stronger safeguards, automated detection and better agent training to prevent misuse.
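To make the idea of automated detection concrete, here is a minimal, illustrative sketch of what a first-pass screening step in a refund workflow could look like. Everything here is an assumption for illustration: the function name, the keyword list and the byte-level heuristics are hypothetical, and real systems would rely on provenance standards such as C2PA Content Credentials and dedicated forensic tooling rather than simple byte scans.

```python
def basic_provenance_flags(path):
    """First-pass screening of an uploaded photo for provenance signals.

    Heuristic only: editors may strip or preserve metadata unpredictably,
    so an empty result proves nothing about a photo's authenticity.
    """
    with open(path, "rb") as f:
        data = f.read()
    flags = []

    # JPEG files usually carry EXIF metadata in an APP1 segment labelled
    # "Exif". Many editing and re-encoding pipelines drop it, so its
    # absence is a weak signal worth surfacing to a human reviewer.
    if data[:2] == b"\xff\xd8" and b"Exif" not in data[:4096]:
        flags.append("no EXIF segment (common after editing or re-encoding)")

    # C2PA Content Credentials manifests embed the ASCII label "c2pa".
    # If present, the recorded edit history can be inspected with C2PA tools.
    if b"c2pa" in data:
        flags.append("C2PA manifest present: inspect its recorded edit history")

    # Hypothetical keyword screen for editor names left in metadata text.
    lowered = data[:65536].lower()
    for kw in (b"gemini", b"firefly", b"photoshop"):
        if kw in lowered:
            flags.append("editor marker found: " + kw.decode())

    return flags
```

A check like this would only triage uploads for human review; it cannot prove an image is genuine, since a careful fraudster can strip or forge metadata entirely.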

According to industry experts, quick-commerce and delivery platforms already operate on thin margins, and even a small rise in AI-assisted refund fraud could lead to a significant financial impact. Most teams currently lack detection tools, and support agents have no reliable way to distinguish authentic images from AI-generated edits.

(Image Credit: @kapilansh_twt/X)

See you next month!

Dheeshma Puzhakkal

(This is part of a monthly report written by Dheeshma Puzhakkal, Editor of NewsMeter’s Fact Check team, on emerging developments in OSINT and AI, with a focus on what matters most to Indian readers and OSINT professionals. For comments, insights and leads, please email dheeshma.p@newsmeter.in. We do not have a financial relationship with any of the companies or tools mentioned as part of this series.)
