May 2025 OSINT Update: Navigating AI, deepfakes and conflict-driven misinformation

By Dheeshma
Published on: 31 May 2025, 6:00 PM IST

May 2025 was a defining month for Indian fact-checkers. The Pahalgam terror attack and the ensuing India-Pakistan tensions unleashed a wave of misinformation that pushed verification networks and media literacy to their limits. Both sides deployed coordinated disinformation campaigns that flooded social media and complicated efforts to deliver timely, accurate information.

During this period, the Press Information Bureau’s fact-check wing regularly issued rebuttals to false claims targeting India. Yet, fact-checkers found themselves in a precarious position, pressured to verify and publish quickly while often reliant on official sources, raising tough questions about editorial independence.

At NewsMeter, this dilemma led to the deliberate shelving of some stories that relied solely on official narratives.

Adding fuel to the fire, segments of Indian media amplified confusion by broadcasting unverified and false reports, further undermining public trust and complicating efforts to counter disinformation. A detailed story on this can be read here.

AI as both a vector and a victim of misinformation

The role of AI in this landscape is profound. AI chatbots like Grok and Perplexity, trained on vast but uncurated datasets, struggled with nuanced and rapidly evolving conflict narratives, sometimes producing misleading or false responses on sensitive topics.

NewsMeter’s story on such case studies can be read here.

Meanwhile, AI-powered synthetic media, including deepfakes, emerged as potent tools of deception. Viral videos falsely depicting Pakistan’s prime minister, Shehbaz Sharif, admitting defeat, alongside fabricated apologies from India’s PM Narendra Modi and home minister Amit Shah to Pakistan, circulated widely.

The deepfake crisis: Scale, accessibility, and gendered harm

A new study from researchers at the Oxford Internet Institute titled ‘Deepfakes on Demand: The Rise of Accessible Non-Consensual Deepfake Image Generators’ reveals a rapid surge in publicly available deepfake tools.

The researchers identified over 35,000 deepfake models on platforms like Civitai and Hugging Face, downloaded nearly 15 million times since late 2022. While celebrities remain common targets, the alarming trend is the increasing focus on ordinary women, including Instagram users with fewer than 10,000 followers, TikTok influencers and YouTubers. Shockingly, 96 per cent of these models target women, often to create non-consensual intimate imagery (NCII).

This crisis is worsened by the ease of creation: using LoRA (low-rank adaptation), developers need only about 20 images, 24GB VRAM, and 15 minutes on a consumer-grade machine to fine-tune a deepfake model. Online tutorials enable even non-technical users to create such content. Although platforms prohibit these models, enforcement is weak and reactive, and many harmful models remain publicly accessible.
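
To see why the hardware bar is so low, consider the arithmetic of low-rank adaptation. Rather than updating a full weight matrix, LoRA trains two small matrices whose rank is far below the layer’s dimensions. The sketch below (in Python; the layer size and rank are illustrative assumptions, not figures from the Oxford study) makes the saving concrete.

```python
# Back-of-the-envelope: why LoRA fine-tuning fits on a consumer GPU.
# Full fine-tuning updates a weight matrix W of shape (d_out, d_in);
# LoRA instead trains A (d_out, r) and B (r, d_in) with rank r << d.

def param_counts(d_out: int, d_in: int, rank: int) -> tuple[int, int]:
    full = d_out * d_in            # parameters touched by full fine-tuning
    lora = rank * (d_out + d_in)   # parameters in the low-rank adapter
    return full, lora

# Illustrative example: one 4096 x 4096 attention projection, rank 8.
full, lora = param_counts(4096, 4096, rank=8)
print(f"full: {full:,} params, LoRA: {lora:,} params ({lora / full:.2%})")
# full: 16,777,216 params, LoRA: 65,536 params (0.39%)
```

At rank 8, the adapter trains well under one per cent of the layer’s parameters, which is why roughly 20 images, 24GB of VRAM and a quarter of an hour are enough.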

Read the full Oxford study here.

AI-enhanced geolocation: Transforming OSINT, raising ethical flags

Historically, geolocation demanded expert knowledge and painstaking manual analysis. However, recent upgrades to OpenAI’s o3 and o4-mini models have quietly revolutionised image analysis. ChatGPT can now geolocate photos by interpreting terrain, architecture, street layouts, and the language and style of signage, skills once confined to expert OSINT communities.

Investigative journalist Ben H’s demonstration, in which the model accurately located a remote mountain valley in Italy using only visual cues, shows the potential to accelerate investigations and cut search times.

A LinkedIn post explaining the demonstration can be read here.
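
For readers curious what such a query looks like in practice, here is a minimal sketch using OpenAI’s official Python SDK to ask a vision-capable model for a reasoned location estimate. The file name, prompt and model identifier are assumptions for illustration, not a reconstruction of Ben H’s workflow.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input image; substitute any photo you have rights to analyse.
with open("valley.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="o3",  # assumption: any vision-capable OpenAI model works here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Suggest where this photo was taken. Reason step by step "
                     "from terrain, vegetation, architecture, road markings "
                     "and any visible signage."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The same few lines that speed up a legitimate investigation can just as easily de-anonymise a stranger’s holiday photo, which is exactly the tension described below.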

Yet with this power come serious ethical concerns. The ability to de-anonymise private locations from seemingly innocuous photos raises privacy risks and threatens to disrupt established OSINT expertise. While safeguards are planned, geolocation is poised to become ubiquitous and automated, challenging analysts to adapt both their methods and their ethical frameworks.

Google Veo 3: The next frontier of AI-generated video and misinformation risks

At Google I/O 2025, DeepMind unveiled Veo 3, marking a leap in AI-generated video. Veo 3 creates high-resolution cinematic videos from simple text or image prompts, complete with synchronised dialogue, sound effects, and ambient audio. This lowers barriers for producing polished video content, promising to reshape creative industries and digital storytelling.

However, Veo 3’s sophistication also magnifies misinformation risks. The capacity to craft highly realistic videos with matching audio will enable more immersive and persuasive disinformation campaigns, making detection harder than ever. Google has introduced protections like content filtering and SynthID digital watermarking, but the potential for misuse remains significant.
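
To illustrate how compressed the production pipeline has become, here is a minimal sketch using Google’s google-genai Python SDK. The model identifier and the asynchronous polling pattern follow the SDK’s documented video-generation interface, but both should be treated as assumptions and checked against current documentation.

```python
import time
from google import genai

client = genai.Client()  # reads the Gemini API key from the environment

# Assumed model identifier; confirm the current Veo model name in the docs.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",
    prompt="A drone shot over a coastal village at dawn, with ambient "
           "seagull calls and soft waves.",
)

# Video generation is asynchronous: poll the long-running operation.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("village_dawn.mp4")
print("Saved village_dawn.mp4")
```

A paragraph of text in, a polished clip with synchronised audio out: that compression is precisely why provenance signals such as SynthID watermarks matter.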

May’s developments reveal a fundamental challenge for OSINT and fact-checking: AI is both a powerful enabler and a formidable adversary.

Tools like ChatGPT’s geolocation and Google’s Veo 3 expand newsroom capabilities, but they also make it easier to create and spread fake content. At the same time, conflicts like the India-Pakistan tensions remind us that challenges of speed, access, and independence can’t be solved by technology alone.

Fact-checkers, platforms, and policymakers must work together to build better verification tools, raise public awareness, and ensure AI is used responsibly. In the end, human judgment, strong ethics, and resilience are still our best defences against the spread of misinformation.

See you next month,

Dheeshma
