Monthly OSINT & AI Update – Breaking Down February’s Most Intriguing Cases

By Dheeshma | Published on 28 Feb 2025, 9:29 PM IST

Staying ahead in OSINT means constantly experimenting with tools, but many promising ones are either locked behind paywalls or unavailable in India. To bridge this gap, I test and review tools that are at least partially free and accessible. This month, I explored reverse image search tools, examined open-source AI resources for journalists, and analysed how generative AI was used in two major events—the Delhi elections and the Maha Kumbh Mela.

Reverse Image Search: Evolving Capabilities

Reverse image search has come a long way from simply finding similar images online. AI-powered tools can now detect faces, locations, and objects, and even help uncover manipulated media. I tested multiple platforms to assess their effectiveness—here’s a look at two of them.

Lenso.ai

Lenso.ai is an AI-driven reverse image search tool that can identify faces, locations, duplicates, and visually similar images.

I tested it using a viral video of Marathi actor Payal Jadhav wielding a sword, which was falsely claimed to be an old clip of Delhi’s newly elected Chief Minister Rekha Gupta. I extracted a frame where the actor’s face was clearly visible and ran it through Lenso.ai’s free version. The tool correctly matched the face and retrieved similar images from fact-checking reports. However, like TinEye, its unpaid version doesn’t provide direct links to source images or videos. While this is a limitation, the appearance of relevant images in the results confirmed that related content exists online, pointing OSINT practitioners toward alternative search methods.
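For anyone who wants to reproduce the first step of this workflow, extracting a clean still from a video before running a reverse image search, here is a minimal Python sketch using OpenCV. The file names and timestamp are placeholders, not the actual files from this check.

```python
# Minimal sketch: grab one frame from a video for reverse image search.
# Requires OpenCV (pip install opencv-python); file names are placeholders.
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical input video
FRAME_TIME_SEC = 12.0           # moment where the face is clearly visible

cap = cv2.VideoCapture(VIDEO_PATH)
if not cap.isOpened():
    raise IOError(f"Could not open {VIDEO_PATH}")

# Seek to the chosen timestamp (OpenCV expects milliseconds) and read that frame.
cap.set(cv2.CAP_PROP_POS_MSEC, FRAME_TIME_SEC * 1000)
ok, frame = cap.read()
cap.release()

if not ok:
    raise RuntimeError("Could not read a frame at the requested timestamp")

# Save the still, ready to upload to Lenso.ai, Sogou, or any other engine.
cv2.imwrite("frame_for_search.jpg", frame)
```

The saved JPEG can then be uploaded to whichever engine you are testing.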


Sogou Image Search

Sogou, a Chinese image search engine, is known for delivering better results for Asian faces and for images from China and neighbouring regions.

I tested it with a photo of Jadhav, but it didn’t return any results. However, when I uploaded a decade-old image of Mumbai’s Dharavi, it suggested a different but related photo used in a Chinese news report. While it didn’t lead me to the original source, it showed potential for discovering alternative perspectives on widely circulated images.




Hugging Face’s Open-Source AI Toolkit for Journalists

Florent Daudens from Hugging Face recently shared a collection of free and open-source AI tools designed for journalists. These tools assist with transcription, analysis, and content generation. A full list of 20 tools is available here. Journalists and OSINT professionals can leverage these resources to streamline workflows without relying on expensive software.
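To give a flavour of what these tools can do, here is a minimal sketch of running a free, open-source speech-to-text model locally with Hugging Face’s transformers library. The Whisper checkpoint and audio file below are illustrative choices, not necessarily items from Daudens’ list.

```python
# Minimal sketch: local transcription with an open-source model from Hugging Face.
# Requires transformers, torch and ffmpeg; model id and file name are examples.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",  # small open checkpoint; larger ones are more accurate
    chunk_length_s=30,             # split long recordings into 30-second chunks
)

result = asr("interview.mp3")      # hypothetical audio file
print(result["text"])
```

Everything runs on your own machine, which also matters for journalists handling sensitive source material.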

Exploring Free Satellite Imagery for Investigative Journalism

Satellite imagery has transformed investigative journalism, helping reporters uncover hidden stories, track environmental changes, and expose human rights violations. However, accessing high-resolution images can be both expensive and technically challenging. To better understand how journalists can navigate these barriers, I attended a recent webinar by the Global Investigative Journalism Network (GIJN) on using free satellite imagery for investigative reporting. The session provided valuable insights into accessible tools and techniques. You can watch the full discussion here.
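One freely accessible option that often comes up in this context (though I can’t vouch that the webinar covered it) is Google Earth Engine, which lets you query Sentinel-2 imagery programmatically. A minimal sketch, assuming an authenticated Earth Engine account; the coordinates and dates are illustrative:

```python
# Minimal sketch: counting recent cloud-free Sentinel-2 scenes over an area.
# Requires the earthengine-api package and a one-time ee.Authenticate() setup.
import ee

ee.Initialize()

aoi = ee.Geometry.Point([72.85, 19.04])  # illustrative point near Dharavi, Mumbai

scenes = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(aoi)
    .filterDate("2025-01-01", "2025-02-28")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))  # keep mostly clear scenes
)

print("Usable scenes:", scenes.size().getInfo())
```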

Why AI Detection Tools Aren’t Enough: A Key Takeaway from the AI Spotlight Series

This month, I attended the AI Spotlight Series hosted by the Pulitzer Center—a three-day training led by Karen Hao and Gabriel Geiger. One key takeaway, especially relevant for fact-checkers, was the growing limitations of AI detection tools in identifying AI-generated images. Many fact-checkers, myself included, rely on a mix of traditional verification methods and AI detection tools, often cross-checking results from multiple sources.

However, Karen highlighted a crucial challenge: AI models are constantly evolving to evade detection tools, making those tools increasingly unreliable. Yet there is a growing tendency to take tool-generated results at face value. She emphasised the need to prioritise traditional verification techniques and explore advanced forensic methods, like those used by Hany Farid of UC Berkeley.

With AI-generated images becoming alarmingly realistic, even fooling experts, the session stressed the importance of staying ahead with stronger investigative methods instead of relying solely on detection tools.
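To make that caveat concrete, this is roughly what querying one such detector looks like in code. A minimal sketch, assuming an image-classification detector hosted on Hugging Face; the model id and file name are placeholders, and, per the session’s warning, the score is a hint rather than a verdict.

```python
# Minimal sketch: getting a score from an AI-image detector.
# The model id is a placeholder for any image-classification detector on Hugging Face.
from transformers import pipeline

detector = pipeline("image-classification", model="some-org/ai-image-detector")  # hypothetical id

for item in detector("suspect_image.jpg"):  # hypothetical file
    print(f"{item['label']}: {item['score']:.2%}")

# Generators evolve to evade detectors, so a confident "real" label
# here proves nothing on its own; cross-check with traditional methods.
```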

AI Action Summit: A Divided Commitment

French President Emmanuel Macron opened the AI Action Summit in Paris with a bold move—playing a montage of deepfakes impersonating him. The event brought together leaders from over 100 countries, including heads of state, tech executives, and academics, to debate the future of AI.

Who Signed the AI Pledge—And Who Didn’t?

The summit led to a Declaration on Inclusive and Sustainable AI, signed by 61 countries, including China. But the bigger story? The US and UK refused to sign. The UK said the document lacked clarity on global governance and security, while US Vice President JD Vance dismissed it as unnecessary, singling out its focus on environmental and inclusivity measures.

With the US still leading the AI race, it’s unclear how effective this pledge will be. Can global AI regulation move forward without the biggest players on board? Or is this just another well-meaning but toothless promise?

UK’s Crackdown on AI-Generated CSAM

The UK is stepping up efforts to combat AI-generated child sexual abuse material (CSAM). As part of the upcoming Crime and Policing Bill, the government wants to criminalise not just AI-generated CSAM but also the tools used to create it—a world-first move. Offenders could face up to five years in prison.

Other key measures in the bill include:

  1. Banning "AI paedophile manuals"—guides that teach people how to generate CSAM (punishable by up to three years in prison).
  2. Targeting websites that facilitate the sharing of illegal content.

With AI-generated child abuse images increasing fourfold in the past year, the UK is pushing to stay ahead of this growing threat.

Why AI Ethics and Security Keep Getting Mixed Up

There’s an ongoing issue in AI discussions—security concerns often get mislabelled as “ethics”, making them easier to brush aside.

Take issues like bias in facial recognition, deepfake scams, or election interference. These are clear security threats, yet they often get lumped into AI ethics, which is treated as a philosophical debate rather than an urgent problem. The result? Policymakers and tech leaders downplay real risks instead of taking immediate action.

This misclassification isn’t just a technicality—it affects how seriously these issues are taken. And in a world where AI threats are evolving fast, we can’t afford to get stuck in semantics.

AI and Misinformation in India: February Highlights

Two major events in India this month—the Delhi elections and the Maha Kumbh Mela—saw the strategic use (and misuse) of generative AI.

Delhi Elections: Deepfakes and Political Warfare

The Aam Aadmi Party (AAP) made headlines with an AI-generated deepfake of B.R. Ambedkar appearing to bless Arvind Kejriwal, sparking a retaliatory AI-generated response from the BJP, which accused AAP of using AI to spread misleading narratives and mock its leaders.

Right-wing social media accounts also deployed AI-generated images to incite communal tensions. One viral image depicted a Hindu bride and groom inside a glass exhibit, watched by a Muslim family, with the caption: “National Museum, Delhi, 2061.” This attempt to stir fear and division highlights how AI is increasingly shaping political discourse.

Maha Kumbh Mela: AI-Generated Celebrity Hoaxes

The Kumbh Mela saw a surge in AI-generated images of celebrities allegedly taking a holy dip in the Ganga. Viral images falsely claimed to show Shah Rukh Khan and other public figures visiting the Kumbh—some posts suggesting they went out of fear of the Modi government, others claiming it was an act of appeasement.

These misinformation tactics show how AI is being used to manipulate religious and political narratives. I’ll be exploring this further in a separate article.

If you’ve come across interesting OSINT tools or misinformation trends, let’s discuss! Email me at dheeshma.p@newsmeter.in.

See you next month,

Dheeshma Puzhakkal
