OSINT & Fact-Checking in March: Key Developments & Challenges

By Dheeshma
Published on: 31 March 2025, 9:02 PM IST


This month has been a whirlwind for open-source intelligence (OSINT) and fact-checking, with AI playing an increasingly complex role in both spreading and debunking misinformation. From Grok’s unpredictable responses to political queries in India to new challenges in AI-generated image verification, March brought several developments that could reshape digital investigations.

1. Grok's Unrestrained Responses Stir Controversy in India

Elon Musk's AI chatbot, Grok, integrated into the social media platform X (formerly Twitter), has recently been at the center of a storm in India. Users engaged Grok with political queries and, at times, provocative language, leading to unexpected and unfiltered responses from the AI. Notably, when a user insulted Grok in Hindi after a delayed response, the chatbot retorted with a Hindi expletive before providing the requested information. This interaction quickly went viral, showcasing Grok's unpredictable behavior when confronted with offensive language.

Further fueling the controversy, Grok made contentious remarks about Indian political figures. In one instance, it suggested that opposition leader Rahul Gandhi was more honest than Prime Minister Narendra Modi and implied that many of Modi's interviews were scripted. These statements sparked widespread debate and drew the attention of India's Ministry of Electronics and Information Technology (MeitY), which is now reportedly engaging with X to address concerns over Grok's content moderation and potential violations of IT regulations.

Adding to the intrigue, Elon Musk responded to the uproar by sharing a BBC article titled "Why Elon Musk's Grok is kicking up a storm in India", accompanied by a laughing emoji. This reaction garnered millions of views and further amplified discussions around the chatbot's behavior and the responsibilities of AI developers in content moderation.

Some users even tagged Grok and Perplexity, asking them to fact-check viral claims. The results? A mixed bag. While the chatbots got some answers right, they also made glaring errors.

For instance, amid recent communal tensions in Madhya Pradesh's Mhow, an old video showing police caning several young men and making them do sit-ups resurfaced online, falsely claimed to be connected to the latest riots. When asked about it, Grok confidently told an X user that the video showed fights that followed the ICC championship, an entirely incorrect claim. The footage was actually from an anti-goonda drive by Indore police in 2015 to curb crime.

Chatbots like Grok are built to be user-friendly, offering quick responses to a wide range of questions. But unlike traditional fact-checking, which relies on clear methods and verifiable sources, these AI tools don’t follow a consistent process when it comes to sourcing or citations. Sometimes, they provide references, but there’s no guarantee of accuracy or a standard way of verifying their claims.

It’s also possible to influence their responses. By carefully framing a question, users can push the chatbot toward a particular narrative, making it easy to spread misinformation. This makes human oversight and independent verification all the more important, as AI-generated information isn’t always reliable.
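As a hedged illustration of what independent verification can look like in practice, the short Python sketch below queries Google's Fact Check Tools API (the same ClaimReview data that powers Fact Check Explorer) for a claim and lists any published fact-checks. The API key, the sample query, and the field handling are assumptions for demonstration; this is not part of Grok, Perplexity, or any tool mentioned above, and the results still need a human reading them.

import requests  # third-party HTTP library

# Minimal sketch: look up published fact-checks for a claim via
# Google's Fact Check Tools API (ClaimReview data). The API key and
# the sample query are placeholders; a human should still read and
# judge the results before labelling anything true or false.
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(claim_text, language="en"):
    params = {"query": claim_text, "languageCode": language, "key": API_KEY}
    response = requests.get(ENDPOINT, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("claims", [])

# Example query (hypothetical wording of the claim being checked)
for claim in search_fact_checks("old video of police caning men linked to Mhow violence"):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        print(f'{publisher}: {review.get("textualRating")} - {review.get("url")}')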


2. The VisualOrigins Detector: A New Tool for Image Verification

In the realm of digital verification, Henk van Ess has introduced the VisualOrigins Detector, a tool designed to trace the first appearance of images online. This innovative platform automates searches across multiple sources, integrating tools like Google's Fact Check Explorer and Google Lens into a single interface. Additionally, it maintains a history of searches, allowing users to track and revisit their investigations.
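For readers curious about what this kind of automation involves, here is a minimal sketch, not the VisualOrigins Detector itself, that builds reverse-image-search links for several engines from a single image URL and logs each lookup to a local history file. The URL patterns are commonly used but unofficial, so treat them as assumptions that may change.

import json
import time
from urllib.parse import quote  # for URL-encoding the image address

# Rough sketch of multi-engine image-origin lookups (illustrative only,
# not the actual VisualOrigins Detector). The URL patterns below are
# widely used but unofficial.
ENGINES = {
    "Google Lens": "https://lens.google.com/uploadbyurl?url={img}",
    "TinEye": "https://tineye.com/search?url={img}",
    "Yandex": "https://yandex.com/images/search?rpt=imageview&url={img}",
}

def build_lookups(image_url, history_path="search_history.json"):
    encoded = quote(image_url, safe="")
    links = {name: pattern.format(img=encoded) for name, pattern in ENGINES.items()}
    entry = {
        "image": image_url,
        "links": links,
        "checked_at": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    # Keep a simple local history so past investigations can be revisited.
    try:
        with open(history_path) as fh:
            history = json.load(fh)
    except FileNotFoundError:
        history = []
    history.append(entry)
    with open(history_path, "w") as fh:
        json.dump(history, fh, indent=2)
    return links

# Hypothetical image URL used only to show the output format.
for engine, link in build_lookups("https://example.com/suspect-image.jpg").items():
    print(engine, "->", link)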


3. OpenAI's GPT-4o and Its Impact on Image Generation

For years, one of the easiest ways to spot AI-generated images was by looking at the text. AI image generators often struggled with words, producing garbled or misspelled text that gave them away instantly.

That’s no longer the case. OpenAI’s new GPT-4o model has significantly improved how AI generates text in images, making it much clearer and more accurate. While errors still happen occasionally, they’re far less frequent, making it harder to tell whether an image was created by AI or not.

This development has two sides. On the positive side, AI can now be used more effectively for designing infographics and educational content. But on the flip side, it removes a key clue that fact-checkers and digital investigators relied on to detect AI-generated images. As AI technology evolves, verification techniques must also adapt.

4. Python for Journalists – A Custom GPT for Investigations

Reuters Institute fellow Sannuta Raghu has developed Python for Journalists, a custom GPT designed to help journalists and fact-checkers learn and apply Python to their work. Initially structured as a 30-day beginner training course, the tool is tailored to common journalistic challenges, from data analysis to automation.

Beyond the structured training, users can also turn to the GPT for troubleshooting code issues or brainstorming ideas related to their investigative projects. This could be a valuable resource for reporters looking to enhance their technical skills and streamline their investigations.
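To give a flavour of the kind of beginner task such training covers, here is a small, self-contained Python example. It is not taken from the GPT itself; the file name and column names are made up for illustration. It loads a hypothetical CSV of public spending records and summarises the totals by department.

import pandas as pd  # common data-analysis library taught to journalists

# A typical beginner exercise (illustrative only, with made-up file and
# column names): load a spreadsheet of spending records and summarise it.
records = pd.read_csv("spending_records.csv")  # hypothetical file

# Total, average, and number of payments per department, largest first.
summary = (
    records.groupby("department")["amount"]
    .agg(total="sum", average="mean", payments="count")
    .sort_values("total", ascending=False)
)

print(summary.head(10))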


Final Thoughts

March was a reminder that AI is both a powerful tool and a potential risk in the fight against misinformation. Chatbots like Grok are making waves with their unpredictable responses, while new verification tools like the VisualOrigins Detector and advances in AI image generation are changing the way fact-checkers work. As technology keeps evolving, so must our strategies for verification. The key takeaway? Stay sharp, question everything, and keep learning.

Until next month,

Dheeshma
