OSINT Monthly: Key Developments in Fact-Checking and Open-Source Intelligence

By Dheeshma | Published on 1 Feb 2025, 2:30 PM IST

Hello OSINT enthusiasts,

January has been an eventful month in the world of open-source intelligence and fact-checking. From Meta's shifting stance on misinformation to new AI players entering the scene and a significant OSINT tool restricting public access, here's everything you need to know.

Meta's Big Move: Fact-Checking vs. Community Notes

The month started with a major shake-up from Meta, as the company announced it was shutting down its Third-Party Fact-Checking (3PFC) program in the U.S. and shifting to Community Notes. But the real controversy came when Mark Zuckerberg claimed fact-checking is a form of censorship, a statement widely condemned by fact-checking organisations, including the International Fact-Checking Network (IFCN).

In response, Spanish fact-checking outlet Maldita pointed out a critical flaw in X's (formerly Twitter) Community Notes: during the 2024 Spanish flash floods, over 90% of hoaxes on X lacked a Community Note, not because no one fact-checked them, but because the platform's algorithm didn't find enough "consensus" to display them.
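The consensus gate Maldita describes can be illustrated with a toy sketch. This is a deliberately simplified, hypothetical model: the real Community Notes system scores rater helpfulness with matrix factorisation over rating history, and the thresholds and viewpoint labels below are invented for illustration only.

```python
# Toy illustration of a "consensus" gate like the one Maldita describes:
# a note is displayed only if enough raters, from differing viewpoints,
# rate it helpful. Thresholds and labels here are hypothetical, not X's.

def note_is_shown(ratings, min_raters=5, min_helpful_share=0.8):
    """ratings: list of (viewpoint, is_helpful) tuples from raters."""
    if len(ratings) < min_raters:
        return False  # too few ratings to act at all
    viewpoints = {v for v, _ in ratings}
    if len(viewpoints) < 2:
        return False  # no cross-perspective agreement possible
    helpful = sum(1 for _, ok in ratings if ok)
    return helpful / len(ratings) >= min_helpful_share

# A note rated helpful only by one "side" never appears, even if
# every fact-checker who saw the hoax agreed it was false.
one_sided = [("A", True)] * 10
print(note_is_shown(one_sided))   # False: no viewpoint diversity

mixed = [("A", True)] * 5 + [("B", True)] * 4 + [("B", False)]
print(note_is_shown(mixed))       # True: diverse raters, 90% helpful
```

The sketch shows why a fact-checked hoax can still carry no visible note: the bottleneck is not the absence of fact-checks but the diversity requirement in the display rule.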

In an open letter, the IFCN emphasised that this move threatens to undo nearly a decade of progress in promoting accurate information online.

Misinformation Tops Global Risk Report 2025

If you needed proof that misinformation is one of the world's biggest challenges, the World Economic Forum's Global Risks Report 2025 confirmed it.

For the second year in a row, misinformation and disinformation were ranked as the No. 1 short-term global risk for the next two years. The report highlights how the unchecked spread of false information can undermine trust, harm societies, and distort decision-making at all levels.

Looking ahead, misinformation remains a long-term challenge as well, ranking fifth among global risks over the next decade. Adverse AI outcomes, such as deepfakes and algorithmic manipulation, are also climbing the list, sitting at sixth place, signalling that AI-driven misinformation is becoming a major threat.

GeoSpy Restricts Public Access After Investigative Report

January also saw GeoSpy, the AI-powered geolocation tool from Graylark Technologies, restrict public access following an investigative report by 404 Media. The tool, designed to identify photo locations based on environmental features, had been publicly available for months and was widely tested by OSINT professionals.

Personally, I found GeoSpy to be an exciting tool. It worked well for geolocating Western locations, but struggled with places in India. Like many fact-checkers, I was hoping it would improve over time and become a reliable OSINT asset.

However, speaking to 404 Media, security researcher Cooper Quintin from the Electronic Frontier Foundation raised serious concerns about its potential misuse. He warned that if law enforcement agencies use it at scale to build geolocation databases or gather evidence on individuals not suspected of crimes, it could lead to wrongful arrests and surveillance abuses.

GeoSpy has now restricted access to law enforcement training programs and qualified instructors. Those interested can apply via the company's website. While this move addresses ethical concerns, it also limits OSINT researchers from using the tool for independent investigations.

Meanwhile, GeoSpy's developers have launched Superbolt, a new AI model for street-level photo geolocation. To test its accuracy with Indian locations, I tried images from Mumbai's Dharavi and a residential area in Patna, but in both cases the tool misidentified them as San Francisco. Even an attempt to locate Al Wahda Bridge in Doha returned San Francisco as the result. Clearly, there's still work to be done.

The Open-Source AI War: DeepSeek vs. Qwen 2.5

January also saw major developments in the AI landscape, with the emergence of DeepSeek, a Chinese AI startup making waves with its open-source R1 model. Unlike many proprietary large language models (LLMs), DeepSeek has taken a different approach, offering its model architecture and weights publicly.

This move was welcomed by AI and OSINT researchers because open-source models allow for transparency and independent research. However, it also raises concerns, especially given China's tight control over information and technology.

Researchers have pointed out that DeepSeek's responses align with government-approved narratives on sensitive topics like Taiwan and human rights abuses. While all AI models have biases, state-controlled AI models could be used to shape global discourse or fine-tuned for disinformation campaigns.

Meanwhile, Alibaba quickly responded by launching Qwen 2.5, an upgraded version of its AI model, claiming it outperforms DeepSeek-V3 on key benchmarks. The fact that Alibaba released Qwen 2.5 right before Lunar New Year, when most Chinese companies are on holiday, suggests that DeepSeek's rapid rise has disrupted China's AI landscape.

China's AI ambitions are becoming clear: not just to compete globally, but to dominate AI on its own terms.

Bitwarden Introduces Login Verification for Unrecognised Devices

Password manager Bitwarden has introduced an optional security feature to enhance account protection for users who haven't enabled two-factor authentication (2FA).

With this update, when logging in from an unrecognised device, users without 2FA will receive an email verification code that must be entered before accessing the account. This adds an extra layer of security, helping prevent unauthorised logins even if a password is compromised.
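The flow described above follows a common "new device" verification pattern. Here is a minimal sketch of that general pattern, assuming a simple in-memory store; it is illustrative only and not Bitwarden's actual implementation.

```python
# Minimal sketch of the generic "unrecognised device" verification flow
# (illustrative only; this is NOT Bitwarden's actual implementation).
import hmac
import secrets

known_devices = set()   # device IDs this account has logged in from before
pending_codes = {}      # device_id -> one-time code sent by email

def start_login(device_id):
    """Return True if login proceeds directly; otherwise issue a code."""
    if device_id in known_devices:
        return True
    code = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit one-time code
    pending_codes[device_id] = code
    # In a real system, the code would be emailed to the account address here.
    return False

def verify_code(device_id, submitted):
    """Check the emailed code; remember the device on success."""
    expected = pending_codes.get(device_id)
    if expected and hmac.compare_digest(expected, submitted):
        known_devices.add(device_id)
        del pending_codes[device_id]
        return True
    return False
```

The design point is that a stolen password alone is no longer enough from an unrecognised device: the attacker would also need access to the victim's email inbox to retrieve the code.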

For fact-checkers, journalists, and OSINT professionals who rely on secure tools, this update offers an additional safeguard, especially for those who haven't enabled 2FA yet (though enabling 2FA remains the best option for account security).

Final Thoughts

From Meta's shifting misinformation policies to AI models redefining global discourse and OSINT tools facing ethical dilemmas, January set the stage for an eventful year ahead.

I'd love to hear your thoughts! How do you see these developments shaping the future of fact-checking and OSINT? Feel free to reach out at Dheeshma.p@newsmeter.in.

Until next month,

Dheeshma
