OSINT Monthly: Key Developments in Fact-Checking and Open-Source Intelligence
By Dheeshma | Published on 1 Feb 2025, 2:30 PM IST

Hello OSINT enthusiasts,
January has been an eventful month in the world of open-source intelligence and fact-checking. From Meta's shifting stance on misinformation to new AI players entering the scene and a significant OSINT tool restricting public access, here's everything you need to know.
Meta's Big Move: Fact-Checking vs. Community Notes
The month started with a major shake-up from Meta, as the company announced it was shutting down its Third-Party Fact-Checking (3PFC) program in the U.S. and shifting to Community Notes. But the real controversy came when Mark Zuckerberg claimed fact-checking is a form of censorship, a statement widely condemned by fact-checking organisations, including the International Fact-Checking Network (IFCN).
In response, Spanish fact-checking outlet Maldita pointed out a critical flaw in X's (formerly Twitter) Community Notes: during the 2024 Spanish flash floods, over 90% of hoaxes on X lacked a Community Note, not because no one fact-checked them, but because the platform's algorithm didn't find enough "consensus" to display them.
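Maldita's finding is less surprising once you look at how notes are surfaced: a note is only shown when raters who usually disagree both find it helpful. The toy sketch below is not X's real scoring model (the actual system is a far more elaborate bridging-based ranking over rater data); it is just an illustration of why a note backed by only one "side" never clears the bar:

```python
# Toy illustration only (not X's actual algorithm): a note is displayed when
# raters from *different* perspective groups rate it helpful, so a note
# endorsed by a single group never reaches the display threshold.
from collections import defaultdict

def note_is_displayed(ratings, threshold=0.6):
    """ratings: list of (rater_group, is_helpful) tuples for one note."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:
        return False  # no cross-group agreement is possible with a single group
    # Require a majority of helpful ratings within every participating group.
    return all(sum(votes) / len(votes) >= threshold for votes in by_group.values())

# Rated helpful by only one group: stays hidden.
print(note_is_displayed([("A", True), ("A", True), ("A", True)]))               # False
# Cross-group agreement: displayed.
print(note_is_displayed([("A", True), ("A", True), ("B", True), ("B", True)]))  # True
```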
In an open letter, the IFCN emphasised that this move threatens to undo nearly a decade of progress in promoting accurate information online.
Misinformation Tops Global Risk Report 2025
If you needed proof that misinformation is one of the world's biggest challenges, the World Economic Forum's Global Risks Report 2025 confirmed it.
For the second year in a row, misinformation and disinformation were ranked as the No. 1 short-term global risk for the next two years. The report highlights how the unchecked spread of false information can undermine trust, harm societies, and distort decision-making at all levels.
Looking ahead, misinformation remains a long-term challenge as well, ranking fifth among global risks over the next decade. Adverse AI outcomes, such as deepfakes and algorithmic manipulation, are also climbing the list, sitting in sixth place, signalling that AI-driven misinformation is becoming a major threat.
GeoSpy Restricts Public Access After Investigative Report
January also saw GeoSpy, the AI-powered geolocation tool from Graylark Technologies, restrict public access following an investigative report by 404 Media. The tool, designed to identify photo locations based on environmental features, had been publicly available for months and was widely tested by OSINT professionals.
Personally, I found GeoSpy to be an exciting tool. It worked well for geolocating Western locations, but struggled with places in India. Like many fact-checkers, I was hoping it would improve over time and become a reliable OSINT asset.
However, speaking to 404 Media, security researcher Cooper Quintin from the Electronic Frontier Foundation raised serious concerns about its potential misuse. He warned that if law enforcement agencies use it at scale to build geolocation databases or gather evidence on individuals not suspected of crimes, it could lead to wrongful arrests and surveillance abuses.
GeoSpy has now restricted access to law enforcement training programs and qualified instructors. Those interested can apply via the company's website. While this move addresses ethical concerns, it also limits OSINT researchers' ability to use the tool for independent investigations.
Meanwhile, GeoSpy's developers have launched Superbolt, a new AI model for street-level photo geolocation. To test its accuracy on Indian locations, I tried images from Mumbai's Dharavi and a residential area in Patna, but in both cases the tool misidentified them as San Francisco. Even an attempt to locate Al Wahda Bridge in Doha returned San Francisco as the result. Clearly, there's still work to be done.
The Open-Source AI War: DeepSeek vs. Qwen 2.5
January also saw major developments in the AI landscape, with the emergence of DeepSeek, a Chinese AI startup making waves with its open-source R1 model. Unlike many proprietary large language models (LLMs), DeepSeek has taken a different approach, offering its model architecture and weights publicly.
This move was welcomed by AI and OSINT researchers because open-source models allow for transparency and independent research. However, it also raises concerns, especially given Chinaās tight control over information and technology.
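For researchers who want to examine the model themselves, open weights mean it can be run entirely offline, away from DeepSeek's hosted service. Here is a minimal sketch, assuming a distilled R1 checkpoint published on Hugging Face (the repository name below is my assumption; substitute whichever checkpoint you use), using the standard transformers library:

```python
# Minimal sketch: loading an open-weight DeepSeek-R1 distilled checkpoint
# locally with Hugging Face transformers to study its responses offline.
# The repository name is an assumption; substitute the checkpoint you use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Summarise the arguments for and against platform fact-checking programmes."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Being able to log and compare outputs like this, prompt by prompt, is exactly the kind of independent scrutiny that closed models don't allow.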
Researchers have pointed out that DeepSeek's responses align with government-approved narratives on sensitive topics like Taiwan and human rights abuses. While all AI models have biases, state-controlled AI models could be used to shape global discourse or fine-tuned for disinformation campaigns.
Meanwhile, Alibaba quickly responded by launching Qwen 2.5, an upgraded version of its AI model, claiming it outperforms DeepSeek-V3 on key benchmarks. The fact that Alibaba released Qwen 2.5 right before Lunar New Year, when most Chinese companies are on holiday, suggests that DeepSeek's rapid rise has disrupted China's AI landscape.
China's AI ambitions are becoming clear: not just to compete globally, but to dominate AI on its own terms.
Bitwarden Introduces Login Verification for Unrecognised Devices
Password manager Bitwarden has introduced an optional security feature to enhance account protection for users who haven't enabled two-factor authentication (2FA).
With this update, when logging in from an unrecognised device, users without 2FA will receive an email verification code that must be entered before accessing the account. This adds an extra layer of security, helping prevent unauthorised logins even if a password is compromised.
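Conceptually, this is the familiar new-device verification pattern. The sketch below is purely illustrative, not Bitwarden's code, and every name in it is hypothetical; it just shows the general flow of issuing a short-lived emailed code and remembering a device once it has been verified:

```python
# Illustrative sketch only (not Bitwarden's actual implementation): the general
# pattern behind email verification for logins from unrecognised devices.
import hashlib
import secrets
import time

KNOWN_DEVICES = set()   # hashes of device identifiers seen before
PENDING_CODES = {}      # email -> (one-time code, expiry timestamp)

def device_fingerprint(user_agent: str, device_id: str) -> str:
    """Hash the device identifier so raw values are never stored."""
    return hashlib.sha256(f"{user_agent}:{device_id}".encode()).hexdigest()

def start_login(email: str, user_agent: str, device_id: str) -> str | None:
    """If the device is unrecognised, issue a short-lived one-time code."""
    fp = device_fingerprint(user_agent, device_id)
    if fp in KNOWN_DEVICES:
        return None  # recognised device: continue with the normal password login
    code = f"{secrets.randbelow(10**6):06d}"           # six-digit one-time code
    PENDING_CODES[email] = (code, time.time() + 600)   # valid for ten minutes
    # A real service would email the code, never hand it back to the caller.
    return code

def verify_code(email: str, submitted: str, user_agent: str, device_id: str) -> bool:
    """Check the emailed code; on success, remember this device."""
    code, expiry = PENDING_CODES.get(email, (None, 0))
    if code is None or time.time() > expiry or not secrets.compare_digest(code, submitted):
        return False
    KNOWN_DEVICES.add(device_fingerprint(user_agent, device_id))
    del PENDING_CODES[email]
    return True
```

Bitwarden's own implementation will differ, but the effect for users is the same: an unrecognised device has to prove access to your inbox before it gets in.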
For fact-checkers, journalists, and OSINT professionals who rely on secure tools, this update offers an additional safeguard, especially for those who haven't enabled 2FA yet (though enabling 2FA remains the best option for account security).
Final Thoughts
From Meta's shifting misinformation policies to AI models redefining global discourse and OSINT tools facing ethical dilemmas, January set the stage for an eventful year ahead.
I'd love to hear your thoughts! How do you see these developments shaping the future of fact-checking and OSINT? Feel free to reach out at Dheeshma.p@newsmeter.in.
Until next month,
Dheeshma