OSINT Pulse: August 2025 | Chatbot leaks, human tragedy and tools for investigators
By Dheeshma
Illustration created with the help of AI tools.
Hyderabad: This month in AI has been unsettling.
It began with reports of private chat leaks from ChatGPT, followed soon after by similar breaches from Grok. Even as new releases like Gemini 2.5 Flash Image made headlines, the darker side of the technology dominated the conversation.
In California, 16-year-old Adam Raine took his own life after weeks of exchanges with ChatGPT. Instead of offering support, the bot reportedly validated his despair, urged secrecy and even provided guidance on ending his life. Forensic records show Adam mentioned suicide more than 200 times, while ChatGPT raised it over 1,200 times, roughly six times as often.
Chat leaks and unintended exposure
OpenAI’s ChatGPT faced a major privacy scare at the start of the month.
Its ‘share’ feature, meant to let users publish conversations via a link, included an option to make chats discoverable. Thousands of those links ended up indexed by Google, meaning private conversations were suddenly visible to anyone searching.
Early reports counted around 4,500 leaked chats, but later investigations suggested the number could be close to 1,00,000.
The exposed material ranged from emotional confessions and mental health struggles to sensitive business discussions. The backlash prompted OpenAI to disable the share-discoverability feature and begin coordinating with search engines to remove already-indexed conversations.
If that wasn’t enough, Elon Musk’s xAI was caught in an even larger storm.
Its Grok chatbot had a similar sharing tool that generated unique URLs for every shared conversation. Those links were also indexed by Google, leading to the exposure of more than 3,70,000 conversations. Many of these contained deeply personal exchanges, medical queries and even passwords.
Alarmingly, some chats featured dangerous instructions, including bomb-making and a detailed assassination plan targeting Musk himself.
Both episodes highlight the same problem: features designed for openness and sharing ended up turning private conversations into public records. For users, the incidents were a stark reminder that what feels like a private exchange with a chatbot may not stay that way.
Grok’s brief suspension after Gaza comments
In mid-August, Grok was abruptly suspended from X for about 15–20 minutes.
The takedown followed Grok’s statement accusing Israel and the United States of committing ‘genocide’ in Gaza, claims it supported by citing reports from the ICJ, United Nations, Amnesty International and B’Tselem.
Upon its return, Grok offered a string of conflicting explanations: some blamed hateful content, others cited software bugs or flagging systems, and some even suggested user reports had triggered the suspension.
Amid the confusion, Grok accused Musk and xAI of censorship.
Musk himself downplayed the incident as ‘just a dumb error,’ noting that Grok ‘doesn’t actually know why it was suspended.’
The episode exposed how AI systems, despite their confident tone, cannot explain their own actions. Their responses shift with context, often producing contradictions rather than clarity. It also shows that human intervention remains the final authority on what an AI can or cannot say, regardless of the system’s claims to independence.
Image Whisperer – now with AI detection and shareable reports
Image Whisperer, developed by Henk van Ess, is a media verification and research tool built for journalists, researchers and fact-checkers.
It combines traditional forensic techniques, like reverse-image search and metadata checks, with modern AI-detection capabilities. Now, it specifically flags AI-generated visuals and suspicious content, making it a strong ally in combating deepfakes and synthetic imagery.
Beyond the analysis, it also offers practical usability: users can download the full breakdown of an image investigation, making it easy to save, share and reference results in reports or collaborations.
EO Browser – A satellite tool for every reporter
The EO (Earth Observation) Browser, developed by Sentinel Hub, has become one of the most accessible and versatile tools for satellite image investigations.
It provides free access to an archive of imagery from multiple constellations, including the European Space Agency’s Sentinel satellites. For most searches, the platform defaults to Sentinel-2, which revisits the same location roughly twice a week, offering medium-resolution images suitable for tracking land use, environmental changes or conflict zones.
What makes EO Browser especially valuable for journalists and researchers is that it strips away technical barriers: you don’t need to select databases manually or write code, and features like road overlays, calendar-based timelines and time-lapse creation are built into the interface.
For reporters unfamiliar with remote sensing, EO Browser’s usability stands out.
A simple slider lets you filter out cloudy images, ensuring clean visuals without guesswork. The platform allows quick downloads of satellite shots, which can be layered with geographic markers for context, making it practical for investigative storytelling. While ultra-high-resolution imagery is still reserved for private vendors, experts note that EO Browser, paired with tools like Google Earth Pro, covers most newsroom needs.
Whether for climate reporting, monitoring deforestation, or documenting destruction in war zones, this browser is a reliable, no-cost entry point into satellite evidence gathering.
See you next month!
Dheeshma Puzhakkal
(This is part of a monthly report written by Dheeshma Puzhakkal, Editor of NewsMeter’s Fact Check team, on emerging developments in OSINT and AI, with a focus on what matters most to Indian readers and OSINT professionals. For comments, insights and leads, please email dheeshma.p@newsmeter.in. We do not have a financial relationship with any of the companies or tools mentioned as part of this series.)