OSINT Pulse January 2026 | Wrong first steps into the new year: obscenity and misidentification

By Dheeshma Puzhakkal
Published on: 1 Feb 2026, 9:53 PM IST

Hyderabad: A slightly belated Happy New Year to everyone who lives and breathes open-source investigations. Welcome to the first OSINT Pulse of 2026.

Grok lands in trouble over obscenity

The year began noisily. Grok, X’s AI chatbot, landed in controversy after users turned to it to sexualise images of women and children.

Several Asian countries moved quickly: Malaysia, Indonesia and the Philippines blocked access to Grok, while India warned of legal action if X failed to comply with local regulations. Users have not stopped asking Grok to undress women, but the chatbot now responds that such features are available only to paying users.

Concerns around misuse deepened after new research by AI Forensics, a Paris-based non-profit organisation. The group found around 800 sexually violent and explicit images and videos created using the Grok Imagine app.

According to a report by The Guardian, AI Forensics was able to retrieve these visuals because users had generated public sharing links, which the Wayback Machine later archived. It remains unclear whether these images were ever shared directly on X, the social media platform owned by xAI, which has integrated Grok into its services.
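
The retrieval route The Guardian describes, public share links preserved by the Wayback Machine, is a standard OSINT technique that can be reproduced for any archived URL pattern. Below is a minimal, hypothetical Python sketch of that general approach using the Internet Archive's public CDX API; the example.com prefix is a placeholder, not the pattern AI Forensics queried.

```python
import requests

# Minimal sketch (illustrative only): list Wayback Machine snapshots
# under a URL prefix via the Internet Archive's public CDX API.
CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

params = {
    "url": "example.com/share/",  # hypothetical prefix, not AI Forensics' actual query
    "matchType": "prefix",        # match every archived URL under this path
    "output": "json",             # rows of [urlkey, timestamp, original, ...]
    "limit": "50",
}

resp = requests.get(CDX_ENDPOINT, params=params, timeout=30)
rows = resp.json() if resp.text.strip() else []  # an empty body means no snapshots

if rows:
    header, snapshots = rows[0], rows[1:]
    for snap in snapshots:
        record = dict(zip(header, snap))
        # Each snapshot is viewable at:
        # https://web.archive.org/web/<timestamp>/<original>
        print(record["timestamp"], record["original"])
```

The same mechanism cuts both ways: it let researchers document abuse, but it also means anything shared via a public link can outlive its deletion.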

Why AI detectors fail on the fakes that matter most

In his article ‘Why AI detection fails on the fakes that matter most’, digital investigations trainer Henk van Ess points to a major limitation in today’s AI detection tools. Most of them are designed to spot fully synthetic images or videos, not the kinds of manipulated content that actually circulate widely.

Van Ess explains that the most effective misinformation today often comes in the form of hybrid fakes. These are visuals of real people, places or events that are subtly altered using AI, such as a real person placed inside an AI-generated background. Because many detection tools rely on probability scores and statistical artefacts, they often miss these partial manipulations or incorrectly flag genuine content.

The article also underlines something many OSINT practitioners already know: human judgment still plays a crucial role, especially in spotting contextual inconsistencies. Van Ess argues for a layered approach that combines forensic signals, comparisons between image elements, and contextual analysis, rather than relying on a single ‘AI or not’ score.

Read more here.

When AI tools falsely ‘name’ people during breaking news

Another piece worth reading is ‘No, Steve Grove is not the name of the ICE agent’, written by Steve Grove. The article looks at how AI-assisted speculation during a breaking news event led to the false identification of an individual and triggered online harassment.

The piece is set in the US, following a shooting involving an immigration enforcement (ICE) agent. As news of the incident broke, social media users began trying to identify the agent involved. Instead of waiting for official confirmation, many turned to AI tools, reverse image searches and chatbots to guess a name. These tools surfaced an image and confidently labelled the person ‘Steve Grove’, even though nothing had been confirmed and there were signs that the image itself was misattributed or manipulated.

The claim spread quickly across platforms, leading to threats and abuse directed at the author, who had no connection to the incident. Grove’s account shows how AI tools can amplify misinformation by producing confident but incorrect outputs, especially during emotionally charged moments when people are searching for quick answers.

For Indian readers, this pattern will feel familiar. During communal clashes, custodial deaths, terror attacks or protest-related violence, social media users often rush to identify accused persons, victims or officials before authorities release details. The article shows how AI tools are now becoming part of that rush, lending unverified guesses an appearance of credibility.

Read the full article here.

Six free satellite sources worth revisiting

Open-source investigator Benjamin S recently shared a list of six free satellite tools that are useful for verification. You may already know most of them, but as a geolocation enthusiast, I wanted to highlight them in this first OSINT Pulse of the year.

1. Google Earth Pro

Great for historical context. You can scrub back years, and sometimes decades, to see when changes occurred.

2. Copernicus Browser

Not just for pretty images. It helps reveal what standard imagery cannot: SWIR (short-wave infrared) is useful for fires and burn scars, NDVI (normalised difference vegetation index) for vegetation health, and Sentinel-1 radar is invaluable when cloud cover blocks everything else. (For the curious, there is a short NDVI sketch after this list.)

3. Esri World Imagery Wayback

Ideal for high-resolution before-and-after checks. The swipe tool makes change detection quick. Always click through to confirm capture dates, as basemap labels can be misleading.

4. ArcGIS Map Viewer

Offers a clean, recent basemap and access to a wide range of additional layers that many users overlook.

5. Apple Maps

A surprisingly useful second perspective in some regions. The 3D models, where available, can add helpful geolocation context.

6. Bing Maps

Another solid cross-check with detailed aerial and oblique views in certain locations.
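
For readers curious about what one of those Copernicus layers actually computes: NDVI is just a ratio of two bands. Here is a minimal Python sketch, assuming you have already exported the red and near-infrared bands (Sentinel-2 bands B04 and B08) as arrays; Copernicus Browser computes this for you, so the sketch is only to show why the index separates vegetation from bare ground.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Roughly: values near +1 suggest dense, healthy vegetation; values near 0
    suggest bare soil or built-up areas; negative values usually mean water.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on nodata pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Toy example with made-up reflectance values:
nir_band = np.array([[0.60, 0.50], [0.30, 0.10]])
red_band = np.array([[0.10, 0.10], [0.20, 0.10]])
print(ndvi(nir_band, red_band))  # higher values where vegetation is likely
```

Healthy plants reflect strongly in near-infrared and absorb red light, which is why the ratio spikes over vegetation and stays low over concrete, soil or water.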

That’s all for January’s OSINT Pulse.

See you next month!

Dheeshma Puzhakkal

(OSINT Pulse is a monthly report by Dheeshma Puzhakkal, Editor of NewsMeter’s Fact Check team. The column tracks emerging developments in OSINT and AI, with a focus on what matters to Indian readers and OSINT professionals. For comments, insights, or leads, write to dheeshma.p@newsmeter.in. NewsMeter has no financial relationship with any of the companies or tools mentioned in this series.)
