OSINT Pulse: October 2025 | Tracking AI, deepfakes and the fight for digital transparency
The Indian government has proposed new rules to help users clearly identify AI-generated or synthetically altered content online.
By Dheeshma Puzhakkal
Hyderabad: As AI continues to blur the line between what’s real and what’s synthetic, investigators, journalists, and policymakers are racing to keep up.
This month, India took a significant step toward regulating AI-generated media. New research and emerging tools shed light on how deepfakes and synthetic videos are reshaping the information landscape.
Here’s what’s new in the world of OSINT and digital transparency this October.
India’s draft rules to label AI-generated content
In a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Ministry of Electronics and IT (MeitY) has suggested mandatory labelling for all AI-created visuals, videos and audio.
The move comes amid growing concerns over the rise of deepfakes and misleading AI content that can distort public perception, particularly during elections or sensitive events.
As per the draft, AI-generated images and videos must display a visible label covering at least 10 per cent of the frame, while audio content should include a disclaimer during the first 10 per cent of its duration. Platforms will also be required to ask uploaders whether the content they post was generated or modified using AI, and to take ‘reasonable and appropriate’ steps to verify these claims.
The rules also prohibit the removal or alteration of such labels, making traceability a shared responsibility between users and platforms. Public feedback on the draft is open until November 6.
While the draft rules are a welcome step toward accountability, their implementation may pose several challenges.
Enforcing a label that covers 10 per cent of an image or video could be technically difficult or visually intrusive. Automated systems used to verify AI-generated content may also produce false positives or negatives, risking both over-censorship and under-detection. Moreover, the rule’s broad definition of ‘synthetic content’ could unintentionally include harmless edits, remixes or parodies.
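To illustrate what the 10 per cent labelling requirement might look like in practice, here is a minimal sketch that stamps an ‘AI-GENERATED’ banner across the bottom tenth of an image using the Python Pillow library. The draft rules do not prescribe any implementation; the banner placement, wording and styling below are assumptions for illustration only.

```python
# Minimal sketch (not an official implementation): overlay a visible
# "AI-GENERATED" label covering roughly 10% of an image's frame,
# using the Pillow library. Placement and styling are assumptions.
from PIL import Image, ImageDraw

def add_ai_label(input_path: str, output_path: str, text: str = "AI-GENERATED") -> None:
    """Draw a labelled banner across the bottom 10% of an image."""
    img = Image.open(input_path).convert("RGB")
    width, height = img.size

    # Banner height set to 10% of the frame, the minimum in the draft rules.
    banner_height = max(1, int(height * 0.10))
    draw = ImageDraw.Draw(img)

    # Solid black banner across the bottom of the frame, with white text.
    draw.rectangle([(0, height - banner_height), (width, height)], fill=(0, 0, 0))
    draw.text((10, height - banner_height + banner_height // 3), text, fill=(255, 255, 255))

    img.save(output_path)

# Hypothetical usage:
# add_ai_label("generated.jpg", "generated_labelled.jpg")
```

Even this simple approach highlights the trade-off regulators face: a banner large enough to satisfy the 10 per cent threshold is, by design, visually intrusive.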
The growing threat of AI-generated deepfakes and the user awareness gap
The ISB Institute of Data Science recently released a study titled ‘Fact-Checking India: Identifying the Spread of Fake News and Policy Recommendations for Combatting Misinformation.’
The study offers crucial data on deepfake content, highlighting the sophistication of the technology and a worrying gap in user awareness that OSINT professionals must track. Deepfakes, a form of manipulated content, use machine learning algorithms to embed faces and voices into images, video and audio recordings of real individuals, enabling highly realistic impersonations generated entirely from digital sources. This capacity for deception poses significant risks, from spreading misinformation during elections to enabling blackmail and reputational damage.
Crucially, the study found a striking disparity in user perception: approximately 48.22 per cent of users reported having encountered deepfake content on social media platforms, while the remaining 51.78 per cent said they had not. This split suggests that a significant number of users struggle to identify deepfake material, often sharing it because they mistakenly believe it to be authentic. This widespread difficulty in spotting high-tech deception amplifies the risks of deepfakes, which include political manipulation and cyberbullying.
Further analysis reveals stark differences in platform susceptibility, providing critical context for tracking misinformation.
The study found that Facebook exhibits the highest prevalence of deepfake content, with 73.68 per cent of users reporting encounters. This is followed by YouTube at 52 per cent. In contrast, Twitter users reported lower exposure to deepfakes, with only 27.36 per cent of users encountering such material.
These platform-specific findings underscore where the AI-generated threat is most concentrated and potentially impactful. The real-world consequences of this proliferation became evident during the May 2024 Lok Sabha elections, when a deepfake video surfaced featuring West Bengal Chief Minister Mamata Banerjee. The doctored video went viral across social media platforms, leading to ‘widespread pandemonium and heated debates’ and demonstrating the technology’s immediate power to manipulate public perception ahead of a contentious electoral period.
Addressing this challenge requires proactive measures, including enhanced detection technologies and robust media literacy programs.
Sora: The next frontier in video disinformation and the crisis of trust
The release of OpenAI’s Sora has fundamentally shifted the AI landscape, making the creation of realistic disinformation ‘extremely easy and extremely real’.
This text-to-video application allows a user to generate almost any desired footage using only a text prompt. Testing conducted by The New York Times after the app’s debut on October 3 revealed the application’s alarming capability to generate strikingly realistic footage of events that never occurred.
Within its first three days, users deployed the application to create videos of ‘ballot fraud, immigration arrests, protests, crimes and attacks on city streets.’ Further testing by The New York Times confirmed that Sora generated fake videos depicting ‘store robberies and home intrusions, even bomb explosions on city streets.’
This hyper-realistic output raises the risk that such content will lead to real-world consequences, including swinging elections, defrauding consumers or framing innocent people for crimes.
While OpenAI stated it included guardrails and usage policies prohibiting ‘misleading others through impersonation, scams or fraud,’ the testing performed by The New York Times showed these safeguards were not foolproof.
For example, though the app refused to produce videos of world leaders like President Trump, The New York Times reported that when prompted to create a political rally, Sora produced a video featuring the ‘unmistakable voice of former President Barack Obama’.
Additionally, the app will generate content featuring deceased public figures such as the Rev. Dr Martin Luther King Jr and Michael Jackson. Sora’s high-quality video content threatens to destroy what remains of public trust in visual media. Experts are increasingly concerned about the ‘liar’s dividend’: the phenomenon whereby increasingly high-calibre AI videos allow people to dismiss authentic content as fake.
This technological leap means that even experts devoted to spotting fabrications now struggle ‘at first glance to distinguish real from fake’, leaving ‘almost no digital content that can be used to prove that anything in particular happened.’
Tool Spotlight: OSINT MOE
Credit: @CyberDetective
OSINT MOE is a powerful AI-driven tool designed to help investigators collect and visualise information about people, companies and topics of interest. It automatically maps digital relationships in an interactive graph format, allowing users to see connections that might otherwise go unnoticed. The platform is known for its speed and depth of data processing, making it especially useful for analysts who need to trace networks, explore entity links or conduct background research efficiently.
Free demo: https://osint.moe
Creator: @klntsky
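For readers who want a feel for the kind of entity-relationship mapping OSINT MOE automates, the sketch below models a small investigation graph with the Python networkx library. This is a conceptual illustration only; the entities, relationships and library choice are assumptions and are not based on OSINT MOE’s own data model or API.

```python
# Conceptual sketch only: modelling entity relationships as a graph with
# networkx. The entities and edge labels below are hypothetical examples,
# not data from OSINT MOE.
import networkx as nx

graph = nx.Graph()

# Hypothetical entities uncovered during an investigation.
graph.add_node("Acme Exports Ltd", kind="company")
graph.add_node("J. Doe", kind="person")
graph.add_node("shellcorp.example", kind="domain")

# Relationships (edges) with a label describing each connection.
graph.add_edge("J. Doe", "Acme Exports Ltd", relation="director")
graph.add_edge("Acme Exports Ltd", "shellcorp.example", relation="registered domain")

# Walk the graph to list connections for one entity.
for neighbour in graph.neighbors("Acme Exports Ltd"):
    relation = graph.edges["Acme Exports Ltd", neighbour]["relation"]
    print(f"Acme Exports Ltd --{relation}--> {neighbour}")
```

Tools like OSINT MOE essentially automate this kind of graph construction at scale and render it interactively, which is where their value for network tracing lies.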
See you next month!
Dheeshma Puzhakkal
(This is part of a monthly report written by Dheeshma Puzhakkal, Editor of NewsMeter’s Fact Check team, on emerging developments in OSINT and AI, with a focus on what matters most to Indian readers and OSINT professionals. For comments, insights and leads, please email dheeshma.p@newsmeter.in. We do not have a financial relationship with any of the companies or tools mentioned as part of this series.)






