Draft IT Amendment Rules, 2025: The choice is not between regulation and innovation. It is between smart regulation and stifling overreach
On October 22, 2025, the Ministry of Electronics and Information Technology (MeitY) notified the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (“Draft Rules, 2025”). At first glance, it seemed a routine regulatory update, another addition to India’s expanding digital framework. Yet, beneath its unassuming title lies a shift that could alter the balance between innovation, regulation, and liability in India’s digital economy.
The Draft Amendment marks a reconfiguration of the safe harbour, the legal shield that protects intermediaries from liability for user-generated content under Section 79 of the Information Technology Act, 2000 (“IT Act”). Once the bedrock of India’s internet economy, this protection told companies they could host and innovate freely, provided they acted responsibly when unlawful content was flagged. But that immunity has steadily become more onerous to claim since the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and with the 2025 Draft Rules, it now stands on even shakier ground.
At the heart of this new regulation lies a modern menace: the rapid spread of 'synthetically generated information'. Deepfakes, AI-cloned voices, and manipulated videos have blurred the line between truth and fabrication. MeitY's proposed amendments aim to address these user harms by demanding stronger due diligence from intermediaries, both those that enable the creation or modification of such content and those that enable its display or publication.
Under the Draft Rules, any intermediary enabling the creation or alteration of synthetic content must ensure it carries a clear label, one that covers at least 10% of the visual surface area, or the first 10% of audio duration. A digital “warning sign,” if you will.
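To make that arithmetic concrete, here is a minimal sketch of what the labelling floor implies; the helper names and the example dimensions are ours, not the Draft Rules' text:

```python
# Illustrative arithmetic for the Draft Rules' labelling floor: at least 10%
# of the visual surface area, or the first 10% of audio duration. The
# constant and these helpers are our own assumptions, not the Rules' text.

LABEL_COVERAGE_RATIO = 0.10

def min_label_area_px(width_px: int, height_px: int) -> float:
    """Smallest permissible label area, in pixels, for a given frame."""
    return LABEL_COVERAGE_RATIO * width_px * height_px

def audio_disclosure_span_s(duration_s: float) -> tuple[float, float]:
    """The opening interval, in seconds, an audio disclosure must occupy."""
    return (0.0, LABEL_COVERAGE_RATIO * duration_s)

# A 1920x1080 frame needs a label of at least 207,360 square pixels -- for
# instance, a full-width banner 108 pixels tall. A 60-second audio clip
# needs a 6-second opening disclosure.
print(min_label_area_px(1920, 1080))    # 207360.0
print(audio_disclosure_span_s(60.0))    # (0.0, 6.0)
```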
However, the real weight of the rule falls on the shoulders of Significant Social Media Intermediaries (SSMIs), platforms with more than 50 lakh registered users in India. They must not only ask users to declare whether their content is synthetic, but also verify the accuracy of these declarations through “reasonable and appropriate technical measures.” After verification, the platform must then ensure that content confirmed to be synthetically generated is labelled as such before it is published.
The explanation to Rule 4(1A) of the Draft IT Amendment Rules, 2025, however, extends the scope of the obligation proposed for SSMIs. Intermediaries must deploy reasonable and proportional technical measures not only to verify the accuracy of the declaration, but also to ensure that no synthetically generated information slips through without declaration or label. The explanation's language folds a duty to detect into the duty to verify: how else could an SSMI verify the accuracy of a declaration that content is or is not synthetically generated without first determining, for itself, the synthetic or non-synthetic nature of that content? Verification, after all, presupposes detection. At first this seems sensible, even necessary, but the devil is in the details.
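To see why, consider a hedged sketch of the moderation workflow the explanation appears to demand. Everything here is illustrative: the watermark heuristic, the function names, and the flow are our assumptions, not anything the Draft Rules prescribe. The structural point is simply that verifying a user's declaration forces the platform to first form its own judgement about the content:

```python
# Hypothetical sketch of the declare -> detect -> verify -> label pipeline
# that Rule 4(1A)'s explanation seems to imply. Nothing here is prescribed
# by the Draft Rules; it illustrates that verification entails detection.

from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    user_declared_synthetic: bool  # the declaration the SSMI must collect

def detect_synthetic(content: bytes) -> bool:
    # Placeholder heuristic: look for a provenance marker (a C2PA-style
    # watermark, say). A real deployment would need probabilistic ML
    # detectors, which is precisely where the liability problem begins.
    return b"SYNTHETIC-WATERMARK" in content

def moderate(upload: Upload) -> str:
    detected = detect_synthetic(upload.content)        # detection comes first...
    if detected != upload.user_declared_synthetic:
        return "flag: declaration fails verification"  # ...verification second
    if detected:
        return "publish with synthetic-content label"
    return "publish"

# A truthful declaration of synthetic content gets labelled and published;
# a false declaration is caught only if the detector is right.
print(moderate(Upload(b"...SYNTHETIC-WATERMARK...", True)))  # labelled publish
print(moderate(Upload(b"genuine footage", True)))            # flagged
```

The fragility is visible in the stub itself: any real detector is probabilistic, and every synthetic file it misses becomes, under the proviso discussed below, content the platform failed to act upon.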
In Shreya Singhal v. Union of India, the Supreme Court held that an intermediary acquires “actual knowledge” of unlawful content, thereby triggering an obligation to take it down, only when it receives a court or government order directing its removal. The rationale was clear: without this protection, intermediaries would be forced to be arbiters of which reports of content being unlawful were legitimate and which weren’t.
By requiring platforms to detect synthetic content, the Draft Rules alter the legal threshold for 'actual knowledge', a key determinant of intermediary liability. The proviso to Rule 4(1A) states that if an intermediary "knowingly permits, promotes or fails to act upon" synthetically generated information, it is deemed not to have fulfilled its due diligence obligations, thereby losing safe harbour protection. Read alongside the requirement to employ technical means of detection, this creates a presumption of knowledge. The moment a company adopts detection tools, it is effectively presumed to have knowledge of any synthetic content it fails to catch.
For SSMIs, this means an expensive, technically demanding compliance burden. They must develop detection and labelling systems, build verification features into their user workflows, and somehow keep pace with synthetic content that evolves faster than the tools designed to find it. The irony is stark: India, while championing its “Ease of Doing Business,” may be making it harder to do precisely that in the digital economy.
Worse still, these new obligations risk breeding over-censorship. When legal immunity is uncertain, platforms err on the side of caution. To avoid liability, they may preemptively suppress user-generated content, silencing legitimate voices in the process.
To be clear, the need to regulate synthetically generated misinformation is real, even urgent. Deepfakes can distort elections, ruin reputations, and destabilize societies. But in trying to fix this, we must not dismantle the foundation that enabled India's digital revolution: the safe harbour clause.
The choice is not between regulation and innovation. It is between smart regulation and stifling overreach. The solution may lie in shifting liability from the intermediary to the user or creator who deploys undeclared synthetic content. The safe harbour was never meant to be a shield for irresponsibility, and neither should it become collateral damage in the fight against misinformation. As India charts its digital future, we must remember that a harbour under siege is no safe place for anyone, not for platforms, not for innovators, and certainly not for the democratic discourse that thrives upon them.
Nathishia Chandy is a Research Assistant and Ankeetaa Mahesshwari is an Associate Fellow at Pahlé India Foundation.