Govt Tightens Rules on AI-Generated and Deepfake Content, Mandates Faster Takedown

The Union government has significantly tightened regulations governing artificial intelligence–generated and synthetic content, including deepfakes, in a move aimed at preventing misuse and protecting users online. Under the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, social media platforms and digital intermediaries will now be required to take down such content within three hours of it being flagged by a competent authority or a court.

The amendments, notified by the Ministry of Electronics and Information Technology (MeitY), will come into force on February 20, 2026. Officials said the changes are designed to address the growing risks posed by AI-powered content, which can spread rapidly and cause serious harm before corrective action is taken.

For the first time, the rules formally define AI-generated and synthetically created information. Any audio, visual, or audio-visual content created or altered using artificial intelligence to appear real or authentic will fall under this category. However, the government has clarified that routine editing, accessibility-related modifications, and good-faith educational or design work will not be treated as synthetic content under the law.

A key aspect of the amendment is that AI-generated content will now be treated on par with other forms of information when determining whether it involves unlawful activity. This means that deepfakes and manipulated media can be acted upon under existing legal provisions if they involve impersonation, deception, non-consensual material, child sexual abuse content, forged documents, or other illegal acts.

One of the most significant changes is the sharp reduction in takedown timelines. Earlier, intermediaries were allowed up to 36 hours to comply with government or court orders. The revised rules cut this window to just three hours, reflecting the government’s concern over the speed at which harmful content can go viral on digital platforms.

The amendments also introduce mandatory labelling of AI-generated content. Platforms that enable the creation or sharing of such material must ensure it is clearly and prominently labelled. Where technically feasible, the content must also carry permanent metadata or identifiers to signal that it has been generated or altered using artificial intelligence. Intermediaries have been explicitly barred from allowing the removal or suppression of these labels once they are applied.

In addition, platforms will be required to deploy automated tools to prevent the circulation of illegal and harmful AI-generated content, including material that is deceptive, sexually exploitative, or non-consensual, or that involves child sexual abuse, impersonation, explosives, or forged documents. The government has also shortened timelines for user grievance redressal, placing greater responsibility on platforms to respond swiftly to complaints.

The move signals a tougher regulatory approach as the government seeks to balance technological innovation with user safety and public trust. Officials have emphasised that the intent is not to curb legitimate use of artificial intelligence, but to ensure accountability and prevent the weaponisation of AI technologies in ways that can mislead, exploit, or harm individuals and society.
