India mandates AI content labelling and 3-hour takedown for platforms

India has officially updated its IT intermediary rules to mandate clear labeling of AI-generated content, including deepfakes and synthetic audio/video. Social media platforms must implement these changes by February 20, 2026, and will face a drastically reduced three-hour deadline for taking down flagged content. These stringent regulations aim to curb misinformation and enhance transparency online.

Key Highlights

  • AI-generated content must be clearly labeled for users.
  • Platforms have a strict three-hour window to remove flagged AI content.
  • Metadata and persistent identifiers will be required for traceability.
  • The earlier draft's 10% watermark rule was dropped in favor of a more flexible standard.
  • Users must declare AI-generated content uploads.
  • Platforms face penalties for non-compliance with new IT rules.

India has notified significant amendments to its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing a comprehensive regulatory framework for Artificial Intelligence (AI)-generated and synthetically created content. The amended rules, which take effect from February 20, 2026, require social media platforms and other digital intermediaries to clearly and prominently label all AI-generated content, including deepfakes, synthetic audio, and algorithmically altered visuals. The aim is to ensure that users can easily distinguish between real and AI-generated information, thereby combating misinformation and enhancing transparency in the digital space.

The amendments require intermediaries not only to label AI-generated content but also to embed persistent metadata and unique identifiers where technically feasible, so that such content can be traced back to its source. Crucially, platforms must not allow these labels or metadata to be removed or suppressed once applied, preserving their integrity.

For significant social media intermediaries, such as Instagram and YouTube, the obligations are tighter still. Before content is uploaded, these platforms must obtain a declaration from the user stating whether it is synthetically generated, and they must deploy automated tools to verify those declarations. Content identified as AI-generated must carry a visible disclosure before it is published.

Notably, the government has dropped a proposal from the draft released in October 2025 that would have mandated visible watermarks covering at least 10% of the screen space on AI-generated visuals. The change came after industry groups argued that the requirement was too rigid and technically impractical across content formats. The final rules instead adopt a principle-based standard: disclosures must be "clear, prominent and visible", with no prescribed size, placement, or format, allowing platforms greater flexibility.

In addition to the labeling requirements, the amendments significantly compress takedown timelines. Platforms now have a strict three-hour deadline to act on lawful orders from the government or courts, a sharp reduction from the previous 36-hour window. Other response windows for user complaints and violations have also been shortened.

The rules define "synthetically generated information" as audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that makes it appear real, authentic, or true, and that depicts or portrays any individual or event in a way that is, or is likely to be perceived as, indistinguishable from a natural person or a real-world event. Routine editing, accessibility improvements, and good-faith formatting are excluded from this definition.

The Ministry of Electronics and Information Technology (MeitY) has clarified that the framework is technology-agnostic and applies across use cases, with no exemptions for specific AI tools or categories of content. Responsibility for labeling lies with the intermediary, regardless of whether the content is user-generated or produced through platform-integrated AI systems.

Non-compliance with these rules can attract penalties under the Information Technology Act, 2000, and other applicable criminal laws.
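
The rules do not prescribe any particular tooling for labelling or traceability. Purely as an illustration of what pairing a visible disclosure with machine-readable metadata could look like, the sketch below uses Python and the Pillow imaging library; the field names (SyntheticContent, ContentID, Generator, LabelledAt), the UUID-based identifier, and the label wording are assumptions for demonstration, not part of the notified rules.

```python
# Illustrative sketch only: the IT Rules do not prescribe a tool, metadata
# format, or field names. Everything below is an assumed scheme.
import uuid
from datetime import datetime, timezone

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str, generator: str) -> str:
    """Add a visible disclosure and traceability metadata to an AI-generated image."""
    img = Image.open(src_path).convert("RGB")

    # Visible disclosure: the rules require it to be "clear, prominent and
    # visible" but leave size, placement, and format to the platform.
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-generated content", fill="white")

    # Persistent identifier and provenance details stored in PNG text chunks
    # (hypothetical keys; the rules only ask for traceability where feasible).
    content_id = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")
    meta.add_text("ContentID", content_id)
    meta.add_text("Generator", generator)
    meta.add_text("LabelledAt", datetime.now(timezone.utc).isoformat())

    img.save(dst_path, "PNG", pnginfo=meta)
    return content_id


if __name__ == "__main__":
    cid = label_synthetic_image("input.png", "labelled.png", generator="example-model")
    print(f"Labelled copy written with identifier {cid}")
```

A production system would more likely build on an established provenance standard such as C2PA Content Credentials rather than ad-hoc PNG text chunks, but the sketch conveys the basic idea of combining a visible label with machine-readable traceability data.
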
The government's objective with these regulations is to curb the misuse of AI and deepfakes online, enhance user safety, and promote accountability among digital platforms. These amendments reflect India's proactive stance in regulating emerging technologies and addressing the challenges posed by AI-generated content. The focus on transparency, traceability, and swift action against harmful content aims to create a more responsible digital ecosystem.

Frequently Asked Questions

What are the main requirements of India's new IT rules regarding AI-generated content?

The new IT rules mandate that all AI-generated content, including deepfakes, synthetic audio, and altered visuals, must be clearly and prominently labeled by social media platforms. Platforms are also required to embed metadata for traceability and users must declare if they are uploading AI-generated content. Additionally, platforms have a strict three-hour deadline to remove flagged AI-generated content.
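
For a concrete sense of how compressed the new timeline is, here is a minimal sketch that computes the latest permissible action time from the moment an order is received. Only the three-hour figure comes from the amended rules; the time-zone choice and function names are assumptions made for illustration.

```python
# Minimal sketch of deadline tracking under the three-hour takedown rule.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

IST = ZoneInfo("Asia/Kolkata")          # assumed time zone for the example
TAKEDOWN_WINDOW = timedelta(hours=3)    # the three-hour limit set by the rules


def takedown_deadline(order_received_at: datetime) -> datetime:
    """Return the latest time by which flagged content must be acted on."""
    return order_received_at + TAKEDOWN_WINDOW


if __name__ == "__main__":
    received = datetime(2026, 2, 20, 14, 0, tzinfo=IST)
    print("Order received:", received.isoformat())
    print("Deadline:      ", takedown_deadline(received).isoformat())
```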

When do these new rules for AI content come into effect in India?

The amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 come into effect from February 20, 2026.

What was the proposed 10% watermark rule, and why was it dropped?

An earlier draft of the rules proposed a mandatory visible watermark covering at least 10% of the screen space on AI-generated visuals. This was dropped in the final notification after industry stakeholders raised concerns about its rigidity and technical impracticality across different content formats. The final rules focus on clear, prominent, and visible labeling without specifying exact dimensions.

What are the penalties for non-compliance with these new AI content rules?

Non-compliance with these new rules can lead to penalties under the Information Technology Act, 2000, and other applicable criminal laws in India.

How does India's new AI content regulation address deepfakes specifically?

The rules directly address deepfakes by mandating their labeling as AI-generated content. The strict three-hour takedown window for flagged content also aims to quickly curb the spread of harmful deepfakes and misinformation, thereby protecting individuals and public trust.
