India's New IT Rules: Faster Takedowns for Deepfakes and Unlawful Content

India has amended its IT Rules, mandating social media platforms to remove unlawful content within three hours, a significant reduction from the previous 36-hour window. The new regulations, effective February 20, 2026, also introduce stringent requirements for labeling AI-generated content, aiming to combat deepfakes and misinformation.

Key Highlights

  • Platforms must remove unlawful content within three hours of notification.
  • Rules target deepfakes and AI-generated content, requiring clear labeling.
  • Compliance timeline significantly reduced from previous 36-hour period.
  • Amendments to IT Rules, 2021, formally define 'synthetically generated information'.
  • New regulations come into force on February 20, 2026.

The Indian government has significantly tightened its regulatory framework for social media platforms and online content with the notification of amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These amendments, notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, introduce a stringent three-hour deadline for social media companies to remove unlawful content once they are notified by a court or a competent government authority. This marks a sharp reduction from the earlier compliance window of 36 hours. For highly sensitive content, such as non-consensual nudity and deepfakes, the takedown timeline is even shorter, set at just two hours.

A central focus of these new rules is the burgeoning challenge of artificial intelligence-generated content, particularly deepfakes and misinformation. For the first time, the IT Rules formally define 'synthetically generated information' (SGI), encompassing any audio, visual, or audio-visual content that is artificially or algorithmically created or altered to appear real or indistinguishable from natural persons or real-world events. This definition specifically targets deceptive AI-generated impersonations, while exempting routine editing, accessibility enhancements, and academic or training materials that do not materially distort the original meaning.

Under the amended rules, social media intermediaries offering tools for the creation or dissemination of synthetic content are now mandated to ensure such material carries a clear and prominent label. Furthermore, where technically feasible, platforms are required to embed permanent metadata or provenance identifiers, including unique identifiers, to trace the origin of the content back to the intermediary resource used to generate it. The rules explicitly prohibit platforms from allowing the removal or suppression of these labels or metadata once applied.
Before a user uploads content, significant social media intermediaries must now require a declaration of whether the information is synthetically generated and deploy reasonable technical measures, including automated tools, to verify the correctness of that declaration. The government's rationale behind these stricter regulations is to enhance user safety, curb the spread of misinformation, and combat the rising misuse of AI technologies for creating deceptive content.

The amendments place enhanced due-diligence obligations on intermediaries, requiring them to proactively deploy automated tools and technical safeguards to prevent the hosting and dissemination of unlawful AI-generated content, rather than merely reacting to complaints. The scope of prohibited content includes, but is not limited to, non-consensual material, child sexual abuse material, content related to forged or false documents, impersonation, and content involving explosives or other serious offenses.

These updated regulations build upon India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which had already expanded due-diligence obligations for major social media intermediaries. The new rules, coming into force on February 20, 2026, represent a significant escalation in India's efforts to regulate online speech and content, posing potential compliance challenges for global technology companies like Meta, Google (YouTube), and X (formerly Twitter). Legal experts have noted the practical difficulty of consistently removing content within a three-hour window, suggesting the deadline leaves platforms little scope to apply independent judgment or to resist questionable takedown orders. The government's public notification does not give specific reasons for the drastic reduction in the takedown timeline.
The move reinforces India's position as one of the world's most aggressive regulators of online content, requiring platforms to balance compliance in a market with over a billion internet users against mounting concerns over government censorship raised by digital rights advocates. The amendments also shorten other grievance-redressal timelines: general grievances must now be disposed of within 7 days instead of 15, urgent actions completed within 36 hours instead of 72, and certain content-removal complaints acted on within 2 hours.

Frequently Asked Questions

What are the new amendments to India's IT Rules?

The new amendments to India's IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate social media platforms to remove unlawful content within three hours of notification and introduce stringent regulations for AI-generated content, including mandatory labeling and embedded metadata.

When will these new IT Rules come into effect?

These new regulations, officially notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, will come into force starting February 20, 2026.

What kind of content is affected by the new takedown rules?

The rules apply to 'unlawful content' as flagged by a court or competent authority. They specifically target 'synthetically generated information' (deepfakes, AI-generated misinformation) and also reduce takedown times for sensitive content like non-consensual nudity to as little as two hours.

What are the new obligations for social media platforms regarding AI content?

Platforms offering tools for creating or sharing synthetically generated content must ensure it is clearly and prominently labeled, embed permanent metadata or provenance identifiers where technically feasible, prevent removal of those labels once applied, and obtain user declarations about AI-generated content, verifying them with automated tools.

Why has the Indian government introduced these stricter rules?

The government has introduced these rules to enhance user safety, combat the spread of misinformation, tackle the rising threat of deepfakes and deceptive AI-generated content, and impose greater accountability on online platforms.

Read Full Story on Quick Digest