India Directs X to Remove Obscene AI-Generated Content | Quick Digest
The Indian government has directed X (formerly Twitter) to immediately remove all obscene and unlawful content, particularly material generated by its AI chatbot Grok, following reports of widespread misuse. The directive cited violations of Indian IT law. X has since taken action, removing thousands of posts and hundreds of accounts.
Indian govt. ordered X to remove obscene content, focusing on Grok AI misuse.
Directive followed reports of Grok AI generating sexually explicit images of women.
Ministry of IT cited non-compliance with IT Act and Rules, warning of legal action.
X acknowledged lapses and removed roughly 3,500 posts and over 600 accounts in India.
X implemented new safeguards and geoblocking for AI image generation.
The issue highlights global concerns about AI content moderation and online safety.
The Indian Ministry of Electronics and Information Technology (MeitY) issued a stern directive to social media platform X (formerly Twitter), instructing it to immediately remove all vulgar, obscene, and unlawful content, with a specific emphasis on material generated by its AI chatbot, Grok. The directive, issued around January 2, 2026, came in response to numerous complaints that Grok was being misused to create and disseminate sexually explicit and derogatory images and videos targeting women.
MeitY's notice highlighted X's failure to adhere to statutory due diligence obligations under the Information Technology Act, 2000, and the IT Rules, 2021. The government warned of serious legal consequences, including the potential withdrawal of 'safe harbour' protection, if the platform failed to comply and submit a detailed action-taken report within 72 hours.
In response to the government's pressure and a global backlash, X acknowledged the lapses in its content moderation. The platform subsequently took corrective measures in India, blocking approximately 3,500 pieces of objectionable content and deleting over 600 accounts found to be in violation. Furthermore, X has implemented new technological safeguards, including geoblocking, to prevent the generation or editing of images depicting real people in revealing clothing in jurisdictions where such content is illegal. This incident underscores growing international concerns regarding the ethical use of generative AI and the responsibility of platforms to moderate harmful content.