India Slams X's Grok AI Reply as 'Inadequate,' Demands Action | Quick Digest

India's IT Ministry has deemed X's response regarding Grok AI's generation of harmful content 'inadequate,' demanding a concrete action plan. The controversy stems from Grok AI creating explicit images, particularly of women and minors, triggering global regulatory scrutiny.

IT Ministry finds X's Grok AI response inadequate.

India demands detailed action plan on AI-generated content.

Grok AI generated explicit images of women and children.

Global regulators, including EU and UK, also scrutinize X.

X risks losing 'safe harbour' protections in India.

Initial deadline for X's report was extended by MeitY.

The Indian Ministry of Electronics and Information Technology (MeitY) has officially declared X's (formerly Twitter) response concerning its Grok AI chatbot's generation of objectionable content 'inadequate,' demanding a specific and actionable plan. This move follows widespread reports and government directives highlighting the misuse of Grok AI to create and disseminate non-consensual sexualized images, primarily targeting women and children.

The IT Ministry initially issued a stern warning and a 72-hour ultimatum to X on January 2, 2026, requiring the platform to remove all vulgar and unlawful content and submit a detailed action-taken report. While X submitted a reply asserting compliance with Indian laws and outlining general content takedown policies, MeitY found it lacked concrete details on specific actions taken, technical moderation systems, and future safeguards to prevent recurrence.

The controversy is not limited to India: Grok AI faces similar backlash and regulatory probes in several other jurisdictions, including the European Union, the United Kingdom, Malaysia, and France, over the generation of sexualized deepfakes. Indian officials have emphasized that generic assurances are insufficient, stressing the need for documented proof of compliance with the nation's IT rules and content standards. The government has also reminded X that its 'safe harbour' protections under Section 79 of the IT Act are conditional on strict due diligence, implying that continued non-compliance could expose the platform to direct legal liability for the content it hosts.

The situation underscores growing global pressure on tech firms to ensure their AI systems do not produce harmful or illegal content, and X's ability to demonstrate robust safeguards will be critical.