X, formerly known as Twitter, has removed more than 600 accounts and blocked around 3,500 pieces of content in India after facing regulatory action over the misuse of its artificial intelligence tool, Grok. The move comes after the government flagged the circulation of obscene, sexually explicit, and harmful material on the platform, much of it allegedly generated through the AI system.

According to government sources, the company has acknowledged shortcomings in its content moderation framework and admitted that lapses occurred in enforcing its internal standards. X has now assured Indian authorities that it will strictly follow all applicable laws and regulations going forward and will not allow any form of obscene imagery or unlawful material to appear on its service.
The controversy began on January 2, when the Ministry of Electronics and Information Technology (MeitY) issued a legal notice to X, citing serious violations of statutory obligations under the Information Technology Act and related IT Rules. The ministry raised concerns that Grok was being used to generate and circulate content that violated dignity, privacy, and digital safety, particularly in cases involving women and children.
Officials described the spread of such content as a grave breach of India’s cyber and child protection laws. They warned that platforms operating in the country are legally responsible for preventing the circulation of unlawful material and must act swiftly when violations occur.
Following the notice, MeitY directed X to conduct an immediate review of Grok’s technical design and governance structure to prevent further misuse. The company was instructed to remove all illegal content without delay, take action against users responsible for creating or sharing such material, and submit a detailed action-taken report within 72 hours.
The ministry also issued a strong warning that non-compliance would lead to strict legal consequences under the IT Act. These penalties could extend not only to the platform but also to its responsible officers and users found violating the law.
In response, X told authorities that it has taken corrective steps and is strengthening its internal systems to ensure safer and more responsible use of its AI technology. The company said it is committed to operating in full compliance with Indian laws and improving its content moderation practices to prevent similar incidents in the future.
The episode highlights the growing regulatory pressure on social media platforms as governments worldwide seek tighter oversight of artificial intelligence and digital content. With India taking a firm stand on online safety, the action against X marks a significant moment in the country’s efforts to ensure accountability in the digital ecosystem.
