India just dropped the hammer on deepfakes. Starting February 20, social media giants including Meta, Google, and X will have as little as two hours to remove reported deepfake content or face consequences. The move makes India, home to more than 700 million internet users, one of the world's most aggressive regulators of AI-generated misinformation, setting a precedent that could reshape how platforms police synthetic media globally.
Meta, Google, and every major social platform operating in India just got their marching orders. The country's Ministry of Electronics and Information Technology is implementing sweeping changes to its IT Rules that shrink the window for removing deepfake content from the current 24-hour standard down to just two hours in certain cases. The regulations take effect February 20, giving companies barely ten days to retool their content moderation infrastructure.
The timing isn't coincidental. India has been battling a surge of AI-generated misinformation, particularly deepfake videos targeting politicians and celebrities ahead of regional elections. According to government statements cited by TechCrunch, the compressed timeline applies to content flagged as "manifestly harmful"—a category that includes deepfakes designed to spread misinformation, incite violence, or damage reputations.
For platforms like Meta's Facebook and Instagram, Google's YouTube, and X, the two-hour mandate represents a logistical nightmare. Current content moderation systems rely heavily on human reviewers for final decisions, a process that rarely moves at social media speed. India's 700 million internet users generate content at enormous scale, and distinguishing a sophisticated deepfake from legitimate satire or parody within 120 minutes demands automated detection at accuracy levels that, frankly, today's tools don't deliver.
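To make the operational problem concrete, here is a minimal sketch, in Python, of the kind of deadline-driven triage a platform would need: reports flagged "manifestly harmful" get the compressed two-hour window, everything else keeps the older 24-hour standard, and reviewers always pull whichever report expires soonest. All names here (TakedownQueue, Report, the category strings) are illustrative assumptions; no platform's actual moderation tooling is public.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
import heapq

# Hypothetical sketch of SLA-driven takedown triage. Nothing here reflects
# any platform's real internal API; names and categories are assumptions.

TWO_HOUR_SLA = timedelta(hours=2)   # India's compressed window
DEFAULT_SLA = timedelta(hours=24)   # the older standard

@dataclass(order=True)
class Report:
    # Ordering uses only the deadline, so a min-heap surfaces the most
    # urgent report first.
    deadline: datetime
    content_id: str = field(compare=False)
    category: str = field(compare=False)

class TakedownQueue:
    def __init__(self) -> None:
        self._heap: list[Report] = []

    def file(self, content_id: str, category: str, reported_at: datetime) -> None:
        # "Manifestly harmful" reports get the two-hour deadline.
        sla = TWO_HOUR_SLA if category == "manifestly_harmful" else DEFAULT_SLA
        heapq.heappush(self._heap, Report(reported_at + sla, content_id, category))

    def next_due(self) -> Report | None:
        # Reviewers (human or automated) always take the report whose
        # deadline expires soonest.
        return heapq.heappop(self._heap) if self._heap else None

if __name__ == "__main__":
    q = TakedownQueue()
    q.file("vid_123", "manifestly_harmful", datetime.now())
    q.file("vid_456", "other", datetime.now())
    print(q.next_due())  # vid_123: due within two hours
```

The queue itself is trivial; the hard part the article describes is what fills it. A two-hour deadline only works if automated classifiers can reliably pre-sort "manifestly harmful" from satire before a human ever looks, and that is exactly the accuracy gap the paragraph above points to.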