In Short:
The IT ministry issued a revised AI advisory, removing the provision that required government permission to deploy AI models. The advisory applies to major social media intermediaries in India. It requires that under-tested AI models be labelled to inform users of their potential fallibility. Startups welcomed the move, saying it encourages innovation. Intermediaries must ensure AI models do not promote bias, discrimination, or unlawful content. The government aims to balance innovation with regulatory safeguards in the evolving AI landscape.
AI Companies Welcome IT Ministry’s Revised AI Advisory
Artificial intelligence (AI) companies have welcomed the IT ministry’s revised AI advisory, which was issued late on Friday. The new advisory eliminates the provision that previously required intermediaries and platforms to obtain government permission before deploying “under-tested” or “unreliable” AI models and tools in India.
Revised Advisory and Startups
Although the advisory was initially sent to eight significant social media intermediaries with more than 50 lakh registered users in India, it does not state that it applies only to these companies. The eight include Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), X (Twitter), Snap, Microsoft/LinkedIn (for OpenAI), and ShareChat.
Aakrit Vaish, the chief executive of conversational AI platform Haptik, stated that the revised AI advisory is a significant win for startups. He mentioned that the Ministry of Electronics and Information Technology (MeitY) listened to the concerns of startups, allowing for greater innovation in the country.
Feedback from Experts
Tanuj Bhojwani, head of people+ai, who gathered feedback from 75 companies on the previous AI advisory, said the revised advisory takes a fairer approach. The updated order holds intermediary platforms accountable under existing laws, emphasizing the need for transparency and user awareness.
Chaitanya Chokkareddy, the chief technology officer of Ozonetel, applauded the government for listening to feedback and updating its advisories. Pratik Desai from KissanAI acknowledged the importance of cautioning users about the limitations of AI technologies.
Transparency and User Awareness
The revised AI advisory now requires intermediaries to label or embed AI-generated content with unique metadata or identifiers. This step aims to combat misinformation and deepfake content, ensuring user safety while allowing for experimentation with new AI models.
Gaurav Juneja from Kapture highlighted that AI is still evolving, and that proactive regulatory measures like the revised advisory are necessary to strike a balance between innovation and regulation.
Legal Implications and Compliance
Ameet Datta, a partner at the law firm Saikrishna & Associates, noted the dynamic nature of AI development and the need for clear guidelines to support innovation, creators, and user protection. While the revised advisory encourages transparency, the legal landscape remains complex, requiring a dialogue between technology companies, legal experts, and policymakers to address compliance challenges.