In a significant move to regulate artificial intelligence (AI), the Government of India has mandated that tech companies obtain official approval before publicly releasing AI tools deemed “unreliable” or still under trial. The directive, issued last Friday by the country’s IT ministry, also requires that such tools be explicitly labeled to warn users that they may return incorrect answers.
The move comes amid global efforts to establish governance frameworks for AI. India, in particular, has been steadily tightening its oversight of social media companies, for which the country is a key growth market.
The announcement follows critical remarks made on February 23 by a high-ranking official about Google’s Gemini AI tool, which drew backlash for generating responses suggesting that Prime Minister Narendra Modi had enacted policies described as “fascist.” Google responded promptly, acknowledging the tool’s limitations, particularly around recent events and political topics, and affirming its commitment to improving reliability.
In response, Deputy IT Minister Rajeev Chandrasekhar said on social media that platforms’ legal obligations of safety and trust still apply, stating, “‘Sorry Unreliable’ does not exempt from law.”
The ministry’s advisory also includes provisions aimed at safeguarding the integrity of the electoral process, particularly pertinent as India prepares for general elections this summer, in which the ruling Hindu nationalist party is widely expected to win a decisive victory.
As countries worldwide grapple with the challenges and opportunities presented by AI, India’s proactive stance illustrates a commitment to ensuring that technological advancements do not compromise legal standards or the democratic process.