Google has issued an apology to the Indian government after its AI platform, Gemini, made unfounded comments about Prime Minister Narendra Modi. The apology came in response to a government notice seeking an explanation for the AI's questionable outputs. The incident raised concerns about the reliability of AI platforms and their use in consumer-facing products, especially during trial phases.
Minister of State for IT & Electronics Rajeev Chandrasekhar voiced the government's dissatisfaction with Google's response and criticized the tendency of some AI platforms to treat India as a testing ground. The incident prompted the announcement of new regulations requiring AI platforms to obtain a permit to operate in India. The minister emphasized transparency and accountability, insisting that platforms inform users of their potential unreliability and of the risk that they may generate incorrect or unlawful content.
The government’s stance is clear: AI platforms must adhere to Indian IT and criminal laws, and cannot cite unreliability as a defense against legal action for spreading misinformation. This move reflects a broader concern over the impact of AI on society and the need for robust regulatory frameworks to ensure these technologies are developed and deployed responsibly.
To address these issues, the government has also advised AI startups to clearly label unverified information to curb the spread of misinformation. The advisory is part of a larger effort to tackle challenges posed by AI, including deepfakes, and to ensure that the technology serves the public good while protecting citizens from potential harm.