
Google’s Gemini AI faces global backlash and a controversial silence


Google’s foray into text-to-image functionality with its Gemini AI model faced swift criticism and subsequent suspension following a tumultuous rollout earlier this month.

The chatbot, previously known as Bard, drew an immediate wave of discontent from users who reported inaccuracies and biases in its generated images.

The rebranded Gemini exhibited inconsistencies that fuelled this dissatisfaction, and many users took to social media platforms, including X (formerly Twitter), to air their grievances.

The chatbot’s image generation tool allegedly skewed towards depicting individuals of colour, even in contexts where such depictions were considered unwarranted. Some users claimed the AI consistently favoured individuals of colour while refusing to produce images of “white people.”

Moreover, critics highlighted inaccurate depictions of historically significant figures such as the “Founding Fathers of America” and the “Pope.”

In addition to image generation, Gemini faced scrutiny for bias in its text-based responses, which showed similar favouritism towards individuals of colour. The chatbot’s answers appeared to vary with the racial background of the subject, further contributing to the dissatisfaction voiced by users.

Responding to the widespread criticism, Google opted to suspend the image generation capability of Gemini AI. The company acknowledged the reported inaccuracies and biases, committing to address these issues before reintroducing the feature in the coming weeks.

The incident underscores the challenges and responsibilities associated with the deployment of advanced AI models, particularly in addressing biases and ensuring accurate and fair outcomes in diverse contexts.
