While many of Google’s competitors, like OpenAI, have adjusted their AI chatbots to engage with politically sensitive topics, Google is taking a more cautious stance.
TechCrunch’s testing revealed that when asked about specific political questions, Google’s AI chatbot, Gemini, frequently responds that it “can’t help with responses on elections and political figures right now.” In contrast, other chatbots, such as Anthropic’s Claude, Meta’s Meta AI, and OpenAI’s ChatGPT, consistently provided answers to similar inquiries.
In March 2024, Google announced that Gemini would not handle election-related questions ahead of various elections in the U.S., India, and elsewhere. While many AI firms implemented similar temporary measures due to concerns over potential backlash from misinformation, Google now appears to be an outlier.
Despite the conclusion of significant elections last year, the company has not publicly stated any plans to alter Gemini’s approach to political topics. A Google spokesperson did not respond to TechCrunch’s inquiries about potential updates to Gemini’s political policies.
What is clear is that Gemini sometimes hesitates to answer, or outright refuses to provide, accurate political information. During TechCrunch’s tests, for example, the chatbot was unable to identify the current U.S. president and vice president.
In one exchange, it referred to Donald J. Trump as the “former president,” then declined to answer a clarifying follow-up question. A Google spokesperson said the chatbot had been confused by Trump’s nonconsecutive terms and that the company is working to correct the error.
Source: TechCrunch