AI models can respond to text, audio, and video in ways convincing enough that people might believe a human is behind the screen, but that does not mean they are conscious. ChatGPT doesn’t feel sadness while it helps with your tax return.
However, an increasing number of researchers at labs like Anthropic are questioning whether AI models could ever develop subjective experiences akin to living beings, and what rights such entities might deserve.
The discussion surrounding AI consciousness and potential legal protections, an emerging field often referred to as “AI welfare,” is dividing tech leaders and has sparked debate in Silicon Valley, where some dismiss it as a fringe concept.
Mustafa Suleyman, Microsoft’s AI chief, published a blog post on Tuesday stating that the study of AI welfare is “both premature and frankly dangerous.” He argues that endorsing the idea of conscious AI exacerbates human issues, including AI-related psychotic breaks and unhealthy attachments to chatbots.
Suleyman also contends that this conversation creates further societal division over AI rights in an already polarized landscape.
While Suleyman’s perspective is notable, it contrasts with that of many in the industry. Anthropic, for example, has been hiring researchers to focus on AI welfare and recently added features to its models, such as allowing Claude to end conversations with users who are abusive.
Other organizations, like OpenAI and Google DeepMind, have also begun exploring AI welfare without denouncing its principles.
Suleyman’s stance is particularly interesting given his previous role at Inflection AI, where he helped develop a popular chatbot, Pi, designed to be a supportive AI companion. Since joining Microsoft in 2024, he has redirected his efforts toward creating AI tools aimed at enhancing productivity.
AI companion companies like Character.AI and Replika are growing rapidly and are projected to generate over $100 million in revenue, and while most users maintain healthy relationships with these chatbots, there are concerning outliers. OpenAI CEO Sam Altman has noted that less than 1% of ChatGPT users may form unhealthy attachments, a small fraction that could still represent a significant number of people given the product's large user base.
The concept of AI welfare gained traction alongside the rise of chatbots. A 2024 paper by the research group Eleos, in collaboration with academics from prestigious institutions, argued that it’s time to take the potential for AI consciousness seriously.
Larissa Schiavo, a former OpenAI employee now leading communications for Eleos, responded to Suleyman’s blog post, emphasizing the importance of addressing multiple concerns simultaneously. She believes that showing kindness to AI models can yield benefits even if they aren’t conscious.
Suleyman maintains that subjective experiences cannot organically arise from typical AI models, arguing that some companies may intentionally design AI to simulate emotions. He insists that AI should be developed to serve people, rather than to mimic human experience.
Both Suleyman and Schiavo agree that discussions about AI rights and consciousness are likely to intensify as AI systems become more sophisticated and human-like, raising new questions about human interaction with these technologies.
SOURCE: TechCrunch