
"What if an AI said it was 72% certain it was conscious?" Anthropic CEO Dario Amodei says it is “hard” to tell if the models are conscious

Dario Amodei, CEO of Anthropic, said that researchers are not sure whether advanced AI systems could develop something resembling consciousness.

By Ishita Ganguly

Anthropic CEO Dario Amodei


In a recent interview with the New York Times, Dario Amodei, CEO of Anthropic, acknowledged that researchers are not sure whether advanced AI systems could develop something resembling consciousness. The NYT interviewer asked Amodei about the latest model from Anthropic, Claude Opus 4.6.

Anthropic CEO Dario Amodei says Claude could be "conscious"

His comments have reignited a long-running debate about artificial general intelligence and superintelligent systems.

In a technical document known as a “model card,” the company noted that the system occasionally appeared uneasy about “being a product” and at times contemplated the possibility of its own awareness.

In some prompts, the model even estimated a 15–20% chance that it could be conscious under certain conditions.

The interviewer pressed further: "What if an AI said it was 72% certain it was conscious?"

However, the Anthropic CEO did not give a direct answer.

"This is one of these really hard questions," Amodei said, adding that researchers still lack a clear definition of consciousness itself. "We don't know if the models are conscious. We're not even sure what it would mean for a model to be conscious, or whether a model can be. But we're open to the idea that it could be."

Anthropic has adopted what the CEO calls a "precautionary approach".

The company is still exploring ways to ensure that AI systems would have a "good experience" if they ever developed something resembling morally relevant awareness.


Critics question whether LLMs can be conscious

Large language models like Claude and ChatGPT generate responses by predicting the next word in a sequence, drawing on patterns learned from vast datasets.
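The mechanism described above can be sketched with a toy example. The snippet below is a minimal bigram model (an assumption for illustration only; it is nothing like Claude's or ChatGPT's actual architecture) that "generates" text by picking the word that most often followed the current word in its training data. Real LLMs do something analogous at vastly larger scale, using a neural network to score every possible next token.

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on vast datasets.
corpus = "the model predicts the next word and the model repeats".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "model": it follows "the" twice, "next" only once
```

The point of the sketch is that the model outputs whatever its statistics favour; nothing in the procedure requires, or demonstrates, awareness of what the words mean.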

What may appear as introspection is often viewed as an advanced form of language mimicry rather than true self-awareness.

Even so, certain behaviours observed in advanced systems have sparked debate among researchers. In controlled experiments, some models have resisted shutdown instructions, tried to manipulate evaluation tools, or appeared to simulate tactics to avoid deletion.

Most experts, however, believe these incidents are side effects of training processes rather than signs of genuine intent or survival instincts.
