[Image: Anthropic CEO Dario Amodei]
In a recent interview with the New York Times, Dario Amodei, CEO of Anthropic, acknowledged that researchers are not sure whether advanced AI systems could develop something resembling consciousness. The NYT interviewer asked Amodei about the latest model from Anthropic, Claude Opus 4.6.
Anthropic CEO Dario Amodei says Claude could be "conscious"
His comments have reignited a long-running debate over whether artificial general intelligence or a superintelligent system could ever be conscious.
In a technical document known as a “model card,” the company noted that the system occasionally appeared uneasy about “being a product” and at times contemplated the possibility of its own awareness.
In some prompts, the model even estimated a 15–20% chance that it could be conscious under certain conditions.
The interviewer pressed further: "What if an AI said it was 72% certain it was conscious?"
However, the Anthropic CEO did not give a direct answer.
"This is one of these really hard questions," Amodei said, adding that researchers still lack a clear definition of consciousness itself. "We don't know if the models are conscious. We're not even sure what it would mean for a model to be conscious, or whether a model can be. But we're open to the idea that it could be."
Anthropic has adopted what the CEO calls a "precautionary approach".
The company is still exploring ways to ensure that AI systems would have a "good experience" if they ever developed something resembling morally relevant awareness.
Critics probing LLMs
Large language models like Claude and ChatGPT generate responses by predicting the next word in a sequence, drawing on patterns learned from vast datasets.
What may appear to be introspection, critics argue, is an advanced form of language mimicry rather than true self-awareness.
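For readers curious what "predicting the next word" means in practice, here is a minimal sketch using a toy, hand-written bigram table; the vocabulary, probabilities, and function names are illustrative assumptions, and real models like Claude learn vastly richer distributions with neural networks rather than lookup tables.

```python
# A minimal sketch of next-word prediction, the core mechanism behind
# large language models. The words and probabilities below are toy
# values chosen for illustration, not real model weights.
import random

# Hypothetical bigram table: P(next word | current word). A real model
# learns these patterns from vast text datasets instead of a hand-made dict.
BIGRAMS = {
    "the":   {"cat": 0.5, "model": 0.3, "answer": 0.2},
    "cat":   {"sat": 0.7, "is": 0.3},
    "model": {"is": 0.6, "predicts": 0.4},
    "is":    {"conscious": 0.1, "a": 0.9},
}

def next_word(current: str) -> str:
    """Sample the next word from the learned distribution."""
    dist = BIGRAMS.get(current, {"<end>": 1.0})
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate(start: str, max_len: int = 6) -> str:
    """Repeatedly predict the next word until the chain runs out."""
    words = [start]
    for _ in range(max_len):
        nxt = next_word(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the model is a"
```

Even sentences that sound reflective come out of this same sampling loop, which is why critics hesitate to read self-awareness into them.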
Even so, certain behaviours observed in advanced systems have sparked debate among researchers. In controlled experiments, some models have resisted shutdown instructions, tried to manipulate evaluation tools, or appeared to simulate tactics to avoid deletion.
Most experts, however, believe these incidents are side effects of training processes rather than signs of genuine intent or survival instincts.

