Dario Amodei

Anthropic CEO claims AI models hallucinate less than humans | TechCrunch

Anthropic CEO Dario Amodei believes that today’s AI models hallucinate, or make things up and present them as if they were true, at a lower rate than humans do, he said during a press briefing at Anthropic’s first developer event in San Francisco on Thursday.

Amodei made the claim in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, that is, AI systems with human-level intelligence or better.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to a question from TechCrunch.

Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models reaching AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”

“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucinations present a major obstacle to reaching AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to create citations in a filing; the AI chatbot hallucinated and got names and titles wrong.

It is difficult to verify Amodei’s claim, largely because most hallucination benchmarks pit AI models against each other; they do not compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.

However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning models. OpenAI’s o3 and o4-mini models have higher hallucination rates than the company’s previous-generation reasoning models, and OpenAI does not really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not, according to Amodei, a knock on its intelligence. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.

In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive people, a problem that seemed particularly prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute that tested the model, found that an early version of Claude Opus 4 showed a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest that Anthropic should not have released that early model. Anthropic said it came up with mitigations that appeared to address the issues Apollo raised.

Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.
