
ZDNET’s key takeaways
- Anthropic launched Claude Opus 4.1.
- The model exceeds its predecessor's performance on complex tasks.
- It's available to paid Claude users, as well as in Claude Code, the API, Amazon Bedrock, and Google Cloud's Vertex AI.
In May, Anthropic launched Claude Opus 4, which the company dubbed its most powerful model yet and the best coding model in the world. Only three months later, Anthropic is upping the ante by launching the highly anticipated Claude Opus 4.1, which now takes its predecessor's crown as Anthropic's most advanced model.
The Opus family is the company's most advanced, intelligent line of AI models, geared toward tackling complex problems. Claude Opus 4.1, launched on Tuesday, excels at those tasks and even one-ups its predecessor on agentic tasks, real-world coding, and reasoning, according to Anthropic.
The model also arrives as the industry anticipates the imminent launch of OpenAI's GPT-5.
Also: OpenAI could launch GPT-5 any minute now – what to expect
How does Claude Opus 4.1 perform?
One of the most impressive showings from Claude Opus 4 was its performance on SWE-bench Verified, a human-filtered subset of SWE-bench, a benchmark that evaluates LLMs' ability to solve real-world software engineering tasks sourced from GitHub. Claude Opus 4's performance on SWE-bench Verified supported the claim that it was the "best coding model in the world." As seen in the post above, Opus 4.1 scored even higher.
Claude Opus 4.1 also swept its predecessors across the benchmark board, including MMMLU, which tests multilingual capabilities; AIME 2025, which tests rigor on high-school math competition questions; GPQA, which tests performance on graduate-level reasoning prompts; and more. When pitted against rivals' reasoning models, including OpenAI o3 and Gemini 2.5 Pro, it outperforms them on various benchmarks, including SWE-bench Verified.
With the release, Anthropic also posted the model's system card, which details all the safety tests and evaluations it performed on the model, as well as its weaknesses, risks, and limitations. A quick review of the 22-page document shows that the model was deployed under the AI Safety Level 3 (ASL-3) Standard of Anthropic's Responsible Scaling Policy (RSP), and that it's still prone to many of the same vulnerabilities.
How can you access Claude Opus 4.1?
If you want to try the model for yourself, it's now available to everyone through the paid Claude plans, which include Claude Pro for $20 per month and Claude Max for $100 per month. It's also available in Claude Code, the API, Amazon Bedrock, and Google Cloud's Vertex AI.
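For developers going the API route, calling the new model takes only a few lines with the Anthropic Python SDK. The snippet below is a minimal sketch, not Anthropic's documentation: the model identifier and the prompt are assumptions, so check Anthropic's model list for the exact string before using it.

```python
# Minimal sketch using the Anthropic Python SDK (pip install anthropic).
# The model ID below is an assumption -- confirm the exact identifier in Anthropic's docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-1",  # assumed identifier for Claude Opus 4.1
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Refactor this function to remove the nested loops."}
    ],
)

# The response content is a list of blocks; print the text of the first one.
print(message.content[0].text)
```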