
Anthropic Launches Claude Opus 4.7 AI Model, Focusing on Coding, Visual Tasks, and Cybersecurity Guardrails
- By John K. Waters
- 04/21/26
Anthropic has introduced Claude Opus 4.7, an upgraded large language model that it says outperforms its predecessor on software engineering tasks, image analysis, and multi-step autonomous work, while keeping pricing at $5 per million input tokens and $25 per million output tokens.
The model is now generally available across Anthropic’s own products and through its API, as well as on Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
Anthropic said the upgrade delivers the most pronounced gains on demanding coding tasks. Users report being able to hand off difficult coding work that previously required close supervision, with the new model handling complex, long-running tasks more consistently and following instructions more closely.
The company also said the model can verify its own outputs before reporting results to users, a behavior it described as new relative to earlier versions.
On vision, Opus 4.7 can now accept images up to 2,576 pixels on the long edge, roughly 3.75 megapixels, more than three times the resolution supported by prior Claude models.
Anthropic said this expands the model’s usefulness for tasks requiring fine visual detail, including reading dense screenshots and extracting data from complex diagrams.
Perhaps the most notable aspect of the release is its role in Anthropic’s broader security rollout strategy. The company recently announced Project Glasswing, which highlighted both the risks and potential benefits of AI for cybersecurity, and said that it would keep its more powerful Claude Mythos Preview model restricted while testing new cyber safeguards on less-capable systems first. Opus 4.7 is the first such model.
Anthropic said it experimented during training by selectively lowering Opus 4.7’s cybersecurity capabilities, and is releasing the model with automated safeguards designed to detect and block requests that indicate prohibited or high-risk cybersecurity uses.
The company added that findings from this deployment will inform its eventual broader release of what it calls “Mythos-class” models. Security professionals seeking to use the new model for legitimate purposes, such as vulnerability research or penetration testing, can apply through a new Cyber Verification Program.
On alignment, Anthropic’s evaluations show that Opus 4.7 displays low rates of concerning behavior, such as deception, sycophancy, and cooperation with misuse, and performs better than its predecessor on honesty and resistance to malicious prompt-injection attacks. However, the company acknowledged the model is modestly weaker in some areas, including a tendency to provide overly detailed harm-reduction advice on illegal drugs.
Anthropic’s internal alignment evaluation described the model as “largely well-aligned and trustworthy, though not entirely perfect in its behavior,” and noted that Mythos Preview remains the best-aligned model the company has trained.
Developers upgrading from Opus 4.6 should account for two cost-related changes. Opus 4.7 uses an updated tokenizer that can map the same input to approximately 1.0 to 1.35 times as many tokens, depending on content type. The model also produces more output tokens at higher effort levels, particularly in later turns of agentic tasks, because it engages in more reasoning.
Anthropic said users can manage token consumption through an effort parameter, task budgets, or by prompting the model to be more concise.
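Taken together, the tokenizer change and the extra reasoning output compound. A minimal back-of-the-envelope sketch of the cost impact, using the per-token prices quoted above; the workload figures and the output-growth factor are illustrative assumptions, not numbers from Anthropic:

```python
# Rough cost estimate for migrating a workload from Opus 4.6 to Opus 4.7.
# Prices come from the article ($5/M input, $25/M output). The tokenizer
# multiplier range (1.0-1.35x) is the one Anthropic quotes; the output
# growth factor is a hypothetical placeholder for effort-driven reasoning.

INPUT_PRICE_PER_M = 5.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 25.00  # USD per million output tokens

def monthly_cost(input_tokens: int, output_tokens: int,
                 tokenizer_mult: float = 1.0,
                 output_growth: float = 1.0) -> float:
    """Cost in USD after applying the two Opus 4.7 adjustments."""
    effective_in = input_tokens * tokenizer_mult
    effective_out = output_tokens * output_growth
    return (effective_in / 1e6) * INPUT_PRICE_PER_M \
         + (effective_out / 1e6) * OUTPUT_PRICE_PER_M

# Illustrative workload: 100M input / 20M output tokens per month.
baseline = monthly_cost(100_000_000, 20_000_000)
worst_case = monthly_cost(100_000_000, 20_000_000,
                          tokenizer_mult=1.35,   # upper end of quoted range
                          output_growth=1.2)     # assumed reasoning overhead
print(f"baseline: ${baseline:.2f}, worst case: ${worst_case:.2f}")
```

On this assumed workload the same traffic moves from $1,000 to $1,275 per month, which is why Anthropic points developers to the effort parameter and task budgets as levers.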
Alongside the model release, Anthropic introduced a new “xhigh” effort level, sitting between the existing “high” and “max” settings and giving developers finer control over the tradeoff between reasoning depth and latency. In Claude Code, the default effort level has been raised to “xhigh” for all plans.
The company also released task budgets in public beta on its API platform, and added a new “/ultrareview” command in Claude Code that reviews code changes and flags bugs and style concerns.
For more information, visit the Anthropic site.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than 20 years, and he’s written more than a dozen books. He also co-scripted the documentary Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].