Last week, Anthropic gathered twelve of the world’s largest technology companies to share an uncomfortable finding. Its most powerful AI model had spent several weeks autonomously identifying security flaws in widely used software, including vulnerabilities that had gone undetected for nearly three decades.
That disclosure came alongside the general release of Claude Opus 4.7. Anthropic is using the newer model to test the security controls it needs before it can responsibly release the more capable one. For enterprise buyers, both developments matter.
Research from Gravitee, published in February 2026, found that 81% of enterprise teams have moved past the planning phase for AI agents. Yet only 14.4% have full security or IT approval for the agents they run. That governance gap looks considerably more serious in light of what Anthropic disclosed this week.
What Opus 4.7 changes for enterprise teams
The core problem with running AI agents at scale has always been reliability. Models that drop context between sessions, stall on complex tasks, or need supervising at every step consume more time than they save.
Opus 4.7 addresses several of those issues. It checks its own outputs before reporting back, retains context across sessions, and follows instructions more precisely than its predecessor. For teams running multi-day workflows, that context retention matters most. Re-establishing background at the start of each session is a real operational cost that most productivity assessments overlook.
Enterprise testers reported measurable gains. Notion saw a 14% improvement on complex multi-step workflows with a third fewer tool errors. They also said it was the first model to pass their implicit-need tests, where the model works out requirements without explicit instruction. Ramp found it needed far less step-by-step guidance across tasks spanning multiple tools and codebases.
Image resolution has increased to more than three times that of earlier Claude models. That makes document processing and dense interface work more practical. Those running Claude inside Microsoft 365 will see that improvement across Teams, Outlook, and OneDrive workflows. Pricing remains at $5 per million input tokens and $25 per million output tokens.
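At those published rates, budgeting is simple arithmetic. A minimal sketch, using the $5/$25 per-million-token pricing above; the workload figures are hypothetical placeholders, not usage data from the article:

```python
# Rough monthly spend estimate at Opus 4.7's published rates:
# $5 per million input tokens, $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Return estimated monthly spend in dollars for a given token volume."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical workload: 200M input tokens, 40M output tokens per month.
print(f"${monthly_cost(200_000_000, 40_000_000):,.2f}")  # $2,000.00
```

Note the 5:1 output-to-input price ratio: for agentic workloads that generate long outputs, output tokens will usually dominate the bill.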
The security finding every IT leader needs to read
Using Claude Mythos Preview, Anthropic autonomously found thousands of critical zero-day vulnerabilities. These spanned every major operating system and web browser. One was a 27-year-old flaw in OpenBSD that let attackers remotely crash machines. Another was a bug in FFmpeg that automated testing tools had run five million times without flagging. Maintainers have now fixed all of them.
As UC Today covered separately this week, the significance is not the individual bugs. It’s that a capable AI model can now find serious vulnerabilities at scale, autonomously, and faster than any existing testing process. The average cost of a data breach stands at $4.4 million. Unified communications environments, built on browsers, shared media libraries, APIs, and virtualised infrastructure, sit squarely in scope.
Project Glasswing, Anthropic’s response, brings together AWS, Cisco, CrowdStrike, Google, Microsoft, Palo Alto Networks, and others. The group committed $100M in model credits to scanning and hardening critical software infrastructure. They also directed a further $4M to open-source security organisations. Microsoft, which has been building its own AI security agent infrastructure in parallel, joined as a founding member.
Opus 4.7 is the first Claude model to ship with automated safeguards that block high-risk cybersecurity uses. Anthropic describes it as a test bed for the controls needed before Mythos-class models can reach a wider audience. Security professionals with legitimate requirements can apply through the new Cyber Verification Programme.
Deloitte’s 2026 enterprise AI report found that just one in five companies has a mature governance model for autonomous AI agents. For IT and security leads, that figure and this week’s news belong in the same conversation.