If it felt like OpenAI and Anthropic were moving in lockstep this week, that’s because they were. Within an hour of each other, the two companies rolled out major updates to their flagship frontier models—GPT‑5.3-Codex and Claude Opus 4.6—making it abundantly clear that both are sprinting toward the same finish line: enterprise‑grade AI.
Anthropic first dropped Claude Opus 4.6, a broad‑range enterprise model built for heavy lifting. Thanks to a 1‑million‑token context window, it can chew through enormous documents and codebases without breaking a sweat. The update also introduced “agent teams”—multiple Claude agents that can divide and conquer big engineering or analysis tasks. With this release, Anthropic is pushing toward systems that behave less like chatbots and more like highly coordinated digital coworkers.
OpenAI followed with GPT‑5.3-Codex, its “most capable agentic coding model” so far. According to OpenAI, Codex now runs about 25% faster and handles long‑running developer and ops workflows with much more autonomy. OpenAI simultaneously launched OpenAI Frontier, a full enterprise platform for building, deploying and managing AI agents across internal business systems.
All of this unfolded days after Anthropic rolled out ads ahead of Sunday’s big game, pitching Claude as an ad-free sanctuary and publishing a statement explaining the decision. This came on the heels of OpenAI’s announcement last month that it would start testing ads in ChatGPT.
But perhaps the most interesting development isn’t any single feature, benchmark or advertisement—it’s the shared emphasis on agents, orchestration layers and tooling, according to Mihai Criveti, Distinguished Engineer for Agentic AI at IBM. “A lot of the recent progress has not been on the models themselves,” he told IBM Think in an interview just as the announcements hit X. The models are incrementally better, yes, but “the real progress has been on the tooling, on the prompts, on the agents, on the MCP servers.” In other words, raw model intelligence is no longer the main event. Infrastructure is.
Ultimately, the real story is the question enterprises are watching closely: which company can deliver not just the smartest model, but the most trustworthy, scalable AI infrastructure to run their businesses on.
While the focus on OpenAI and Anthropic this week reflects where momentum is visible, enterprise AI adoption has long been driven by vendors and partnerships centered on governance, security and reliability. Late last year, for example, IBM and Anthropic formalized a partnership to infuse Claude into IBM’s software portfolio with an explicit focus on enterprise‑safe AI. In parallel, IBM partners with Microsoft, ServiceNow and many others working to take AI from demo to production at scale.
In this bigger arena, OpenAI and Anthropic are running fast; they’re also finding their place in an ecosystem that’s been quietly laying the secure AI enterprise groundwork for years.