The race between OpenAI’s Codex and Anthropic’s Claude Code shows that AI coding has moved beyond autocomplete into delegated engineering. OpenAI describes Codex as a “command center” for multi-agent coding: with cloud environments and built-in worktrees, several agents can work in parallel, and its new Skills and Automations let teams turn repeated workflows—such as issue triage or summarizing CI failures—into background jobs. OpenAI also says that more than a million developers used Codex in the past month, suggesting that agentic coding is becoming mainstream rather than merely experimental. (openai.com)
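The worktree mechanism OpenAI mentions has a familiar local analogue: plain git worktrees give each agent an isolated checkout of the same repository, sharing one object store so parallel edits never collide. A minimal sketch of that pattern (the paths and branch names here are hypothetical, and Codex manages this automatically in its cloud environments rather than by hand):

```shell
# Local sketch of the parallel-agent worktree pattern: one repo,
# one checkout per agent, each on its own branch.
mkdir -p /tmp/worktree-demo && cd /tmp/worktree-demo
git init -q repo && cd repo
git config user.email "agent@example.com"   # identity needed to commit
git config user.name "Agent"
git commit -q --allow-empty -m "initial commit"

# Each agent gets its own branch checked out in its own directory;
# both directories share the main repo's object store.
git worktree add -b agent/fix-flaky-test ../agent-a
git worktree add -b agent/triage-issues  ../agent-b
git worktree list
```

Each directory behaves like a full checkout, so two agents can run tests and make commits simultaneously without stepping on each other's working trees.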
Claude Code comes at the problem from a different angle. Anthropic presents it as an agentic coding system that lives in the terminal but also extends to IDEs, desktop, and the browser. Its documentation says it can read a codebase, edit files, run commands, create commits and pull requests, automate review and triage through GitHub Actions or GitLab CI/CD, and even turn Slack bug reports into pull requests. Anthropic goes further: inside the company, it says, the majority of code is now written by Claude Code, while engineers spend more of their time on architecture, product judgment, and orchestrating multiple agents in parallel. (docs.anthropic.com)
What changes for developers? Probably not the need for human skill, but the meaning of that skill. Both companies emphasize control: OpenAI says Codex is designed for verification through citations, terminal logs, test results, and sandboxed permissions, while Anthropic says Claude Code requires explicit approval before modifying files or running commands. The likely result is a new style of software work in which engineers spend less time typing every line themselves and more time framing problems clearly, supervising parallel agents, reviewing diffs, and deciding what is safe and worth shipping. In that sense, software development is starting to feel less like solitary writing and more like conducting an orchestra. (openai.com)
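The approval model both vendors describe reduces to a simple pattern: a command an agent proposes executes only if a human-maintained policy allows it; anything else is blocked and escalated. A conceptual sketch of that gate in shell, with a hypothetical allowlist; this is an illustration of the pattern, not either product's actual mechanism:

```shell
# Conceptual approval gate: a proposed command runs only if it
# matches an operator-approved allowlist; otherwise it is blocked.
ALLOWED="echo|ls|pytest"

run_if_approved() {
  local cmd="$1"
  if echo "$cmd" | grep -Eq "^($ALLOWED)( |$)"; then
    echo "APPROVED: $cmd"
    eval "$cmd"
  else
    echo "BLOCKED: $cmd (needs explicit human approval)"
    return 1
  fi
}

run_if_approved "echo running approved command"
run_if_approved "rm -rf build" || true   # blocked; escalate to a human
```

Real systems layer sandboxing and per-session permissions on top of this, but the core contract is the same: the agent proposes, the policy (and ultimately the human) disposes.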