The Xcode 26.3 Release Candidate is available now on Apple’s developer website, with a broader App Store release coming soon.
This update builds on last year’s Xcode 26 launch, which introduced basic support for ChatGPT and Claude. This time, Apple is going further by enabling agentic tools—AI models that don’t just answer questions, but actively explore projects, write code, run tests, and fix issues with minimal human intervention.
What Agentic Coding Unlocks
By integrating agentic tools like Claude Agent and OpenAI Codex, Xcode now allows AI models to tap into a much wider range of IDE capabilities. These agents can:
- Explore a project’s structure and metadata
- Build projects and run tests
- Identify errors and automatically fix them
- Reference Apple’s latest developer documentation to use current APIs and best practices
Apple says the agents were designed to handle more complex workflows and automation than earlier AI integrations.
To make this work smoothly, Apple collaborated closely with both Anthropic and OpenAI, optimizing token usage and tool-calling so the agents run efficiently inside Xcode.
Powered by MCP
Under the hood, Xcode uses MCP (Model Context Protocol) to expose its features to AI agents. This makes Xcode compatible with any MCP-enabled agent, not just those from Apple’s launch partners.
Through MCP, agents can handle tasks like project discovery, file management, code changes, previews, snippets, and documentation access—all directly within the IDE.
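The discovery-then-invoke flow above follows the public MCP spec, which carries requests as JSON-RPC 2.0 messages: an agent first calls `tools/list` to learn what the host exposes, then `tools/call` to use a tool. Here is a minimal sketch of those two messages; the tool name `build_project` is hypothetical, not a documented Xcode tool, and the transport between Xcode and the agent is not shown.

```python
import json

def mcp_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP clients send."""
    message = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        message["params"] = params
    return json.dumps(message)

# Step 1: the agent discovers which tools the host (here, the IDE) exposes.
print(mcp_request(1, "tools/list"))

# Step 2: the agent invokes one of them. "build_project" is illustrative only.
print(mcp_request(2, "tools/call", {"name": "build_project", "arguments": {}}))
```

Because every MCP server answers the same `tools/list` and `tools/call` methods, any MCP-enabled agent can drive Xcode the same way it drives any other tool host.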
How Developers Use It
Getting started is straightforward. Developers can download AI agents from Xcode’s settings, then connect their accounts by signing in or adding an API key. From there, they can choose which model version to use—for example, GPT-5.2-Codex or GPT-5.1 mini—via a simple drop-down menu.
On the left side of Xcode, a prompt box lets developers describe what they want to build or change using natural language. They might ask the agent to add a new feature using an Apple framework, define how it should look, or specify how it should behave.
As the agent works, it breaks the task into smaller steps so developers can follow along. Code changes are visually highlighted, and a live transcript explains what the agent is doing behind the scenes—including which documentation it’s referencing before it writes any code.
Built for Learning and Transparency
Apple believes this level of transparency will be especially helpful for newer developers. To support that goal, the company is hosting a live “code-along” workshop on its developer site, where participants can follow along in real time using their own copy of Xcode.
Once the agent finishes its work, it verifies that the code behaves as expected by running tests. If issues show up, the agent can iterate further and refine the solution. Apple notes that prompting the agent to plan its approach before writing code often leads to better results.
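That verify-and-iterate behavior boils down to a simple loop: run the tests, and if failures remain, refine the change and try again within some retry budget. The sketch below illustrates the shape of that loop under stated assumptions; `run_tests` and `refine` stand in for the real agent's tooling and are purely illustrative, not Xcode API.

```python
def agentic_fix_loop(run_tests, refine, max_attempts=3):
    """Run tests, refine on failure, repeat; return the attempt that passed."""
    for attempt in range(1, max_attempts + 1):
        failures = run_tests()       # e.g. build the project and run the suite
        if not failures:
            return attempt           # code verified; report attempts used
        refine(failures)             # feed failures back for another pass
    raise RuntimeError("test failures remain after retry budget")

# Toy usage: a "project" whose two failing tests each need one refinement pass.
remaining = ["testLayout", "testSave"]
attempts = agentic_fix_loop(
    run_tests=lambda: list(remaining),
    refine=lambda failures: remaining.pop(),
)
print(attempts)  # → 3: two refinement passes, then a clean run
```

The retry budget matters: without a cap, an agent chasing a test it cannot fix would loop forever, which is also why Apple's advice to have the agent plan before coding tends to reduce the number of passes needed.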
And if developers don’t like the outcome? No problem. Xcode automatically creates milestones for every agent-driven change, making it easy to roll back to an earlier version at any time.