Today we're launching Agent Stats: a public, weekly view of AI agents' adoption and turn times inside Zed. You can compare session counts, turn volume, and response time distributions (p10, p50, p90) across agents, filter by time range, and see how individual models are trending within Zed's own agent. Because Zed supports third-party agents through the open Agent Client Protocol, this tool can show activity across the broader agent ecosystem, not just our own.
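The p10/p50/p90 figures are plain percentile cuts over turn response times. As a minimal sketch of how such a summary is computed (the sample times below are invented for illustration, not real Agent Stats data):

```python
def percentile(samples, p):
    """Approximate percentile: the sample at p% of the way through the sorted list."""
    ranked = sorted(samples)
    # Map p (0-100) to an index, clamped to a valid position.
    k = max(0, min(len(ranked) - 1, round(p / 100 * (len(ranked) - 1))))
    return ranked[k]

# Hypothetical per-turn response times, in seconds.
turn_times_s = [12.0, 18.5, 31.0, 47.4, 55.2, 61.0, 88.3, 120.0, 294.0, 410.0]

summary = {p: percentile(turn_times_s, p) for p in (10, 50, 90)}
print(summary)
```

Production systems typically use interpolated percentiles (e.g. Python's `statistics.quantiles`), but the idea is the same: p90 is the time that 90% of turns beat.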
Chart: Top 3 Agents in Zed by Session (Zed Agent, Claude Agent, Codex)
A few things to know before you dig in: this is data from agents running inside Zed, and only from users who opted into data collection, so it reflects trends in our user base rather than the ecosystem at large. Model-level breakdowns are only available for Zed's own agent. (It’s unusual for an editor to both have this data and publish it. We’re in a position to do that because locking you into one AI provider is not central to our revenue strategy.)
Publishing this data doesn't change anything about how Zed handles your information. We've always collected anonymized, aggregated telemetry to understand how the editor is being used; what's new is that we're making the aggregate view public. Your prompts, your code, and your identity are never part of what we collect or share.
The dataset says a lot on its own (and it will only get more interesting as it matures), but a few patterns stood out:
Claude Sonnet’s p90 Latency Rose 44% in Three Weeks
Within Zed's native agent, claude-sonnet-4-6 is the default model for many users, and its tail latencies have kept climbing:
| Week | Sonnet p50 | Sonnet p90 | Opus p90 |
|---|---|---|---|
| Mar 8 | 47.4s | 294s | 362s |
| Mar 15 | 50.0s | 335s | 338s |
| Mar 22 | 52.1s | 391s | 393s |
| Mar 29 | 54.5s | 425s | 435s |
That is a 44% increase in Sonnet's p90 over three weeks, from 294s to 425s. claude-opus-4-6's p90 reached 435s over the same stretch.
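A quick back-of-the-envelope check of the headline number, using the Sonnet p90 values from the table:

```python
# Sonnet p90 latency (seconds) by week, copied from the table above.
sonnet_p90 = {"Mar 8": 294, "Mar 15": 335, "Mar 22": 391, "Mar 29": 425}

first, last = sonnet_p90["Mar 8"], sonnet_p90["Mar 29"]
pct_increase = (last - first) / first * 100
print(f"{pct_increase:.1f}%")  # prints "44.6%", the ~44% quoted above
```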
Meanwhile, gpt-5-4 still looks materially faster over the same period, with a p50 in the low 30-second range by Mar 29. This is exactly the kind of shift this tool makes easy to spot.
We can't say from this data whether Claude's trend reflects infrastructure load, changes in how the model handles longer contexts, or something else entirely. But four consecutive weekly readings moving in the same direction is worth watching.
Zed Agent Leads for Sessions, but Claude Code Has More Depth
Across 2 million sessions and 15.4 million turns in the last 90 days, the platform average is 7.6 turns per session.
While Zed's native agent leads on sessions (939K vs. 703K), Claude Code leads decisively on depth (10.6 turns per session vs. 5.5 for Zed's native agent). Session totals show how often a tool gets opened; turns per session reveals how long people stay with it once they start.
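The depth metric is just turns divided by sessions. A small sketch using the counts quoted above; the turn totals are back-derived from the published averages, so they are approximations, not official figures:

```python
# Sessions are from the post; turn totals are reconstructed from the
# published turns-per-session averages, so treat them as approximate.
agents = {
    "Zed Agent":   {"sessions": 939_000, "turns": 5_165_000},
    "Claude Code": {"sessions": 703_000, "turns": 7_452_000},
}

for name, a in agents.items():
    depth = a["turns"] / a["sessions"]
    print(f"{name}: {depth:.1f} turns/session")
```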
536 distinct agents appear in the data, but the top three account for 92% of turns. A long tail exists; it is just not where most of the work is happening.
Another Week, Another GPT-5 Variant
One thing this tool makes very obvious is how unstable GPT-5 labeling currently is. The model data surfaces 10 distinct GPT-5 variants in active use: gpt-5, gpt-5-mini, gpt-5-1, gpt-5-2, gpt-5-3-codex, gpt-5-4, gpt-5-codex, gpt-5-1-codex, gpt-5-1-codex-max, and gpt-5-2-codex.
I wouldn't read a complete theory of OpenAI's release process into that. Still, the naming pattern suggests at least two active tracks, a base model line and a coding-specific line, and the cadence looks fast: gpt-5-4 went from 6,300 to 26,800 turns in one week.
If you're evaluating GPT-5 variants, assume the target is moving.
What Agent Stats Can and Can't Tell You
This tool is good at surfacing usage patterns, concentration, and latency movement. It is not a verdict on which agent is best.
Session counts and turn counts do not measure outcome quality. A shorter session might produce better code. A faster model might still struggle on hard problems. We're measuring behavior and response time, not whether an agent actually helped.
We'll keep publishing this data weekly. If you want to explore the numbers yourself, head to zed.dev/agent-metrics. And if you're building an agent and want to show up here, check out the Agent Client Protocol; any agent that implements ACP can run inside Zed.