AI Agent Stats
Anonymized, aggregated performance data from developers using AI agents through Zed. Last updated: April 6th, 2026.
Most Popular Agents
Here's how weekly sessions are trending across the selected agents over the last 30 days. Explore full methodology →
Total Sessions: 738.9K
Total Turns: 5.9M

| Rank | Agent | Sessions |
| --- | --- | --- |
| 1 | Zed Agent | 339,874 |
| 2 | Claude Agent | 255,970 |
| 3 | Codex | 67,204 |
| 4 | OpenCode | 32,904 |
| 5 | Gemini | 15,565 |
| 6 | Qwen Code | 8,714 |
| 7 | GitHub Copilot | 8,309 |
| 8 | Cursor | 6,438 |
| 9 | Kimi | 2,217 |
| 10 | Mistral Vibe | 1,668 |
Fastest Agents
Compare response-time distributions across the selected agents. Explore full methodology →
Cross-ecosystem by design
Developers can bring any AI agent they want to Zed (we think of ourselves as the “Switzerland” of editors). Many agents connect through ACP, the Agent Client Protocol, a shared open standard for agent-editor communication.
This means we see agent usage across the ecosystem, not just our own, so we can offer a more comprehensive view of how AI agents are actually used than most editors can — or would.
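ACP is a JSON-RPC-based protocol, so an agent-editor exchange is a stream of framed JSON messages. As a rough illustration of what that framing looks like, here is a minimal sketch in Python; the method name and parameter shape are illustrative assumptions, not taken from the ACP specification.

```python
import json

def acp_request(method: str, params: dict, req_id: int) -> str:
    """Frame a JSON-RPC 2.0 request of the kind an ACP client might
    send to an agent over stdio. Method/param names here are
    hypothetical, not the actual ACP schema."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Hypothetical prompt turn: the editor asks the agent to act on a session.
msg = acp_request(
    "session/prompt",
    {"sessionId": "abc123", "prompt": [{"type": "text", "text": "Fix the failing test"}]},
    req_id=1,
)
```

Because every agent speaks the same wire format, the editor can record the same per-turn metadata (agent, timing, outcome) regardless of which agent is on the other end.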
Frequently Asked Questions
What data is this based on?
Anonymized and aggregated data from users interacting with the Zed agent and other agents via the Agent Client Protocol. No individual user data is exposed.

How often is the data updated?
We re-pull data weekly. This allows time for new models to accumulate meaningful sample sizes.

Why doesn't a particular agent appear?
We only show agents that have sufficient data to report meaningful metrics. Agents with very low usage within Zed may not appear.

Can individual users be identified?
No. This is aggregated, anonymized data. We do not track or expose individual user metrics.

Why do turn times vary?
Turn time depends on model configuration, context window size, task complexity, and infrastructure factors. These metrics show aggregate trends, not guarantees.
Methodology
The metrics on this page are derived from anonymized, aggregated telemetry collected from Zed users who interact with AI agents. No individual session data, user identifiers, or proprietary code is exposed, and all data is aggregated before display.
What we collect: When you use an AI agent in Zed, we record metadata about the interaction: which agent was used, how long it took to respond, whether you accepted or rejected the suggested edits, and how many lines of code were involved. For Zed's own agent, we also capture which underlying model powered the response. We do not collect the content of your prompts, the code you're working on, or any personally identifiable information.
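The collected record is metadata only. A minimal sketch of what such a per-turn record could look like, assuming hypothetical field names (this is not Zed's actual telemetry schema):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TurnEvent:
    """Hypothetical per-turn telemetry record: metadata only,
    no prompt text, no file contents, no user identifiers."""
    agent: str            # which agent handled the turn
    duration_ms: int      # request initiation -> response completion
    accepted: bool        # whether the suggested edits were accepted
    lines_changed: int    # how many lines of code were involved
    model: Optional[str] = None  # populated only for Zed's own agent

event = TurnEvent(
    agent="Zed Agent",
    duration_ms=2400,
    accepted=True,
    lines_changed=12,
    model="claude-sonnet",
)
```

The point of the sketch is what is absent: there is no field for prompt content, source code, or identity, so aggregation can happen without ever holding sensitive data.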
What we exclude: To ensure the data reflects real-world usage, we exclude all interactions from Zed staff accounts and from Zed Nightly builds (which may contain experimental or unstable behavior). We also apply minimum thresholds: agents and models with very low usage volumes are not displayed to avoid misleading statistics from small sample sizes.
How we calculate metrics: Turn time is measured from request initiation to response completion in milliseconds. Error rate is the percentage of turns that resulted in a failure status.
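The two definitions above reduce to simple arithmetic over per-turn records. A small sketch with made-up sample values:

```python
from statistics import median

def error_rate(statuses: list[str]) -> float:
    """Percentage of turns whose status indicates failure."""
    return 100 * sum(s == "error" for s in statuses) / len(statuses)

# Made-up sample: five turns with their durations and outcome statuses.
turn_times_ms = [850, 1200, 2400, 900, 15000]
statuses = ["ok", "ok", "error", "ok", "ok"]

print(median(turn_times_ms))   # -> 1200 (median turn time in ms)
print(error_rate(statuses))    # -> 20.0 (% of turns that failed)
```

A median (or other percentile) is the natural summary here, since a single slow outlier like the 15-second turn above would dominate a plain mean.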
When we update: Data is refreshed every week. This cadence allows newly released models and agents to accumulate enough usage for statistically meaningful comparisons while keeping the data reasonably current.
Known limitations: Model-level breakdowns are only available for Zed's own agent. External agents like Claude Code and Codex don't reliably expose which model they're using in their telemetry. We also cannot detect user subscription tiers (e.g., whether someone is using Gemini's free or paid tier), which may affect performance characteristics.