AI Agent Stats

Anonymized, aggregated performance data from developers using AI agents through Zed.

Last updated: April 6th, 2026


Most Popular Agents

Here's how weekly sessions are trending across the selected agents over the last 30 days. Explore full methodology →

Total Sessions: 738.9K · Total Turns: 5.9M

Rank  Agent            Sessions
1     Zed Agent         339,874
2     Claude Agent      255,970
3     Codex              67,204
4     OpenCode           32,904
5     Gemini             15,565
6     Qwen Code           8,714
7     GitHub Copilot      8,309
8     Cursor              6,438
9     Kimi                2,217
10    Mistral Vibe        1,668

Fastest Agents

Compare response-time distributions across the selected agents. Explore full methodology →

Turn Time

The time, in seconds, it takes for an agent or model to respond, measured from request timestamp to completion timestamp. Excludes user think time and measures responsiveness only.

Agent            Vendor         p10      p50      p90
Zed Agent        Zed            6.7s     36.0s    272.0s
Claude Agent     Anthropic      6.5s     34.5s    281.5s
Codex            OpenAI         7.7s     46.0s    326.3s
OpenCode         Anomaly        7.4s     42.8s    288.8s
Gemini           Google         7.0s     61.7s    388.1s
Qwen Code        Alibaba       12.1s     77.0s    468.5s
GitHub Copilot   GitHub         6.8s     46.3s    422.5s
Cursor           Cursor         9.9s     43.7s    251.3s
Kimi             Moonshot AI    8.5s     50.0s    303.2s
Mistral Vibe     Mistral AI     2.9s     21.8s    145.1s
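
For readers curious how summaries like these are produced, percentile summaries of turn durations can be computed as in the sketch below. This is a generic nearest-rank implementation for illustration only, not Zed's actual aggregation pipeline.

```typescript
// Generic sketch: p10/p50/p90 of turn durations (seconds) via the
// nearest-rank percentile method. Not Zed's actual pipeline.
function percentile(sorted: number[], p: number): number {
  // Smallest value such that at least p% of samples are at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

function summarize(durationsSec: number[]) {
  const sorted = [...durationsSec].sort((a, b) => a - b);
  return {
    p10: percentile(sorted, 10),
    p50: percentile(sorted, 50),
    p90: percentile(sorted, 90),
  };
}

// Example: a handful of turn times, in seconds.
console.log(summarize([3.2, 6.7, 12.0, 36.0, 48.5, 91.3, 272.0]));
// => { p10: 3.2, p50: 36, p90: 272 }
```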

Cross-ecosystem by design

Developers can bring any AI agent they want to Zed (we think of ourselves as the “Switzerland” of editors). Many agents connect through ACP, the Agent Client Protocol, a shared open standard for agent-editor communication.
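
For a sense of what ACP traffic looks like, here is a rough sketch of the kind of JSON-RPC messages an editor exchanges with an agent. The method and field names below follow the general shape of the protocol but are simplified for illustration; consult the ACP spec for the actual schema.

```typescript
// Illustrative ACP-style messages (ACP runs JSON-RPC over stdio).
// Names are simplified for illustration; see the ACP spec for the real schema.

// Editor -> agent: open a new session in the current project.
const newSession = {
  jsonrpc: "2.0",
  id: 1,
  method: "session/new",
  params: { cwd: "/home/me/project", mcpServers: [] },
};

// Editor -> agent: forward the user's prompt as one turn.
const promptTurn = {
  jsonrpc: "2.0",
  id: 2,
  method: "session/prompt",
  params: {
    sessionId: "sess-123",
    prompt: [{ type: "text", text: "Refactor this function." }],
  },
};

console.log(JSON.stringify(newSession), JSON.stringify(promptTurn));
```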

This means we see agent usage across the ecosystem, not just our own. We can offer a more comprehensive view of how AI agents are actually used than most editors can — or would.


Methodology

The metrics on this page are derived from anonymized, aggregated telemetry collected from Zed users who interact with AI agents. No individual session data, user identifiers, or proprietary code is exposed, and all data is aggregated before display.

What we collect: When you use an AI agent in Zed, we record metadata about the interaction: which agent was used, how long it took to respond, whether you accepted or rejected the suggested edits, and how many lines of code were involved. For Zed's own agent, we also capture which underlying model powered the response. We do not collect the content of your prompts, the code you're working on, or any personally identifiable information.
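
To make that concrete, an anonymized interaction event of the kind described might look roughly like the record below. The field names are hypothetical, invented for illustration; they are not Zed's actual telemetry schema.

```typescript
// Hypothetical shape of one anonymized agent-interaction event.
// All field names are illustrative, not Zed's real telemetry schema.
interface AgentTurnEvent {
  agent: string;          // e.g. "Zed Agent", "Claude Agent"
  model?: string;         // populated only for Zed's own agent
  turnTimeMs: number;     // request initiation -> response completion
  editsAccepted: boolean; // did the user keep the suggested edits?
  linesChanged: number;   // how many lines of code were involved
  // Deliberately absent: prompt text, code content, user identifiers.
}
```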

What we exclude: To ensure the data reflects real-world usage, we exclude all interactions from Zed staff accounts and from Zed Nightly builds (which may contain experimental or unstable behavior). We also apply minimum thresholds: agents and models with very low usage volumes are not displayed to avoid misleading statistics from small sample sizes.
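
A minimal sketch of those exclusion rules, assuming hypothetical `isStaff` and `channel` fields and an invented volume threshold:

```typescript
// Sketch of the exclusions described above. The fields and the
// threshold are assumptions for illustration, not Zed's real values.
interface RawEvent {
  agent: string;
  isStaff: boolean;                          // Zed staff account?
  channel: "stable" | "preview" | "nightly"; // release channel
}

const MIN_SESSIONS = 1_000; // illustrative minimum-volume cutoff

const keep = (e: RawEvent) => !e.isStaff && e.channel !== "nightly";

function displayableAgents(sessionCounts: Map<string, number>): string[] {
  // Hide agents below the threshold to avoid small-sample noise.
  return [...sessionCounts.entries()]
    .filter(([, n]) => n >= MIN_SESSIONS)
    .map(([agent]) => agent);
}
```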

How we calculate metrics: Turn time is measured from request initiation to response completion; it is recorded in milliseconds and reported above in seconds. Error rate is the percentage of turns that resulted in a failure status.
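
In code, the error-rate definition reduces to a ratio over turn outcomes; a sketch, assuming a hypothetical `status` field with a "failed" value:

```typescript
// Error rate: percentage of turns whose status indicates failure.
// The status values are assumptions for illustration.
type Turn = { status: "completed" | "failed"; durationMs: number };

function errorRatePct(turns: Turn[]): number {
  if (turns.length === 0) return 0;
  const failures = turns.filter((t) => t.status === "failed").length;
  return (failures / turns.length) * 100;
}

// Turn time is recorded in milliseconds and displayed above in seconds.
const toSeconds = (ms: number) => ms / 1000;
```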

When we update: Data is refreshed every week. This cadence allows newly released models and agents to accumulate enough usage for statistically meaningful comparisons while keeping the data reasonably current.

Known limitations: Model-level breakdowns are only available for Zed's own agent. External agents like Claude Code and Codex don't reliably expose which model they're using in their telemetry. We also cannot detect user subscription tiers (e.g., whether someone is using Gemini's free or paid tier), which may affect performance characteristics.