
This Week at Zed Industries: #1

January 23rd, 2023


The ideas at the foundation of Zed began with a project called Xray, an experimental editor that my co-founder Antonio and I worked on in the first half of 2018. To ensure managers had visibility into our work, I kept a weekly journal in the xray repository chronicling our progress. It was soon discovered by members of the Atom community, and the journal morphed into a low-key blog, even though I'm not sure any managers ever read it.

Four years later, I'd like to renew that practice and start posting more regular updates for Zed. These won't be as carefully crafted as our first blog post, but it seems like some Zed users might appreciate hearing more details about what's going on. We'll continue to publish longer-form pieces with diagrams, etc., in addition to these more regular updates.

It would be impossible to do justice to everything that's happened in the four years since the last Xray update, but for anyone who was following us back then, I'll try to summarize...

After GitHub management made it clear there was no path forward for Xray at the company, I took some time off after the birth of my second daughter. When I reemerged in 2019, Antonio and I decided we would continue the Xray work in our free time. We started over with an empty repo, and by fall of 2019 the foundations of Zed and GPUI were beginning to take shape.

In 2020, I pushed the project forward with every hour I could spare, but at the end of that year I received a job offer from Warp. I needed a change, so I decided to join them, and I brought the code I'd been working on with me. But three months in, I realized that I couldn't let go of my dream of building the ultimate editor, so I left Warp to start Zed.

Max and Antonio soon joined me, and we spent the next year building the minimal features we needed to code in Zed full-time. In March of 2022, we made the switch, and we've been steadily improving Zed ever since, gradually expanding our community of private alpha testers.

Most of us have heard the advice that "if you feel like you're ready when you launch, you waited too long." I agree with that, but we also want to make sure we're positioned to make a positive impression on anyone who tries Zed before we launch a public beta. Based on feedback from our alpha community, we think we're getting close.

Last week, we had our first quality week, where we dropped everything we were doing to focus on fixing as many small bugs and friction points as possible. We plan to repeat this every six weeks to make sure we fix all the little things that can ruin an otherwise great product. Next week's release will include the results of this work, after the changes have had time to stabilize on our internal Preview build, and Zed should feel more polished.

One thing we're excited to share is a large improvement to the performance of multibuffers, especially in the context of project-wide search. Previously, there were cases where the main thread would block with very large numbers of excerpts, so last week, Antonio dug in to find out why.

To detect changes to excerpts, we maintain a fingerprint inside our rope data structure. I have a half-written blog post about ropes in Zed, but the short story is that we index chunks of text in a B-tree. The indexed data includes a fingerprint to help us cheaply determine when a buffer matches the contents of the file system. We use the bromberg_sl2 crate, which provides a hash function that is a monoid homomorphism. From the README:

This means there is a cheap operation * such that given strings s1 and s2, H(s1 ++ s2) = H(s1) * H(s2).
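To make that property concrete, here's a toy sketch in the same spirit. This is not the crate's actual construction, which hashes into SL2 over a cryptographically sized field with carefully chosen generators; it just maps each byte to a small 2x2 matrix mod a prime and takes the ordered product, so concatenation becomes matrix multiplication:

```rust
// Toy monoid-homomorphic hash: each byte maps to a 2x2 matrix mod P, and a
// string's digest is the ordered product of its bytes' matrices. Matrix
// multiplication is associative, so H(s1 ++ s2) == H(s1) * H(s2).
// Illustrative only; bromberg_sl2 uses a much larger field and generators
// with provable collision properties.

const P: u64 = 1_000_000_007;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Digest([u64; 4]); // row-major 2x2 matrix

const IDENTITY: Digest = Digest([1, 0, 0, 1]);

impl Digest {
    fn combine(self, other: Digest) -> Digest {
        let (a, b) = (self.0, other.0);
        Digest([
            (a[0] * b[0] + a[1] * b[2]) % P,
            (a[0] * b[1] + a[1] * b[3]) % P,
            (a[2] * b[0] + a[3] * b[2]) % P,
            (a[2] * b[1] + a[3] * b[3]) % P,
        ])
    }
}

fn hash_byte(byte: u8) -> Digest {
    // [1, x; x, x^2 + 1] has determinant 1, so it lives in SL2 mod P.
    let x = byte as u64 + 2;
    Digest([1, x, x, (x * x + 1) % P])
}

fn hash(text: &[u8]) -> Digest {
    text.iter().fold(IDENTITY, |acc, &b| acc.combine(hash_byte(b)))
}

fn main() {
    let (s1, s2) = ("hello, ", "world");
    let whole = hash("hello, world".as_bytes());
    let combined = hash(s1.as_bytes()).combine(hash(s2.as_bytes()));
    assert_eq!(whole, combined); // H(s1 ++ s2) == H(s1) * H(s2)
    println!("homomorphism holds: {:?}", whole);
}
```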

This allows us to aggregate a digest of a rope's content upward through the B-tree. To track the clean/dirty status of buffers, we were saving these fingerprints as Strings. This was fine for the number of buffers typically occupying a workspace, but when synchronizing multibuffers with a lot of excerpts, the associated allocations were causing performance issues. We switched to using bromberg_sl2's HashMatrix type directly, which required some modifications to the crate, and reaped immediate gains.
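The shape of that change, roughly sketched below. The Fingerprint and summary types here are illustrative stand-ins rather than Zed's actual definitions: the point is that a digest rendered to a hex String costs a heap allocation every time two summaries are combined, while a fixed-size Copy type combines with plain arithmetic and no allocation.

```rust
// Illustrative sketch, not Zed's actual types: a tree-node summary that
// carries a text fingerprint. Summaries are combined constantly as the
// tree is edited, so the fingerprint's representation matters.

/// Fixed-size, Copy fingerprint: a stand-in for bromberg_sl2's HashMatrix.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Fingerprint([u64; 4]);

impl Fingerprint {
    fn combine(self, other: Fingerprint) -> Fingerprint {
        // Placeholder for the real homomorphic combination; the point here
        // is only that combining is pure arithmetic with no allocation.
        let mut out = [0u64; 4];
        for i in 0..4 {
            out[i] = self.0[i].wrapping_mul(31).wrapping_add(other.0[i]);
        }
        Fingerprint(out)
    }
}

/// Roughly what we had before: the digest rendered to a String, so every
/// summary merge formatted and heap-allocated a fresh hex string.
#[allow(dead_code)]
#[derive(Clone)]
struct OldSummary {
    len: usize,
    fingerprint: String, // allocation per combine
}

/// After the change: the digest stays in its fixed-size form and the whole
/// summary is Copy.
#[derive(Clone, Copy, Debug)]
struct NewSummary {
    len: usize,
    fingerprint: Fingerprint,
}

impl NewSummary {
    fn add(self, other: NewSummary) -> NewSummary {
        NewSummary {
            len: self.len + other.len,
            fingerprint: self.fingerprint.combine(other.fingerprint),
        }
    }
}

fn main() {
    let left = NewSummary { len: 5, fingerprint: Fingerprint([1, 2, 3, 4]) };
    let right = NewSummary { len: 7, fingerprint: Fingerprint([5, 6, 7, 8]) };
    let parent = left.add(right);
    println!("parent summary: {:?}", parent);
}
```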

We also discovered that pushing large numbers of excerpts onto multibuffers would block the main thread, so we now construct excerpts in a background thread and stream them into the multibuffer asynchronously. As a result, Zed remains responsive when running large searches as we incrementally load results.

Streaming results
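The pattern here is a familiar one: do the expensive construction off the main thread and feed results through a channel, with the UI side draining whatever has arrived each time it wakes up. Below is a rough sketch using std threads and channels; Excerpt and MultiBuffer are simplified stand-ins, not Zed's actual types or scheduling.

```rust
// Rough sketch of streaming excerpts from a background thread into a
// multibuffer in batches, so the "main thread" never does one giant
// blocking push. Simplified stand-ins for illustration only.
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

#[derive(Debug)]
struct Excerpt {
    buffer_id: usize,
    line_range: std::ops::Range<usize>,
}

#[derive(Default)]
struct MultiBuffer {
    excerpts: Vec<Excerpt>,
}

impl MultiBuffer {
    /// Appending already-built excerpts is cheap; the expensive work
    /// (searching and slicing buffers) happened off the main thread.
    fn push_excerpts(&mut self, batch: Vec<Excerpt>) {
        self.excerpts.extend(batch);
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Background thread: build excerpts for search results and send them
    // over in small batches.
    thread::spawn(move || {
        for buffer_id in 0..100 {
            let batch: Vec<Excerpt> = (0..4)
                .map(|i| Excerpt { buffer_id, line_range: i * 10..i * 10 + 3 })
                .collect();
            if tx.send(batch).is_err() {
                break; // receiver went away; stop producing
            }
        }
    });

    // "Main thread": each time around the loop, drain whatever has arrived
    // without blocking, then go back to other work (simulated with a sleep).
    let mut multibuffer = MultiBuffer::default();
    let mut done = false;
    while !done {
        loop {
            match rx.try_recv() {
                Ok(batch) => multibuffer.push_excerpts(batch),
                Err(mpsc::TryRecvError::Empty) => break,
                Err(mpsc::TryRecvError::Disconnected) => {
                    done = true;
                    break;
                }
            }
        }
        // ...handle input, paint a frame, etc...
        thread::sleep(Duration::from_millis(1));
    }
    println!("loaded {} excerpts incrementally", multibuffer.excerpts.len());
    if let Some(last) = multibuffer.excerpts.last() {
        println!("last excerpt: buffer {} lines {:?}", last.buffer_id, last.line_range);
    }
}
```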

This is just one of the many improvements that landed last week. Check out the release notes for the next two releases to see everything we fixed. We hope they improve your quality of life.

This week, we'll be diving back into a mix of tasks that bring us closer to shipping a public beta. Of particular interest is performance telemetry. We want to know how Zed is performing so we can fix any issues and maintain our high performance standards as we add features.

Thanks for reading. I'm excited to start sharing more details of what's going on with Zed for those who are interested.