Zed Weekly: #21

September 29th, 2023


Most of the work on the initial prettier integration is done by now: I want to write more tests and play with it more. Unfortunately, Sonoma and our collaborative server do not work together, so I cannot properly verify that the remote part of our prettier integration works. Ergo, another detour to fix issues with platform-specific backtraces, yay.

As a side effect of this week's prettier integration, LSP servers now display their logs better: every server used in Zed now shows its logs, including stderr. Apart from all that, I'm slowly getting back to small tasks: the file finder toggle (cmd-p) became slightly more ergonomic this week, and the inlay hint toggle now shows up only where it is useful.


Hey everyone! For the next few weeks I'll be working closely with Nate as we rebuild the Zed UI on top of the upcoming version of gpui.

A big focus of mine so far has been setting up a storybook for rendering our new UI components in isolation. The goal here is to provide a fast iteration loop without needing to boot up the entirety of Zed.

Here's an example of what a basic story looks like:

use strum::IntoEnumIterator;

use ui::prelude::*;
use ui::{Icon, IconElement};

use crate::story::Story;

#[derive(Element, Default)]
pub struct IconStory {}

impl IconStory {
    fn render<V: 'static>(&mut self, _: &mut V, cx: &mut ViewContext<V>) -> impl IntoElement<V> {
        let icons = Icon::iter();

        Story::container(cx)
            .child(Story::title_for::<_, IconElement>(cx))
            .child(Story::label(cx, "All Icons"))
            .child(div().flex().gap_3().children(icons.map(IconElement::new)))
    }
}

We can run this story through the storybook CLI:

cargo run -p storybook -- elements/icon

Which will then show us this:

The Icon story

Infrastructural work aside, I've been diving deep into the implementation of these new UI components. We have made a lot of progress this week and will have more to share in the near future!


This week in Vim, I shipped the command palette out to preview! Initial user reactions have been very exciting, and it's nice to close out the most requested feature. I also took the time to add some more Vim-specific documentation, and to add a couple of Vim-feeling key bindings for Zed-specific features. With that done, along with over 60 other small bug fixes, features, and quality-of-life improvements over the last three months, it's sadly time to stop spending 100% of my time on Vim mode. I still hope to keep fixing bugs as they are reported, and to work through the (still extensive) backlog, but there will be fewer major improvements for a while.

This will allow me to jump in to help Max and Mikayla on Zed's collaboration features. The promise of seamless collaboration is the most exciting thing about Zed's approach to building a development environment, and I'm looking forward to diving in and making it even better.


I've been a bit all over the place in the last week, but one of the more fulfilling tasks was optimizing some charts in our dashboards that were slow and/or memory-intensive. One chart in particular took multiple minutes to run on a given ClickHouse Cloud machine, and it would sometimes exceed the default memory limit and error out. I was able to reduce the execution time to under 10 seconds and drastically reduce the memory usage.


This was a big week for AI at Zed, as we shipped our initial Semantic Index to stable, along with Semantic Mode in Project Search. Feedback has been pretty positive, and with that out of the way, we can now move on to more exciting features leveraging the technology. One of those is integrating the index with our inline assistant. This week, I got initial tests hooked up for this, allowing us to generate code that understands and has access to our code base. I've been super impressed with these initial tests, and I think it's opened up a lot of doors for future functionality at Zed. Exciting stuff! Hopefully we'll have a blog post out soon walking through some of the technical challenges in standing something like this up, along with a few teasers for the future.


One interesting thing I did this week was jump in with Conrad on our non-monospace cursor movement. It's nice to imagine that text can be put in a perfect monospace grid and everything makes sense but unfortunately the languages of the world make that significantly more complicated. With multiple writing systems, called scripts, mixed within a single buffer, the text system has to use different fonts for different sections of text. These different fonts very likely have different widths at the same font size. Even if the font used contains all the graphemes, some of them are naturally going to be larger in order to show more detail or have a different shape; common suspects are emoji and kanji.

So we have a buffer which may or may not contain misaligned graphemes. Things get worse still even if all the graphemes are perfectly grid-aligned. Graphemes are the Unicode concept for representing a single visual "character". It's often convenient to think of codepoints as characters, but that's not very accurate to the way we humans tend to reason about characters. Similarly to how UTF-8 encodes a single codepoint as one to four bytes, Unicode in general encodes a grapheme as one or more codepoints.

My favorite example is the many variations of the family emoji ๐Ÿ‘ฉโ€๐Ÿ‘ฉโ€๐Ÿ‘งโ€๐Ÿ‘ฆ. Depending on how your editor of choice handles backspace, you can backspace one family member at a time, until the entire emoji is gone. This is because it is made up of a sequence of codepoints for each family member, separated by a special codepoint called a "Zero Width Joiner".
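That decomposition is easy to see in Rust, where `chars()` iterates codepoints rather than graphemes (a minimal, self-contained illustration):

```rust
fn main() {
    // The family emoji 👩‍👩‍👧‍👦, written out codepoint by codepoint:
    // four people joined by three Zero Width Joiners (U+200D).
    let family = "\u{1F469}\u{200D}\u{1F469}\u{200D}\u{1F467}\u{200D}\u{1F466}";

    // One grapheme to a human, but seven Unicode codepoints.
    let codepoints: Vec<char> = family.chars().collect();
    assert_eq!(codepoints.len(), 7);
    assert_eq!(codepoints[1], '\u{200D}'); // a Zero Width Joiner

    // In UTF-8 it's longer still: each person is 4 bytes, each ZWJ 3.
    assert_eq!(family.len(), 4 * 4 + 3 * 3); // 25 bytes

    println!("{} codepoints, {} bytes", codepoints.len(), family.len());
}
```

An editor that backspaces one codepoint at a time will peel off one family member (and its joiner) per press, which is exactly the behavior described above.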

Finally we have the true wild case: proportional fonts. In a buffer using non-monospace fonts, all bets are off and all alignment is out the window. Most code editors don't handle this particularly well, Zed included. This makes sense, as it's unusual to see a user intentionally choose a non-monospace font for viewing and editing code. However, there are a number of folks who genuinely prefer proportional fonts for code, and I am personally one of them. So it's uncommon, but not unheard of.

When performing vertical cursor movement, most editors attempt to maintain the same horizontal location. With naive cursor positioning based only on codepoint offset, or worse, byte offset as Zed currently uses, this horizontal location can jump unpredictably when moving the cursor up and down in the situations described.

The solution is to attempt to maintain a proper horizontal position, rather than any sort of technical offset. By treating that intended horizontal location the same way we would a mouse click, we can maintain a more natural-feeling horizontal position as the cursor moves vertically through potentially misaligned content. The downside is that it makes cursor movement more computationally expensive and complicated, which is why most editors don't do this.
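As a sketch of the idea (the `column_for_x` helper and per-grapheme width tables are hypothetical stand-ins for a real text system's layout queries, not Zed's actual implementation):

```rust
// Resolve a horizontal pixel position to a column on a line, the same way a
// mouse click would: pick the boundary nearest the goal x.
fn column_for_x(grapheme_widths: &[f32], goal_x: f32) -> usize {
    let mut x = 0.0;
    for (column, width) in grapheme_widths.iter().enumerate() {
        // If the goal falls in the left half of this grapheme, snap here.
        if x + width / 2.0 > goal_x {
            return column;
        }
        x += width;
    }
    grapheme_widths.len() // goal is past the end of the line
}

fn main() {
    // Line 0 uses wide glyphs (think kanji), line 1 narrow Latin ones.
    let wide = [16.0_f32; 4];
    let narrow = [8.0_f32; 8];

    // Cursor sits at column 2 of the wide line: goal x = 32px.
    let goal_x: f32 = wide[..2].iter().sum();

    // Moving down lands at column 4 of the narrow line, not column 2,
    // keeping the cursor visually in place rather than jumping left.
    assert_eq!(column_for_x(&narrow, goal_x), 4);
}
```

Keeping the goal as a pixel position (instead of a codepoint or byte offset) is what makes the cursor feel stable across mixed-width lines; the cost is a layout query on every vertical move.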

Conrad and I were able to get the ball rolling on this change in a big way this week. He's continuing to push it forward but we're already nearing a rather satisfying end result. It remains to be seen how much of a performance cost this has and how we can make it better, so it may be a while until you see it in a release, if at all, but I'm excited about the improvements we're seeing! :)


This week, I've been working on making following work more intuitively when collaborating. Previously, following was scoped to a single project, and it wasn't possible to follow someone outside the context of a shared project. But we've now introduced a feature called channel notes, which are collaboratively-editable buffers that don't belong to any project, so we're making it possible to follow someone into those views even without a shared project.

We're also changing the way that collaborators' colors are assigned, so that they are consistent throughout a call, as opposed to being different in each shared project.


Just got back from Strange Loop and the Local-First unconference afterward, where I got to meet Conrad :D. Otherwise, I've been working on adding change notifications to our channels feature. Detecting channel note changes has been quite easy: since we use CRDTs for all operations, we simply save the last Lamport timestamp value that we send out for each collaborator. But doing this kind of operation in bulk, to solve the N+1 queries problem, has turned into an especially hairy multi-step ORM join. I'll see if I can post a screenshot next week, if we choose to keep that code as is.
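The per-collaborator check itself is simple; a minimal sketch of the idea (the types and field names here are hypothetical, not Zed's actual collab schema):

```rust
use std::collections::HashMap;

type UserId = u64;

// Hypothetical per-channel state for change detection.
struct ChannelNotes {
    // Highest Lamport timestamp among all CRDT operations in the notes buffer.
    max_operation: u64,
    // Last operation timestamp we sent to each collaborator.
    observed: HashMap<UserId, u64>,
}

impl ChannelNotes {
    // The channel has unseen changes for a user if operations exist beyond
    // the last timestamp that user was sent.
    fn has_changes_for(&self, user: UserId) -> bool {
        self.observed.get(&user).copied().unwrap_or(0) < self.max_operation
    }
}

fn main() {
    let notes = ChannelNotes {
        max_operation: 42,
        observed: HashMap::from([(1, 42), (2, 17)]),
    };
    assert!(!notes.has_changes_for(1)); // fully caught up
    assert!(notes.has_changes_for(2));  // has unseen edits
    assert!(notes.has_changes_for(3));  // never observed anything
}
```

The hairy part mentioned above is doing this comparison for every channel a user belongs to in one query, rather than one query per channel.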


Late arrival here. Apologies for missing these a few weeks in a row. I've been heads down on a substantial revamp of GPUI. While Nate has been going hard using my initial prototype to build out UI in the new layout system, I've been working at my limits revisiting assumptions at the core of the system and applying some of what we've learned over the past few years to simplify the API. It's important to me to get these things right before sharing GPUI with the world. Really looking forward to it.