Zed Weekly: #23
I was finally able to land autocomplete documentation into main this week. While I was in the area I took a look-see at a long-standing issue we've had with poor typing performance with completions.
There are two main phases to showing autocomplete: the request and the filtering. When the user starts typing we don't yet know about any completions, so we have to ask the language server to give us a list; that is the request. Once we have a list we can perform our own fuzzy sorting and trimming to remove excess items and bubble the closest matches to the top of the list; that is the filtering.
One request is not enough; rather, we need to continually re-request the most up-to-date list, as what the server gives us may change as the user continues to type. However, while we wait for the server to respond, it can be useful to re-filter the list we already have, re-sorting based on the further typing the user has done. This should ideally provide a smoother experience and hide some of the latency.
Unfortunately when doing this re-filter we were blocking the UI thread until the filter completed, causing hitching and increased input latency. Because of this we were also waiting until it completed to request the new copy of the completion list, further exacerbating the natural latency of communicating with a language server, especially over the network. This should not have been the case as the filtering API very explicitly runs the work in our background worker thread group.
The tricky bit is that just because a subsystem offloads heavy work to the background workers and coordinates via async correctly, a callsite higher up may still misuse it, and that's exactly what happened here. The re-filter itself behaved correctly: it moved the work to the background and awaited the result. But higher up, we took the resulting future and blocked the UI thread until it resolved. In short, the entire UI thread sat doing nothing while waiting for the background threads to finish the filtering.
With a bit of work I was able to reconfigure the completion menu's state to allow multi-worker synchronization, and set up this re-filter to not block and to race the re-request. This allows the re-filter to improve the experience if the re-request takes a while, but without overtly interfering. No longer will re-requesting a new list wait for the re-filter to complete, hopefully lowering that latency. Furthermore, the UI will not hitch or stutter while typing with many completions.
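To illustrate the pattern, here is a minimal sketch of offloading a filter to a background thread and polling for its result without blocking, using only the standard library. The `fuzzy_filter` function and the item names are hypothetical stand-ins, not Zed's actual completion code, and real fuzzy matching is far more involved than this substring check:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for real fuzzy matching: keep items containing the query.
fn fuzzy_filter(items: &[String], query: &str) -> Vec<String> {
    items.iter().filter(|i| i.contains(query)).cloned().collect()
}

fn main() {
    let items: Vec<String> = ["foo_bar", "foo_baz", "quux"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let (tx, rx) = mpsc::channel();

    // Offload the re-filter to a background thread instead of blocking the UI.
    let bg_items = items.clone();
    thread::spawn(move || {
        let _ = tx.send(fuzzy_filter(&bg_items, "foo"));
    });

    // The "UI loop" polls with try_recv, so a slow filter never stalls a frame;
    // in the meantime we keep showing the stale list we already have.
    let mut latest = items.clone();
    loop {
        match rx.try_recv() {
            Ok(filtered) => {
                latest = filtered;
                break;
            }
            Err(mpsc::TryRecvError::Empty) => thread::sleep(Duration::from_millis(1)),
            Err(mpsc::TryRecvError::Disconnected) => break,
        }
    }
    println!("{} matches", latest.len());
}
```

The key point is that the consumer never calls a blocking `recv` on the result; the UI keeps rendering with the old list until the fresh one arrives.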
A big focus of mine coming into this week was finishing the migration I started at the end of last week: moving all of the UI components over to the latest iteration of GPUI. Previously we were building against an older snapshot, and there have been some bigger foundational changes in GPUI since then.
I'm happy to report that our new UI work is now running entirely on the latest version of GPUI.
As I've started using this new version of GPUI in anger I've been working closely with Nathan and Antonio to continue to refine it as we move closer to real-world usage.
This week also saw some changes to how we structure our UI stories in order to improve the experience of working on UI components.
Previously, the stories for a component were housed in the `storybook` crate, while the component definition itself resided in the `ui` crate. This separation meant a lot of jumping back and forth between these two crates.

We now have a separate `stories` module within each component's module, allowing us to define stories right alongside their corresponding components.
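As a rough sketch of that layout, a component and its stories can now live in the same file. The `Button` type and `button_story` function here are illustrative only, not Zed's actual API:

```rust
// Hypothetical component definition; not Zed's actual Button.
pub struct Button {
    pub label: String,
}

impl Button {
    pub fn new(label: impl Into<String>) -> Self {
        Self {
            label: label.into(),
        }
    }
}

// Stories live in a submodule right next to the component they exercise,
// so editing a component and its story no longer means switching crates.
pub mod stories {
    use super::Button;

    pub fn button_story() -> Vec<Button> {
        vec![Button::new("Default"), Button::new("Disabled")]
    }
}

fn main() {
    println!("{} story variants", stories::button_story().len());
}
```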
Between a week out on vacation and a cold that knocked me out for a few days, I don't have much to talk about this week other than the gpui2ui work marches on.
We are getting closer to visual parity with many UI elements and hitting improvements over the old UI in some places, especially the Project Panel and Collab Panel. Building elements from shared components makes unifying the UI and refactoring things much faster.
Hopefully, in the next few weeks, I can start sharing some visuals and talking about improvements we are making to enable better contrast, more customization in themes, scalable UI, and more. (Not all of these things will land at the same time as the new UI, but the groundwork to enable them will allow them to be done much more easily soon.)
Otherwise, I've just been writing a ton of Rust in my personal projects to get more familiar with the language and ecosystem. I'm already starting to feel the itch to get back to Rust when I'm in TypeScript land, and the other folks here are cackling with glee over it 😤
As promised in my last Zed Weekly entry, I have spent this week working mostly on Vue.js integration. It is going to land in next Wednesday's (18th of October) Preview release. Other than that, I've also landed a small fixup for completions of Rust's async functions.
Working on the collaboration side of Zed: how do we make it easy for people to collaborate with us on Zed itself, and improve the quality of the collaboration tools?
I spent a significant number of hours wrestling with Apple's Entitlements, and as a result, I think we can now ship with Universal Links (so in the future there'll be a link that takes you straight to Zed without jumping via the website).
Next up is public access to channels: allowing people to participate in, but not control, the call. Most of the work here is defining how it should behave, but the implementation is not as straightforward as expected, because Zed currently lets permissions cascade down the tree of channels.
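To make the cascading behavior concrete, here is a minimal sketch of resolving a user's role by walking up a channel tree. The `Role` enum, `Channel` struct, and `resolve_role` function are hypothetical illustrations of the idea, not Zed's actual permission model:

```rust
use std::collections::HashMap;

// Hypothetical roles; not Zed's actual permission levels.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Role {
    Admin,
    Guest,
}

struct Channel {
    parent: Option<u64>,
    roles: HashMap<String, Role>,
}

// A user's role in a channel is the first role found walking up toward the
// root, which is how a grant on a parent cascades down to every descendant.
// This is why carving out "participate but not control" access is tricky.
fn resolve_role(channels: &HashMap<u64, Channel>, mut id: u64, user: &str) -> Option<Role> {
    loop {
        let channel = channels.get(&id)?;
        if let Some(role) = channel.roles.get(user) {
            return Some(*role);
        }
        id = channel.parent?;
    }
}

fn main() {
    let mut channels = HashMap::new();
    channels.insert(
        1,
        Channel {
            parent: None,
            roles: HashMap::from([("alice".to_string(), Role::Admin)]),
        },
    );
    channels.insert(
        2,
        Channel {
            parent: Some(1),
            roles: HashMap::new(),
        },
    );
    // Alice's admin role on the root cascades to the child channel.
    println!("{:?}", resolve_role(&channels, 2, "alice"));
}
```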
This week, I've fixed a few bugs in following, which I introduced last week in the process of generalizing how following worked.
I've also been working on a notifications panel for Zed, so that we can add @-mentions to Zed's chat, and the mentioned user will have a place to view the notifications.
This week brought closure on multiple fronts:
The macOS Sonoma upgrade is now possible for the Zed project, thanks to fixed issues with backtraces (worked around by trying the beta Xcode) and curl (I ended up fixing that myself: https://github.com/alexcrichton/curl-rust/issues/524)
I've found and fixed a diagnostics indicator bug; it now updates properly based on LSP server notifications. It had been irritating me for a while, but it took some time to understand the pattern of when it appeared. Ironically, fixing this exacerbates a rust-analyzer issue with false-positive diagnostics, so it's a good time to try to carve out an isolated example and maybe even a fix to the language server itself.
Finally, the first version of prettier integration ships with the next preview. It aims to cover all existing languages (hence working with plugins for Tailwind and Svelte), work in collaborative scenarios, detect a project's prettier from its package.json or provide a default one, and a few other small things. Let's see how good the coverage is.
I travel to Italy next week to work side-by-side with Antonio. The following week, the whole company will be convening in Bologna for our Fall Summit. My goal is to have the new version of GPUI ready to have everyone work together on the transition.
It's been a bit of a blur, but highlights from this week include refining how views nest within other views, and the ability to associate elements with state that persists between frames. The core premise of GPUI is that the document object model is stateless. We can add caching to save on energy, but we've designed the framework to be capable of re-rendering the entire window with minimal latency.
This statelessness creates challenges when elements need to be stateful, as is the case when implementing a click handler. A click is defined as a mouse down and a mouse up on the same element, but if a view update occurs while the mouse is down, we need to remember which element was clicked. To support this, we require stateful elements to be associated with an identifier. At first this seemed annoying, but then I realized that React requires children to have ids for efficient diffing. We don't need to diff, because we're stateless, but the other side of that coin is that we need identifiers for any element that needs state. Seems like a worthwhile trade-off.
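A minimal sketch of the idea: click state keyed by an element's identifier survives a re-render, because the id persists even when the element itself is rebuilt. The `ClickState` type and its methods are hypothetical illustrations, not GPUI's actual API:

```rust
// Hypothetical id type; GPUI's real identifiers are richer than a str.
type ElementId = &'static str;

#[derive(Default)]
struct ClickState {
    mouse_down_on: Option<ElementId>,
}

impl ClickState {
    // Record which element saw the mouse-down, by id rather than by reference,
    // so the record stays valid even if a re-render replaces the element.
    fn mouse_down(&mut self, id: ElementId) {
        self.mouse_down_on = Some(id);
    }

    // A click fires only if the mouse-up lands on the same id that saw the
    // mouse-down, matching the definition of a click in the text above.
    fn mouse_up(&mut self, id: ElementId) -> bool {
        let clicked = self.mouse_down_on == Some(id);
        self.mouse_down_on = None;
        clicked
    }
}

fn main() {
    let mut state = ClickState::default();
    state.mouse_down("save_button");
    // Imagine a view update happening here: the button element is rebuilt,
    // but "save_button" still identifies it, so the click is detected.
    println!("clicked: {}", state.mouse_up("save_button"));
}
```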
The major remaining components for GPUI 2 are focus and keyboard handling, drag and drop, scrolling, IME, and a list component that uses custom logic to avoid laying out off-screen elements. It's great to be able to pull the best ideas from the web platform while exercising the low-level control we need to make it all fast.