This is my fifth conversation with Zed's three co-founders Nathan, Max, and Antonio. You can read the previous one here.
This time I had to address the elephant in the room: AI. I wanted to know how each of the founders found their way to using AI, how they use it today, and how they would like to use it. We also talked about the nitty-gritty of the current implementation of AI features in Zed and what this year will bring for Zed with regard to AI. I also had to ask: isn't building an editor in times of AI ignoring the signs of the times?
What follows is an editorialized transcript of an hour-long conversation. I tried to preserve intent and meaning as much as possible, while getting rid of the uhms, the likes, the you-knows, and the pauses and course-corrections that make up an in-depth conversation.
(You can watch the full conversation on our YouTube channel.)
Thorsten: When did you first use AI for programming? Do you remember?
Nathan: I used it pretty early on. I think my first really eye-opening experience was using ChatGPT when it first came out and just having it do really basic stuff. I think I defined a geometry library in it, sort of just for fun. And I was blown away that I could even do those really basic things, like defining a point and a circle and an area function, all these things it was doing at the time. Things have gotten a lot more sophisticated since that moment, but that was kind of this mind-blowing moment for me.
Thorsten: Was it mind-blowing or were you skeptical?
Nathan: It was mind-blowing. I don't understand the general hate and skepticism toward AI that so many programmers have.
I remember in college, I studied natural language processing and I worked for my professor, Jerry Hobbs, right after school; he headed up classic AI work at SRI. I remember how fascinated I was with the questions: what is meaning? What is language? How does it work? And I studied these combinatory categorial grammar mechanisms, where we define grammar as this directional lambda-calculus formalism. I was really curious and fascinated by all of that, but I also came away frustrated, because at the time a language model was the dumbest thing ever. It was based on the frequency of tokens or something, and you couldn't get anything out of it.
So just the idea that I could sit there and ask it in English to do anything, and have it do anything at all, is mind-blowing to me. Right then and there. That's amazing. That's a freaking miracle that I never would have anticipated being good. So why everybody's not blown away by the fact that this exists in our world is beyond me. I just don't get it. It kind of pisses me off that people are so, so closed-minded about it. Like: yeah, you drove a Lamborghini into my driveway, but I don't like the color.
It's just this fixation on negativity, on what's wrong and what it can't do, instead of being amazed and blown away by what it can. And I guess that's just the personality difference between me and people like that: I always see the glass as half-full and I'm always looking at what's exciting. Now, I never bought a single NFT, right? Just to be clear. So I get that we in technology have these hype cycles, and it can get a little exhausting, and you roll your eyes at the latest hype cycle and at the people in your Twitter timeline talking in all capital letters about how this changes everything and is game-changing. But I think in this case it's actually pretty freaking amazing that we have this technology. Okay, I'll stop ranting.
Thorsten: It's funny that you mentioned natural language processing because I come from the other side of the fence. I studied philosophy and I studied philosophy of language. Then when ChatGPT came out, everybody was saying that it doesn't "understand." And I was sitting there thinking: what does "understanding" even mean? How do you understand things? What is meaning? So, I was on the other side of the fence, also thinking that things aren't that easy and that this is super fascinating.
Antonio: I used ChatGPT right after — I don't know, I think Nathan prompted us to use it. I'm not an AI skeptic or anything — I'm amazed and I also use AI for non-coding tasks — but I've never had an eye-opening experience, I don't know.
One thing I struggle with a lot when it comes to AI is what I do every day: I write code every day, for multiple hours a day, in Rust, in this pretty complex code base. And so my first use case was to try to use it in our code base. And every time I try to do that, there's always some friction.
But one thing that I really like, and where I think it really shines, is generating complex pieces of code. Basically, there are certain patterns in code, right? But you can't really express those in regular expressions or by using Cmd-D to set up multi-cursors. AI is really good at it. You can just say "okay, I want to apply this refactoring to these five functions" and explain it in a way that I couldn't explain with any tool like regex. There's a lot of interesting potential.
Thorsten: Sounds like there was a bit of a disappointment moment.
Antonio: Yeah. I don't know whether this thing hasn't seen enough Rust. Maybe that's the problem. But there's also probably a problem in how we integrate with it, right? Where we don't give it enough context. The problem of just feeding it the right... One thing that I've started to learn only recently is that crafting that context is essential.
And you really need to express it right. The machine really needs to understand what you're trying to do. Especially in a complex code base where you have, in your brain, like 50 references, but the machine can't know that. How could it? So, yeah, part of my disappointment is just the integration with the AI, not the tooling per se, but just...
Nathan: We're not there yet, yeah.
Max: Yeah, there's a difference between using Copilot in the Zed code base — which I still do sometimes, but I wouldn't call it a game changer for me — and using it with, say, a single-file JavaScript script where all the context is right there, and the job of the script is to minimize a random test failure by reducing the steps, and it needs to read a bunch of files and invoke some shell commands, etc. The difference is large, and in the latter case, the single JavaScript file, Copilot just knocks it out of the park.
So if we can get it to behave like that in our day-to-day work, when we're working on a codebase with hundreds of thousands of lines, there's a lot of potential there.
Thorsten: That's what I noticed in our code base when I use the chat assistant. I often thought, "oh, if you could only see the inlay hints, if you could see the types, then you wouldn't give me this answer." But yes, integration.
Nathan: And that's, again, our failing too. The most successful times I've ever had with it are times when I'm synthesizing together things that are already in the model's training data. I love that mode.
A lot of GPUI2's renderer I wrote just in the assistant panel, purely from going "yo, I need a renderer that integrates with the Metal APIs. It's written in Rust." It wasn't perfect, but it was way faster than me configuring all these graphics pipelines and stuff, which is not something I've done a ton of.
I love distilling something I need out of the latent space of one of these models, where I'm providing a few parameters but it's mostly in the weights. I'm guiding what's in the weights to give me this StackOverflow-on-acid type thing, where the knowledge is out there; I just need it in a certain shape and I want to guide it.
So I was playing with Claude this weekend in the bath, right? And I literally wrote an entire file index that used lock-free maps to store file paths and interpreted all the events coming out of the FSEvents API. It did everything asynchronously. I wrote randomized tests for it, had a fake file system implementation. And I was in the bath, right, on my phone. There wasn't a single moment where I was writing a curly brace. Now, I never ran the thing that it produced, but I reviewed it with my eyes, and while it may have had a few issues here or there, it was a very legit implementation of something that took Antonio, Max, and me days, days and days of continuous investment to work on. My knowledge of having solved it before helped me guide it, but, I don't know, there's almost some way in which it changes the scale of what you can do quickly.
And then sometimes it just falls flat on its face. For the simplest thing.
Thorsten: ChatGPT came out in November 2022, right? When we all should have bought NVIDIA stock. Since then, did you adjust to AI and adjust how you use it? For example, people who use Copilot say they adjust to it and leave comments where they want to guide it. Did any of you ever get into the whole prompt engineering thing? Or did you reduce how much you use it, after figuring out what it can and can't do?
Nathan: I don't really use Copilot, for what it's worth. I find it annoying. It's in my face. I was never into running tests automatically on save either. I always just want to... I don't know. I prefer to interact with the AI in a chat modality. So I'm really looking forward to the time we're about to invest in getting more into that context window.
I just find Copilot to be kind of dumb. Because they have to be able to invoke it on every keystroke, they have to use a dumber model. So I guess I just prefer using a smarter model and being more deliberate in how I'm using it. But I'm not married to that perspective. I think maybe some UX tweaks on Copilot could change my relationship with it, but I don't know. I guess I've been willing to use it, and even have my interaction with it be slower or less effective sometimes, in the name of investing in learning how to use it.
And yeah, at the time it saved me on certain really hard things, like writing a procedural macro to enumerate all the Tailwind classes for GPUI. It kind of taught me how to write proc macros, because I didn't know how.
Thorsten: Exactly a year ago, I was at a conference meeting programmer friends, and we were all talking about ChatGPT. Some of them were saying, "oh, it doesn't know anything. I just tried this and it doesn't know anything." But the prompts they used looked like the queries people typed into Google 20 years ago. Back when you still had keyword search, people would type in, "where can I get a hot dog?", but that's not how it worked back then. One friend of mine, though, said, "you know what I use it for? I use it like an assistant. I use it like an intern." Essentially, when he's debugging something and needs a little program to reproduce the bug, he says to the AI, "can you write me a little HTTP server that doesn't set a connection timeout", or something like that. Because he knows where the shortcomings are. And I think a lot of us have had this over the past year: we started to get a feel for where the shortcomings are and adjusted how we use it. So I was curious whether you had any of these moments.
Max: I have one in my day-to-day life. I use ChatGPT a lot instead of Google. And I've learned to say, "now, don't hedge this by saying 'it depends'. I'm aware. Just tell me, give me an answer", so that ChatGPT doesn't say, "there are many possible responses to this."
But I think I have a lot to learn about what to do in the programming world still. There's probably a lot of knowledge out there that I just haven't adopted into my workflow yet for prompting the assistant.
Nathan: I think I have the advantage of just being not as good of a raw programmer as Max or Antonio. A lot of times when I'm pairing, I take more of a navigator role in the interaction. And so I just reach for AI more because I'm just not as fast at cranking out code. And so I think it's less frustrating to me.
Thorsten: When did you decide "we have to add this to Zed"? Was it being swept up in the hype, with everybody asking for it, or was there a specific thing or time when you said, "no, I need this in the editor"?
Nathan: For me there's Copilot and then there's the Assistant. So Copilot, everybody asks for it. And I was like, "oh, I wanna see what it's like to work with this", but then I ended up not using it a lot. But for the other one, the assistant, it was just that I was using GPT-4 and they were rate-limiting me. So then I was going into the SDK or the playground and writing text in a fricking web browser. And I'm just like: this is driving me crazy. I wanna write text in Zed.
And, I mean, that's what the assistant is right now. It's pretty bare-bones: it's almost an OpenAI API request editor, one that isn't annoying to use from a text-editing perspective. That's where things are at right now, which isn't where they need to stay. We have a lot of work to do on it, but that was the thought process.
Thorsten: I kind of want to go into the weeds a little and ask you about the inline assist in Zed. Context for whoever's watching or listening or reading: you can select text in Zed, hit ctrl-enter, and send a message along with the selected text and some more context to the AI. You can ask it to "change the return type of this", "reorder this", "use a macro", something like that. When the response comes back from the AI, you can see it type the text, changing it word by word. It doesn't look like it's just shoving the text into the buffer. So I'm curious: what happens when the LLM request comes back and says, here's a snippet of code?
Antonio: Basically, we implemented a custom version of the Needleman-Wunsch algorithm. There are several algorithms for fuzzy finding, and they all stem from this dynamic programming algorithm, which is essentially about finding the lowest-cost path from point A, the origin, where both strings start, to the end, where both strings end. So we're doing this streaming diff. Typically, diff is a function where you need to have both texts in their entirety, but the problem is that the AI streams the response, chunk by chunk, and we don't want to wait for the entire response to come back before diffing. So we have this slightly modified version of Needleman-Wunsch in which we try to favor insertions and deletions, look ahead a little bit, and use a different cost function. That lets us produce these streaming edits. It was a pretty fun project.
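(For the curious, here is a minimal sketch in Rust of the non-streaming core of that idea. This is not Zed's implementation: the costs are illustrative, and the real version additionally diffs against a response that is still streaming in, using look-ahead.)

```rust
#[derive(Debug)]
enum Op {
    Keep(char),   // character present in both old and new text
    Delete(char), // character only in the old text
    Insert(char), // character only in the new text
}

/// Classic Needleman-Wunsch over characters. Substitution (cost 3) is
/// more expensive than a delete plus an insert (cost 2), so the
/// traceback only ever produces keeps, deletes, and inserts -- the kind
/// of edits that map directly onto buffer operations.
fn align(old: &[char], new: &[char]) -> Vec<Op> {
    const INS: u32 = 1;
    const DEL: u32 = 1;
    const SUB: u32 = 3;

    let (n, m) = (old.len(), new.len());
    // cost[i][j] = cheapest alignment of old[..i] with new[..j].
    let mut cost = vec![vec![0u32; m + 1]; n + 1];
    for i in 1..=n {
        cost[i][0] = i as u32 * DEL;
    }
    for j in 1..=m {
        cost[0][j] = j as u32 * INS;
    }
    for i in 1..=n {
        for j in 1..=m {
            let diag =
                cost[i - 1][j - 1] + if old[i - 1] == new[j - 1] { 0 } else { SUB };
            cost[i][j] = diag
                .min(cost[i - 1][j] + DEL)
                .min(cost[i][j - 1] + INS);
        }
    }

    // Walk back from the bottom-right corner to recover the edits.
    let (mut i, mut j) = (n, m);
    let mut ops = Vec::new();
    while i > 0 || j > 0 {
        if i > 0 && j > 0 && old[i - 1] == new[j - 1] && cost[i][j] == cost[i - 1][j - 1] {
            ops.push(Op::Keep(old[i - 1]));
            i -= 1;
            j -= 1;
        } else if i > 0 && cost[i][j] == cost[i - 1][j] + DEL {
            ops.push(Op::Delete(old[i - 1]));
            i -= 1;
        } else {
            ops.push(Op::Insert(new[j - 1]));
            j -= 1;
        }
    }
    ops.reverse();
    ops
}

fn main() {
    let old: Vec<char> = "fn add(a: i32)".chars().collect();
    let new: Vec<char> = "fn add(a: u32)".chars().collect();
    // Prints Keep ops for the shared prefix and suffix, plus an insert
    // and a delete at the point where the two strings differ.
    for op in align(&old, &new) {
        println!("{:?}", op);
    }
}
```

The trick is in the cost function: because a substitution costs more than a deletion plus an insertion, the traceback never substitutes, which is what biases the output toward the explicit insert/delete edits described above.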
Thorsten: So did you build this specifically for the inline assist? I assumed it was code that's also used in the collaboration features, no?
Antonio: No. What we tried at first, actually, was to have the AI use function calling to give us the edits, as opposed to asking for a response and having the AI just spit it out, top to bottom. The initial attempt was, "okay, just give us the precise edits that you want us to apply". But what we found out pretty early on was that it wasn't working very reliably. It was tricky to have it produce precise locations.
It's really good at understanding what you're trying to do as a whole, but it's very hard to have it say, "okay, at row three, column two, I want to delete five characters and insert these other six characters".
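(For illustration, each edit in that first function-calling attempt would have had roughly this shape. The names are hypothetical, not Zed's actual schema.)

```rust
/// Hypothetical shape of a single edit the model was asked to return
/// via function calling; getting it to produce accurate `row` and
/// `column` values was the unreliable part.
struct Edit {
    row: u32,       // 0-based line the edit starts on
    column: u32,    // 0-based column within that line
    delete: usize,  // number of characters to remove at that point
    insert: String, // text to put in their place
}
```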
So we went back to the drawing board and said: it's good at spitting out text, so let's just have it write what you want. That's where Nathan's idea to do it came in.
Nathan: And Antonio's algorithmic chops actually made it happen. Yeah.
Thorsten: Wow. And it's pretty reliable, right?
Antonio: Thanks. Yeah.
Nathan: Sometimes it overdraws. It... I don't know. It's not always reliable for me. I think that has to do with our prompting, maybe. There's a lot of exploration to do here. I'll ask it to write the documentation for a function and it'll rewrite the function. That drives me crazy.
Thorsten: The prompting, sure, but the actual text insertion — every time I see these words light up, I think: what's going on here? How do they do this? How long did it take to implement this? I'm curious.
Antonio: Half a day. Maybe a day. Yeah, something like that.
Thorsten: No way.
Nathan: But to be fair, we had already really explored this for the path matching and just needed a little bit of a push. That took a little more time, wrapping our brains around it. And I think more of it stuck for you, Antonio, to put it that way.
Antonio: Hahaha!
Nathan: 'Cause, yeah, traversing that dynamic programming matrix still kind of boggles my mind a little.
Thorsten: Half a day — you could have said a week just to make me feel better.
Antonio: A week, yeah... no, not a week.
Max: Hahaha.
Thorsten: So right now in Zed we have the inline assist, we have the chat assistant, which you can use to just write Markdown, and you can talk to multiple models. What's next? What's on the roadmap?
Nathan: A big piece is just getting more context into what the assistant sees. Transitioning it away from being an API client and starting to pull in more context. Kyle, who's been contracting with us, has a branch where we're pulling in the current file. Obviously we want more mechanisms for pulling context in, not only the current file but all the open files, and you can dial in the context, opt in or out, and so on.
But then also doing tool calling, where I can talk to the assistant and have it help me craft my context, if that makes sense. Also having it interact with the language server, and using tree-sitter to traverse the dependencies of the files we're pulling in, so we make sure we have all of that in the context window. Of course, context sizes have gone up a lot, which makes this all a lot easier, because we can be more greedy in terms of what we pull in.
So that's a big dimension, populating that context window more intelligently. But also giving the assistant tool calls that it can use to write a command in the terminal. I don't know if I want to give it the ability to hit enter, but at least have it write the command in, stage it, and shift my focus over there so that I can run it. I could get help with whatever random bash incantation I might want to run. Having the assistant escape that little box and reach out and interact with other parts of the editor.
That's all really low-hanging fruit that I think we need to pick. That's what's next for me. And then we're also experimenting with alternative completion providers, for the Copilot-style experience. We'll see where that goes. It's still kind of early days there.
Max: I'm excited about another dimension of the feature set. Right now, all the stuff we've just been talking about is very local: you select a block of code and the output is directed into that location.
But being able to — just like code actions in Zed — say "inline this function into all callers" and get a multi-buffer opened up that says "I changed here, I changed here, I changed here. Do you want to save this or undo it?" I can then go look at what it did.
I want to be able to say, "extract a struct out of this that isn't in this crate, that's in a subcrate, and depend on it in all these usages of this crate so they don't have to depend on all this other stuff", and then have it go, "here, I changed your Cargo.toml, I created a crate, I changed this, I did these sort of more complex transformations to various pieces of code in your code base. You wanna save this or undo it?"
I think that's gonna be a really powerful way of letting it do more stuff while keeping control. And I think the multi-buffer is a good way to present that to the user and ask, "do you want to apply all these transformations that I just made?"
Nathan: Speaking of multi-buffers, another really low-hanging-fruit thing is invoking the model in parallel on multiple selections. When I pull up a multi-buffer full of compile errors that all require basically the same stupid manipulation, it'd be great to just apply an LLM prompt to every single one of those errors in parallel.
Thorsten: Low-hanging fruit is everywhere — you could add AI to basically every text input, adding autocomplete or generation or whatnot. There was an example last week, when I talked with somebody who wanted to use an LLM in the project search input, to generate the regex for you. That's cool, but at the same time I thought: wouldn't the better step actually be to have a proper keyword search, instead of having the LLM translate to a regex? I'm wondering whether there isn't a possibility of being trapped in a local maximum by going for the low-hanging fruit.
Max: Meaning: how much of the current programming tool paradigm, like regex search, do we even want to keep? Or do we say that we don't even need that feature anymore?
Thorsten: Something like that, yeah. A year ago, everybody was adding AI to every input field; obviously things have changed, and people now say, "this is not a good use case for that". And you're now also saying you want it to have access to files, and so on. How do you think about that? What do you see as the next big milestone?
Nathan: Well, I guess I have different perspectives on that. There are a couple of different things I want to respond with. One is that we experimented with semantic search over the summer. The initial thing was that we generated all these embeddings with OpenAI's embedding API, which is designed for text and prose, I think less for code. That's at least my understanding; maybe people can comment on the YouTube video and tell me I'm wrong. So I don't know how good embedding models are for code in general. But what I did find with this initial experiment, which was literally that you would start typing your query and we would just show you the matching files and line numbers, was that I was using it a ton for navigation. It was just really useful to be able to mash keys.
It was better than the fuzzy finder and better than cmd-t, which is the symbol search that goes through the language server, because at least with rust-analyzer on our code base, that can be really slow. So I used it as this quick, convenient navigation tool.
But then (I was not super involved in this) we pivoted that prototype modal experience into a feature of our search. And then I just stopped using it, because the friction of using it was too high, and the quality of the results we were getting, at least back then, wasn't high enough. I want to get back to that, restoring that modal fuzzy-navigation experience of using semantics to quickly jump to different areas. It's not quite like a search result; it's more like this quick thing. So that's one thing.
But the other thing I want to say is... I was skeptical of AI in general until I was proven wrong, so I want to be careful and humble about the possibilities of what can be done. But in general, where I'm at right now is that I still really want to see what is going on. I don't want a lot of shit happening behind the scenes on my behalf, where it's writing a regex and then running it, because I don't have enough confidence that's going to work well.
So until I get that confidence, I like the idea of there being this very visible hybrid experience, where the AI is helping me use the algorithmic, traditional tools. Even OpenAI has the code interpreter, right? They're not trying to get the LLM to add numbers; they just shell out to Python. So I think giving the AI access to these more algorithmic, traditional tools is where I want to go.
Thorsten: Do you have any thoughts on context windows? When you have a large context window, you would think all of the problems are solved, right? Just shove the whole codebase in. But then you also have to upload a lot of code, and it takes longer until the response comes back. Any thoughts on this trade-off between context size and latency?
Nathan: I'm still wrapping my brain around what causes the additional latency when the context size grows larger. In my mental model of a transformer, I don't understand why it takes longer, but I can see practically that it does. So yeah, I guess, I'm revealing my ignorance here.
But to me it seems like giving it everything is a recipe for giving it too much and confusing it. Although my understanding is that this is also improving: models are getting less confused by noise and better at the needle-in-a-haystack problem. That's what I saw from Gemini (I'm still kind of waiting to get my API access): it's very good at plucking out the details that matter from a sea of garbage.
I don't know, that wasn't a very coherent thought, other than: it seems to me that we need to think about how to curate context for a while longer. The times when I've interacted with models and been most successful have been either when I'm, again, drawing from the weights, the latent space of the model, and very little is needed in the context window because the problem I'm solving is sort of out there in the ether, or when I've really set it up with the specific things that it needs to be successful.
But to be fair, I think we have a lot to learn in this space. Yeah.
Thorsten: I asked because you said you used the fuzzy search when you had it within reach, but once there was a little bit more friction, you stopped using it. And I've noticed, speaking of large context windows, that I already get impatient when I have to wait for ChatGPT sometimes. "Come on, skip the intro, give me the good stuff." With large context windows, I wonder whether I would rather skip asking when I know that the answer's gonna take 20 seconds to come back, or 10 seconds, or whatever it is.
Nathan: Yeah, I think the higher the latency, the more I'm going to expect out of what it responds with. I mean, I was just having a great time in the bath, while I waited for Claude to respond. I took a deep breath and felt the warm water on my body, you know, and then by the time it responds, I'm just reading it.
Thorsten: I think you should redo this with a control group that also codes in the bath but without AI. Maybe the results are the same. It sounds like a fantastic bath. Let me ask some controversial questions... When I said I was going to join Zed, people asked me, "oh, a text editor? Do we even have to write code two years from now, with AI?" What do you think about that? Do you think we will still type programming language syntax into Zed in five years, or do you think that how we program will fundamentally change?
Nathan: Yeah, it's a good question. I mean, I've tweeted that it's kind of ironic that as soon as AI can write me a code editor, I won't need a code editor. But it's not yet possible to sit down and say: build me a code editor written in Rust with GPU-accelerated graphics. I don't think AI is there yet.
Now maybe that's the only product complex enough. Maybe the only thing that AI can't build is a code editor, but I'm skeptical right now. Maybe Ray Kurzweil is right and we're all just going to be uploading our brains into the cloud, I just don't know. All I know is that things are changing fast, but what seems true right now is that, at the very least, I'm going to want supervisory access, like in that Devin demo.
To me, a potential outcome is that editing code ends up feeling, for a while, like that Devin demo, but with an amazing UX for having a human programmer involved in that loop, guiding the process so that we're not just spamming an LLM with brute-force attempts. Instead, there's this feedback loop of the LLM taking action and the human involved correcting or guiding it. So it becomes this human-LLM collaboration, but the human is still involved.
If that ends up not being true, yeah, I guess we don't need a code editor anymore. I don't know how far away that is, if it's ever gonna be here.
They've been telling me for a long, long time that I'm gonna be riding around in these self-driving taxis, and I've done it a couple of times. But I will say: the taxi refused to park where we actually were in San Francisco, so we had to walk through pouring rain to the place where it would pick us up. My mind is freaking blown that a car is automatically picking me up and driving me somewhere, and at the same time I'm a little annoyed that I'm walking through the rain to the place where it stopped. It sort of feels like the same thing happens with LLMs, right?
Who knows what's gonna happen, but for the moment, I like creating software. I don't need to type the code out myself, but I do feel like I'd like to be involved more than just sitting down at a Google search box and being like: go be a code editor.
Max: I'm bullish on code still being a thing for quite a while longer. It goes back to what Nathan said about AI expanding the set of things that you can build in a shorter amount of time: it makes it easier to explore a bigger space of ideas, because it's cheaper.
I think there will be code that it won't be anyone's job to write anymore, but that's boring code anyway.
But I think it's just gonna make it possible to have more code, because it's cheaper to maintain it, cheaper to create it, cheaper to rewrite it if we want a new version. There'll be all kinds of things that weren't possible before. Right now, banks aren't able to deliver good websites, and I think there may be a day when a bank could have a good website. Software that is, for whatever reason, infeasible to deliver right now will finally be feasible to deliver. And I think this is going to be code, and I'm still going to want to look at it sometimes.
Nathan: Yeah, it's an incredible commentary on the power of human incentives and the corruption of the banking system that a bank having a good website is the day before we achieve AGI.
Max: Ha ha ha ha.
Antonio: If you look at Twitter right now, it's like every post is saying AGI is coming out next month. I don't really know. The honest answer for me is that I don't know. It's possible. That's one thing that annoys me about AI: just how opaque some of these things are.
In the past, with technology in general, if there were hard problems or complicated things, I could sit down and at least try to understand them and maybe even build them myself. With AI, unless you just want to build something on top of ChatGPT or Claude, you have to spend millions of dollars. I don't love that part.
That's where my doubts come from. It's very possible that engineers and researchers at these companies are right there with AGI, right there with superhuman intelligence, but how much of it is hype? I don't know.
Thorsten: Here's a question that I'd love your thoughts on. I use ChatGPT a lot to do the "chores" of programming: some CSS stuff, or some JavaScript, or generating a Python script to talk to the Google API, which saves me a full day of headaches and trying to find out where to put the OAuth token and whatnot. But with lower-level programming, say async Rust, you can see how it starts to break down. One thing seems relatively easy for the AI, but with this other thing, something happens there. And what I'm wondering is: is that a question of scale? Did it just see more JavaScript? Did it see more HTML than systems Rust code, because it scraped the whole internet?
Max: I think solving problems that have been solved a lot of times, that only require a slight tweak — I think it's great that it works that way. Those are boring things, because they've been solved a lot of times, and the LLM is great at knocking them out. And some of the stuff that we do — I'm not going to say we're doing things that have never been done before every day — but a lot of the stuff we're doing day-to-day has not been solved that many times in the world. And it's fun. That's why I like doing it. So I'm not that upset that the LLM can't totally do it for me. But when I do stuff that is super standard, I love that the LLM can just complete it, just solve it.
Nathan: I want the LLM to be able to do as much as it possibly can for me. But yeah, I do think that it hasn't seen a lot of Rust. I mean, I've talked to people in the space who have stated just that. They were excited: "oh, you're open sourcing Zed? I'm excited to get more training data in Rust." And I'm like, "me too". Other than, you know, competitors just sitting down and saying, "build me a fast code editor", and then it's already learned how to do that and all this work comes to nothing. I don't know.
But also if that were true, ultimately I'm just excited about advancing the state of human progress. So if the thing I'm working on ends up being irrelevant, maybe I should go work on something else. I mean, that'd be disappointing, I would like to be successful... Anyway, I don't know how I got on that tangent.
But writing Python with it, which I don't want to write but I need to because I want to look at histograms of frame times and compare them? Thank you. I had no interest in writing that and it did a great job and I'm good.
Antonio: There's also another meta point, which I guess we didn't really discuss. Even in a world where the AI can generate a code editor, at some point you have to decide how you want a feature to work. And I guess the AI could help you with that, but there'll be a human directing it, understanding what the requirements are and what you're even trying to do, right?
Maybe that also gets wiped out by AGI at some point, but I don't know. Code, at the end of the day, is just an expression of ideas and of the knowledge that's within a company or within a group of individuals.
I'm excited about AI in the context of collaboration. I think that would be a really good angle for Zed as a collaboration platform.
We've talked about querying tree-sitter for certain functions, or the language server for references, and that's some context you can give the AI. But what about all the conversations that happened? What about — going back to our previous interview — if it's true that code is a distilled version of all the conversations that have taken place, well, then those conversations are great context for the AI to help you write that code.
Nathan: And we can capture that context with voice-to-text models or do multi-modal shit.
I mean, my real dream is to just create an AI simulation of Max or Antonio — GPTonio, you know? I love coding with them because it lets me float along at a higher level of abstraction a lot of the time. I don't know, maybe I'm just being lazy. I should be typing more, but sometimes I feel like when I'm typing, I get in the way. I just love pairing and I love being the navigator and being engaged. So a multimodal model that could talk to me, write code as it's talking, and hear what I'm saying, while I can also get in there and type here and there? That'd be amazing. That's not the same thing as collaboration though.
But it would learn from watching us collaborate. That's my main thing.
Thorsten: You could train the AI based on all the edits that Antonio has done over the last year or something. And all the conversations.
Antonio: Right: why did those edits take place? What was the reasoning? Why is it better to do things this way and not that way? What's the internal knowledge, the implicit knowledge, that's not written down anywhere? We have it just because of shared context. Just sharing that context with the AI.
Nathan: When we started Zed, we always had this idea: wouldn't it be cool to just put my cursor on a character and say, show me that character being written? This idea that there was all this conversation and context tied to all this code, and that we could store it and make it easily accessible. But even that mode gives you a lot of shit to comb through, right? So having a tool that could be really good at distilling that down and presenting it in an intelligent way, that's amazing. And that's a really exciting development for collaboration: bringing all this data, which we could potentially capture but aren't really capturing yet, to our fingertips in a new way.
Thorsten: One last question, a look into the future. Do you know what Andy Warhol said about Coca-Cola? That it's the same whether the president drinks it or you drink it, that there's no premium Coca-Cola and normal Coca-Cola, everybody gets the same. Programming is like that to me. I can write code online with the best in the world and it looks exactly the same on GitHub; my avatar is next to their avatar. It's a level playing field. I can put my website up and it's also thorstenball.com, right next to the websites of large companies. There's no exclusive club. And what I keep thinking about is that all you ever needed in order to program was a computer; even a Raspberry Pi is enough. But now, with AI and LLMs, things have suddenly started to become really expensive. To play in the AI game, you need a lot of money, or GPUs. Do you think about that? Do you see that shift, or do you think it's always been the case that if you wanted to build a scalable web service with millions of users, you had to have money?
Antonio: It might also just be that the hardware costs what it costs today because we're not there yet technologically, right? Things have gotten a lot cheaper in CPU land; there are so many of them now. So I could see a world in which these things become a commodity, because they become so cheap.
Nathan: Think about the late 70s, right? Ken Thompson and Dennis Ritchie, writing Unix on a teletype hooked to a DEC PDP-11 that would reach up to the ceiling of this room I'm in. Talk about democratic access to compute: that wasn't always the case. It just seemed like we were in an era where compute had ceased to be the limiting factor on innovation for a really long time. But maybe compute is the limiting factor again now, and who knows where it's going.
Thorsten: That's beautiful. Nice.