Matt Dupree 0:07
I'm just gonna jump right into it. We've got a lot of slides here, and I want to make sure I don't throw us off schedule. I'm gonna start by ruining the movie Oppenheimer: they build the bomb. There you go, the spoiler's out. So, okay, we know that nuclear technology is a thing. And we know that historically, after nuclear weapons were introduced, there was a lot of concern about their use, basically about the end of the world. So much concern, in fact, that the American Academy of Sciences put together a special panel, with multiple Nobel laureates on it, to weigh in on the future of nuclear energy, nuclear technology, and really the fate of the world. And that panel agreed that basically we were screwed: we had to do some very drastic things or we were not going to survive as a species; our extinction was imminent. And then something happened. The train of history hit a curve. I'm quoting here from a book called Superforecasting: in just a few years, the world went from the prospect of nuclear war to a new era in which many people, including the Soviet and American leaders, saw a glimmering chance of eliminating nuclear weapons altogether. So here we have a situation where experts, the people who were really close to the politics and the science of things, made a prediction that was way off.
Matt Dupree 1:35
And I think there's actually something similar going on right now with LLMs, in software, and in the future of user interfaces. So that's what I want to talk about: how do we avoid being way off in our predictions, in the same way these folks were about nuclear technology? That's basically the topic for the talk today. So, four things. We'll talk about predicting the future generally, why it's hard, and how to get better at it. Then we'll talk about agents specifically, and their near-term impact, or lack thereof, on user interfaces. We'll talk about user interfaces that won't be replaced by AI anytime soon. And then we'll talk about how LLMs can help us escape something I call UI rot, which is just the tendency for UIs to get worse over time. So let's jump into predicting the future. Really quickly, we've got comments, so people can drop things in. Quick poll: I'm curious how folks think human beings in general are at predicting the future. On one extreme, you could say we're always right; on the other extreme, you could say we're always wrong; maybe you think we're somewhere in the middle. Go ahead and throw it in the chat, and we'll take a look at the answers here in a second. But just so folks know where I stand, I'm kind of in the middle on this, actually probably a little further toward the "we're very bad at predicting the future" end. We're so bad, in fact, that there's a metaphor that's become really common for describing humans' inability to predict the future: the dart-throwing monkey. It actually started in a finance context.
There's a book where the author points out that a blindfolded monkey throwing darts at a newspaper's financial pages could select a portfolio that would do just as well as one carefully selected by experts. So that's where this metaphor got started.
Matt Dupree 3:36
And then it was expanded on in Thinking, Fast and Slow, where Daniel Kahneman cites a study by Philip Tetlock, who interviewed 284 people who made their living commenting or offering advice on political and economic trends. And the results were just completely devastating. Reading here: people who spend their time and earn their living studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options. And definitely I include myself in this, right? I'm studying LLMs and AI, I'm trying to make predictions about the future, I'm a so-called expert. But if I fare as well as these political and economic and financial experts, you really shouldn't even listen to me; you should make up your own mind. So in general, people are bad at predicting the future, even experts. And if you want to go deeper into this and figure out how to make better predictions, Superforecasting is an excellent book. If you've not read it, well, this talk is recorded; you can just leave now and go read it. Everything in that book is probably more important than what I'm going to say here. But we are going to get into a few of the things discussed in the book, to give you a sense of techniques for improving your ability to forecast how LLMs are going to impact user interfaces and software in general.
Matt Dupree 4:55
So let's talk about two techniques from the book. One technique is named after Enrico Fermi, the guy who designed the first nuclear reactor. It's Fermi-izing a prediction problem, which is just a fancy way of saying you want to break up the problem. When Fermi was not working on nuclear reactors, he liked to think about questions like this: how many piano tuners are there in Chicago? A lot of people will look at a question like this and say it's impossible, kind of a silly thing to think about. But what Fermi liked to do was take a very difficult question like this, break it up into smaller parts, and then venture guesses at each of those smaller parts. And what he found is that by doing this, he was able to come up with very good guesstimates for the overall question. So you take this larger question and break it up: How many pianos are there in Chicago? How often are pianos tuned each year? How long does it take to tune a piano? How many hours a year does the average piano tuner work? If you guess at these smaller questions, you can arrive at a reasonable answer to the bigger question. And it turns out this is something they talk about in Superforecasting.
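A Fermi decomposition like this is easy to sketch in code. Every number below is my own rough guess, not something from the talk; the point is the structure of the decomposition, not the inputs.

```python
# Fermi-izing "how many piano tuners are there in Chicago?"
# All inputs are explicit guesses (assumptions), multiplied together.

pianos_in_chicago = 100_000       # guess: a small fraction of Chicago households
tunings_per_piano_per_year = 1    # guess: tuned about once a year
hours_per_tuning = 2              # guess: including travel time
hours_per_tuner_per_year = 2_000  # guess: 50 weeks x 40 hours

total_tuning_hours = (pianos_in_chicago
                      * tunings_per_piano_per_year
                      * hours_per_tuning)
tuners = total_tuning_hours / hours_per_tuner_per_year
print(round(tuners))  # 100
```

Swapping any guess for a better one just changes one line, which is exactly why the technique is practical: each small question is far easier to sanity-check than the big one.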
Matt Dupree 6:11
Empirically, if you look at people who are good at forecasting versus people who are not so good, the superforecasters, those who are talented at it, are actually good at Fermi-izing a forecasting problem: breaking it up into smaller pieces, venturing guesses at the smaller pieces, and combining them into a reasonable answer for the whole. And that's something we can do as we think about the future of user interfaces with LLMs; it's actually something we're going to do later in the presentation. But I just wanted to get that technique out there. One fun board game for practicing Fermi-izing is Wits & Wagers. If folks haven't played it, it's basically a bunch of questions like the how-many-piano-tuners-are-in-Chicago one, but you're competing against other players to come up with the best guesstimate. In some cases you can see what they guess, and you can bet on which guesses are best. It's a great way to practice your Fermi-izing and sharpen your forecasting ability. Okay, so that's one technique I wanted to talk about. Another one is actually measuring the quality of your forecasts. This is something meteorologists do all the time: they use something called a Brier score to calculate how good their forecasts are. Here's what that would look like applied to predicting LLM capabilities and their impact on user interfaces. You could make a prediction like: I'm 80% confident that LLMs won't match a standard calculator's performance in the next three months. This is actually true for me. LLMs can do some math, but there are weird cases where it falls off, and so a standard calculator is just a better bet in a lot of instances.
So when you make a prediction like this, you're supposed to assign a probability: how confident are you that you're correct? And then, once the prediction either happens or doesn't happen, you can calculate what is basically the squared error between your probability estimate and the outcome. Supposing that LLMs don't match a standard calculator's performance in the next three months, my squared error would be 0.04: it's 1, because the event I predicted happened, minus 0.8, and you square that. That's a lower squared error than if an LLM actually does match the standard calculator's performance. In that case it's 0, because I'm wrong, and I subtract the 0.8 and square that, so my squared error is 0.64. The Brier score is basically an average of these squared errors across your predictions, the mean squared error, and obviously lower is better.
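The arithmetic above is simple enough to write down directly. Here's a minimal sketch of a Brier score, using the calculator prediction as the example; the 0.8 is the confidence from the talk, everything else is illustrative.

```python
def brier_score(forecasts):
    """Mean squared error between forecast probabilities and outcomes.

    forecasts: list of (probability, outcome) pairs, where probability
    is how likely you said the event was and outcome is 1 if it
    happened, 0 if it didn't. Lower is better.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# "I'm 80% confident LLMs won't match a calculator" means p = 0.8 for
# the event "LLMs don't match a calculator in the next three months".
right = brier_score([(0.8, 1)])  # the event happened: (1 - 0.8)^2
wrong = brier_score([(0.8, 0)])  # it didn't:          (0 - 0.8)^2
print(round(right, 2), round(wrong, 2))  # 0.04 0.64
```

Track a list of these pairs over months and the running `brier_score` is exactly the calibration plot the talk describes, just in numeric form.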
Matt Dupree 8:52
And something really interesting happens when you actually do this: you put yourself in a position where you get deliberate practice at making predictions, and you don't give yourself wiggle room. A lot of times people will make a prediction, and this actually happened with the nuclear panel I mentioned at the outset: they made some predictions, the predictions turned out to be wrong, and then they rationalized them, like, oh, well, I wasn't really that confident. By writing down your predictions, you get the deliberate practice and you can't weasel out of your actual prediction. Even more exciting, you can plot these things over time and see how well calibrated you are on the predictions you're making and the phenomena you're tracking. And another interesting thing you can do is start to track other people's predictions. Sam Altman and Gary Marcus are two folks who are very influential in the AI space, and you can compare your Brier scores against theirs and see whether you're doing better or worse. Maybe there are certain AI influencers or experts you don't want to listen to anymore, because you realize they're basically a dart-throwing monkey, or worse. So Brier scores are really interesting, because they enable this kind of measurement and give you a framework for really assessing the value of your predictions and the predictions of others around you. Okay, so those are some high-level comments on predicting the future.
And if anybody wants to do some Brier-score stuff about LLMs and UI and AI, hit me up afterward. I'd love to make some predictions, put some Brier scores to them, and see who's doing the best. So those are some high-level comments on predicting the future that I think are useful for us right now, given all that's going on with LLMs. Now let's actually get into the object-level stuff about how I think LLMs will impact user interfaces, with the caveat that I am barely an expert, and my prediction may be as good as a dart-throwing monkey's.
Matt Dupree 10:57
So the first thing I'll say is that the biggest threat to user interfaces right now is agents. There's a lot of excitement about agents right now, but the bottom line is, I don't think they're ready. And more than that, because "they're not ready" by itself isn't interesting to say (the response is always, oh, they're getting so much better, so much faster), I'll say: they're not ready, and they're not going to be ready anytime soon. Here are some things that lead me to think that. Here's a nice tweet I ran across a while back from somebody who, hype aside, isn't selling anything; they were just playing around with agents and posted their experience on Twitter. The two key sentences are highlighted in red, but basically they say they've been playing around with agents for the past three months, and the summary is: agents are looping computer programs that have a five to fifty percent failure rate every time you execute a new loop. If you really think about agents that way, it's pretty easy to see how they're a non-starter for something that could replace UI at this point. So that addresses the current state of the quality of agents. But the thing folks will often come back with is: oh, they're getting better, GPT is getting better. And there are actually cases where GPT is getting worse. It's simply not the case that the quality of GPT output is monotonically increasing, all up and to the right across the board. This is an interesting example from the New York Times: a riddle that GPT is presented with. I won't get into the specific riddle now, for time's sake, but basically GPT-3.5 actually gets the riddle correct.
It has to reason about space, about infinitely wide doorways, whereas GPT-4 gets the answer wrong. So that's an interesting case where GPT-4 is not categorically better than GPT-3.5. These language models are just not getting exponentially better across the board; in some cases they're getting worse.
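That "five to fifty percent failure rate every time you execute a new loop" claim is worth compounding out. Here's a quick sketch, under the simplifying assumption that each step of an agent's loop fails independently:

```python
# Probability that an agent finishes a multi-step task when every
# step must succeed. Independence of step failures is an assumption.

def chance_of_success(steps, failure_rate_per_step):
    return (1 - failure_rate_per_step) ** steps

for rate in (0.05, 0.50):
    print(rate, round(chance_of_success(20, rate), 6))

# Even the optimistic 5% per-step failure rate leaves only about a
# 36% chance of completing a 20-step task; at 50% per step, the
# chance is about one in a million.
```

This is why per-step reliability, not raw capability, is the binding constraint on agents replacing UIs: the exponent punishes even small error rates.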
Matt Dupree 13:04
Another example that's really interesting: this paper just came out, I think a week and a half ago. Some folks at Stanford and Berkeley looked at how GPT's behavior changes over time, across GPT-3.5 and GPT-4. What they found is that there are regressions on some tasks, even within the same model. The most interesting one is maybe the question on the top left. They would ask: "Is 17077 a prime number? Think step by step and answer." In March, GPT-4 got this question correct; it had 97.6% accuracy on this task of identifying prime numbers. By June, it was at 2.4% accuracy. So there really are challenges ahead in making these models reliable enough for agents to work. Even as OpenAI iterates on GPT-4, presumably to make it better, we're seeing regressions with certain types of tasks.
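Incidentally, the question GPT-4 regressed on is exactly the kind of thing a few lines of deterministic code answer correctly every single time, which is the reliability contrast the talk is drawing. A plain trial-division check:

```python
def is_prime(n):
    """Deterministic trial-division primality check."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(17077))  # True: 17077 is prime, in March and in June
```

The point isn't that LLMs should do arithmetic; it's that a tool with a 0% failure rate on a task exists, and an agent that sometimes gets 2.4% accuracy on the same task is competing against that.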
Matt Dupree 14:01
And these things really need to be ironed out before we actually use agents to replace UIs. Now, at this point, some people are saying, well, maybe we'll just make the model bigger. Sam Altman doesn't even agree with this. In a talk, speaking with some folks at MIT (this is kind of old news now), he said he thinks the era of these super-giant AI models is actually already over, and that we'll make them better in other ways. And it's funny, because after he said that, we got the GPT-4 leak, and if you look at how GPT-4 is actually constructed, it's a mixture-of-experts model. There's not one giant model they've scaled up to increase performance; instead, they have a bunch of smaller expert models they combine to get better results. I read that as a huge blow to the idea that scale is all you need, and that the path to reliable agents and artificial general intelligence is a sure path. So for these reasons, agents aren't ready, and I don't see how they can be ready anytime soon. The subtitle here, that we need new ideas to make more progress in artificial intelligence, feels right, and it's not clear what those new ideas need to be. So, okay, there's the highlight: agents aren't ready, and they won't be anytime soon.
Matt Dupree 15:38
I think there are many, many UIs that will not be replaced by AI; in fact, the majority of them. So we're actually going to apply the Fermi decomposition technique we talked about for predicting the future to this question: assuming agents don't crush it, or even assuming they do well, in which cases will agents replace UIs? We can break this up in a Fermi way. If the high-level question is "what percentage of UIs will be replaced by agents?", we can break that up and ask: What are the most common types of UIs? What portion of all UIs do these common types make up? And which of those types can be replaced by agents? Even if we imagine that agents become somehow amazing, that there's some sort of breakthrough, I think there are many cases where we still wouldn't want to replace our UIs with them. So if we do this kind of Fermi decomposition and look at our different UI types, maybe we come up with something like this. Lists are very common in UIs. We often want to mutate data in complex ways. We often want to mutate data that has invariants. Spatial metaphors are very common in UIs. And then maybe we lump everything else into "other." Let's look at each of these and see if it makes sense for an agent to replace that UI type. So let's start with lists. Lists are interesting because they were actually discussed in Greg Brockman's TED Talk. He's demoing the ChatGPT plugin, and, I think, he gives it a picture.
It creates a recipe, and then it kicks him out to Instacart with a list of ingredients so he can actually buy the thing the AI is suggesting he cook. And when he gets to this screen (this is a screencap from the TED Talk), he says: see, guys, I don't think UI is going anywhere. I just want to tap the plus button to add more; I don't want to speak, or type out "add more quinoa," to a chat agent. I just want to be able to see this list and modify it trivially. And I think that's right. Working with lists, operating on items in a list, is a huge part of what we do in software. And we don't want to have a conversation about the changes we want to make; we just want to boop, tap a plus button or a minus button or whatever, to make those changes.
Matt Dupree 18:07
And listen, I don't think that'll change, really regardless of how good the AI is. Some people might think, oh, if the AI is really good, it'll just know you want the extra quinoa. See my previous comments about how the path to AGI is not clear; we're pretty far away from the AI being so good that it knows exactly how much quinoa I want. So that's lists. Let's talk about complex mutations. We work with a lot of data in software, and sometimes the manipulations we want to perform on that data are complicated; there are a lot of different pieces to keep track of. A really great example of this, and it's actually an accidental example, is a video of the CTO of HubSpot demonstrating a new chatbot they're building into HubSpot. Let's just watch this really quickly, and you'll see what happens.
YouTube Audio 19:00
So now let's say I want to add a contact. Add contact: ada, seven four, lovelace [dictating an email address]. Ah... that's Ada Lovelace.
Matt Dupree 19:13
Okay, so you see that "ah" right there. He has to mutate data: he's adding a contact to HubSpot. And when you add a contact to HubSpot, or to many other pieces of software, there are a lot of fields, a lot of different things you can do when you add or change that piece of data. He has to try to hold all of those things in his head as he's communicating with this ChatGPT-like interface. That's a very sure sign that you basically need something like a user interface to interact with. You don't really want a chat-based interface, or an agent, to do this; you need a visual representation of what's possible as you change the data, so you can keep it all in your head. So that's complex mutations. I think they're very common in software, and I don't see how agents can take away the visual aspect of that. Next: mutations with invariants. We often want to change data, but there are rules about how we change that data, and we don't really want to engage in a back-and-forth with an agent about what those rules are.
Matt Dupree 20:20
A really easy example of this is our calendars. If we're doing things right, we don't want to double-book ourselves; that's the rule about how the event data works. And we don't really want to ask ChatGPT to schedule something, or attempt to schedule something, only for it to say, oh, you're actually booked for that time. It's much easier to just look at a user interface, see which times are available or not, and then create the appropriate data based on that information. So that's another example of these mutations with invariants, where the visual aspect of what we're doing really helps us understand which data changes are possible as we work with the software. Okay, spatial metaphors. These are very common in software. I'll use the Google Calendar one again; we kind of hinted at this use already. When you look at your calendar, you have a sense of how full your schedule is, how empty it is, how available you are. That's extremely useful, not just for maintaining this invariant of not double-booking yourself, but also for understanding what your day looks like and what things you need to get done by what time. This spatial metaphor is not really going to be replaced by text, or by some back-and-forth with an agent; it's just extremely useful to be able to see things laid out in space.
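The calendar case is a "mutation with an invariant" in the plainest sense: the no-double-booking rule can be stated in a few lines. The names and shapes here are purely illustrative, not any real calendar API:

```python
def overlaps(a, b):
    # Events are (start, end) pairs treated as half-open intervals,
    # so back-to-back events do not count as overlapping.
    return a[0] < b[1] and b[0] < a[1]

def add_event(calendar, event):
    # The invariant: reject any mutation that would double-book.
    if any(overlaps(event, existing) for existing in calendar):
        raise ValueError("double booking")
    calendar.append(event)

cal = []
add_event(cal, (9, 10))
add_event(cal, (10, 11))       # fine: back-to-back, no overlap
# add_event(cal, (9.5, 10.5))  # would raise: overlaps the 9-10 event
```

A calendar grid shows you this rule at a glance before you act; a chat agent can only report the violation after you've described the change, which is exactly the back-and-forth the talk argues we don't want.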
Matt Dupree 21:47
So these are all instances where I don't really see why agents would replace the UI. And if you think about the evolution of humans, the way our brains are set up, the brain structures related to processing space and movement are older than language. So it makes sense that there's a faster neural pathway there. There are a lot of things we really need visual stimuli to work through, and LLMs don't change that. Obviously there are going to be some cases where LLMs do replace user interfaces, and maybe that's represented by this "other" piece in the donut chart. But I think in many, many cases, user interfaces, graphical user interfaces in particular, are not really going to change all that much. Okay, so that's my prediction about UIs that won't be replaced by AI. Last thing. I think maybe the most interesting, or most important, thing LLMs will help us with is escaping this problem of UI rot. What do I mean by that? UI rot is basically the idea that UIs get worse over time. An easy example: let's think about Twitter. Here's an early Twitter UI, how it originally was, and here's the current UI. Maybe people can put this in the comments and we'll come back to it: if you have to compare these two UIs, which one is better? Well, the answer is, it depends, of course. But if you just want to look at tweets, Jack's tweets in particular, the former UI is better, because it doesn't have a bunch of noise, a bunch of stuff you don't care about. And we can extrapolate from this and say that a UI gets worse with every additional affordance added that we won't use.
This is the principle underlying a lot of what we see in UI patterns and different products. For example, Twitter Blue just came out with a reader mode for threads, where they strip away a lot of the noise in the UI and let you easily read threads. They're operating on the principle that when you just want to read, anything besides the text makes the UI worse. Another example, since I know we have a lot of devs in the audience, is Zen Mode in VS Code. I use this a lot. You can turn on Zen Mode and it removes a lot of the clutter in the user interface, so you can just focus on the code. This idea of reducing clutter, of not overwhelming the user, also shows up in progressive disclosure. Going back to the Twitter UI, there's a ton of tabs on the left now, and some of those items have been pushed into this "More" area, where you have to do another click to see what's there. Again, that's progressive disclosure: you slowly introduce the user to more things, so you're not overwhelming them or distracting them with things they don't care about. All of this is built on the idea that we really only want to give a user interface for the things our users are trying to do. Companies understand this, and you see it expressed in these different UI patterns.
Matt Dupree 25:17
But if companies understand this, why don't they just give us UIs that are exactly what we need? There's a hint at an answer in a kind of funny tweet exchange about Postman from a few weeks back. This person logged on to Postman and was inundated with a bunch of features that aren't really characteristic of Postman, and somebody in the comments said, well, did they take VC money? That might explain all this extra stuff. It's a little tongue-in-cheek; VCs get the blame here, but VCs really just accelerate the competitive dynamics that lead to these rotted UIs, UIs that are not as convenient for us. And the competitive dynamic that all software companies are fighting is expressed by Bruce Greenwald, a business professor at Columbia. He says, look, in the long run, everything is a toaster: every technological innovation, including ChatGPT, given enough time, will excite us about as much as toasters do. So companies have to fight this trend toward irrelevance, and they do it in two ways. One way, not so predictably, is that they tap into sources of competitive advantage aside from the software itself: things like economies of scale, network effects, brand, stickiness, or high switching costs. And then, predictably, they also just keep building new stuff. These two things together are what force companies down this path where the UI gets less convenient to use in the way we've been talking about. An easy example is some of the new stuff that's come out with Twitter. I'm never gonna say "X," it's just weird.
Anyway, new stuff that's come out with Twitter: you have these top articles and communities features being added onto the product. It's good that Twitter is innovating, but if you don't care about these things, they bloat the UI. Now, theoretically, Twitter could have split these features out into separate apps, like Twitter Articles and Twitter Communities, and then there's just Twitter. They could have done that, and it actually might have been a better UI; or maybe it would have been better if they'd stuck the features behind the "More" button. But they won't do this, because they want to sell us on these new features. They basically want to spam us. And they're doing it because they need to leverage the distribution they currently have, the network effects and everything, into the additional feature sets they're building. So it's the combination of the need to build new things with the attempt to leverage existing network effects, economies of scale, and brand to feed us into those new things that leads to this UI bloat, this UI rot.
Matt Dupree 28:12
So, oh yeah, I'll skip that slide. Progressive disclosure helps here a little bit. This is an example from HubSpot. HubSpot is a massive product, it's getting even bigger, and they have pressure to just keep building more things. So now they have this progressive disclosure: you have top-level navigation, and you need a few clicks to get to the screen you want. This helps with not overwhelming the user. But the problem is, you need to understand the terminology the product designer uses to categorize the features, and oftentimes the way you think about a feature or a task is different from the way the company thinks about it. I was actually just hit by this the other day with HubSpot. They have a Calendly-like feature, and I wanted to change my calendar settings for some conversations I was having. So I clicked into Conversations to edit my availability for these calendar meetings, and it turns out the setting actually lived under Sales, even though I wasn't really doing any sales and didn't think of my conversations as salesy conversations. So progressive disclosure helps, but it's not really a complete fix for this bloat problem. Okay, so let's talk about LLMs, finally. What's really interesting about LLMs is that they actually give us a way to sidestep this UI rot, this UI bloat problem, because LLMs let users describe what they want to do in their own words, and then we can give them directions on how to do those things within the application they're using. Here's what this would look like in Amplitude. Amplitude is a pretty complicated product. You can see the user just describes their task.
Like, I want to create a graph that shows the canvas sessions over time, and who knows where that lives in the progressive disclosure. They hit enter, and now the prompt turns into this little yellow orb floating on the screen, telling them where to click and showing them how to navigate a UI that inevitably becomes difficult to navigate as companies try to stave off competition. I think that’s a really interesting direction for LLMs, and for how they can complement user interfaces. It’s so interesting, in fact, that I started a company building a product on it. I’m not trying to spam you with ads about it, but if you want to talk to me about it, that’d be great. We’re getting pretty close to the end of the talk, so that’s the last thing I wanted to cover: LLMs and how they can help us escape UI rot. And this is the end of my talk.
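The guided-navigation idea described here could be sketched as a toy. Everything below is hypothetical: the feature catalog, the click paths, and the word-overlap scorer are stand-ins; a real product would use an LLM or embedding search to match the user’s request, not keyword overlap.

```python
import re

# Hypothetical catalog mapping a feature description to the click path
# that reaches it in the app's progressive-disclosure hierarchy.
FEATURE_PATHS = {
    "create a chart of user sessions over time": ["Analytics", "New Chart", "Event Segmentation"],
    "edit meeting availability for the calendar link": ["Sales", "Meetings", "Availability"],
    "invite a teammate to the workspace": ["Settings", "Members", "Invite"],
}

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def guide(user_request: str) -> list[str]:
    """Return the click path whose description best matches the request.

    Word overlap is a crude stand-in for the semantic matching an LLM
    would do; the point is only the shape of the interaction: free-text
    task in, navigation directions out.
    """
    request_words = tokens(user_request)
    best = max(FEATURE_PATHS, key=lambda desc: len(request_words & tokens(desc)))
    return FEATURE_PATHS[best]

print(" > ".join(guide("where do I change my calendar availability?")))
```

Note how the HubSpot anecdote above falls out of this: the user asks about “calendar availability” and is routed to a path under Sales without needing to know the product team filed it there.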
Erin Mikail Staples 30:43
Awesome, awesome. First off, amazing talk. I’m glad that I’ve already seen Oppenheimer, because, spoiler alert, spoiler alert! And just a quick note here: if anybody has any questions from the audience, I’m happy to take them. But I’m really curious, so I’ll toss a question here to kind of get us going and warm up the question machine. I work in the LLM space, with a tool that helps you fine-tune these LLMs, and I can see this beautiful interaction like we were talking about. Where do we see the future of tools like chatbots going? Like, what percentage is going to be program-heavy, and what percentage is going to be more process-heavy?
Matt Dupree 31:26
Yeah, when you say program-heavy, do you mean, like, actually writing code?
Erin Mikail Staples 31:31
Development-wise, like, you have your development lift, and then you have more of your process or product thinking?
Matt Dupree 31:35
Oh, I see. And the percentage split there is with respect to, like, building products, or using user interfaces, or...?
Erin Mikail Staples 31:43
Yeah, like building the end products. Say I launch a new chatbot for my super cool company: custom high-end dog beds.
Matt Dupree 31:55
Sure. So you’re wondering, if you’re working on something like that, how much of it is going to be, you know, developer-heavy-lift kind of programming versus kind of process-oriented? That’s a good question. I have not thought a lot about the replacing-coders thing. A lot of this is... I spend a lot of time talking about user interfaces. I don’t know; I don’t have a good answer. And in the spirit of the talk, about how experts are full of it, it would all be better if experts just said “I don’t know.” So I will say that: I don’t know. I do think it’s an interesting question, though.
Erin Mikail Staples 32:40
That’s a good one, I really do appreciate that. I went to a talk last week, actually, by Brooke Jamison, where they mentioned that tech people shouldn’t name things, because they somehow make everything more confusing with the way that they name things. So I appreciate this theme I’ve encountered. Two more questions from the audience here. This first question is coming directly from the chat, from Brendan: fascinating! How do you think LLMs are doing at data visualizations? Can they replace dashboard UIs?
Matt Dupree 33:15
Okay, so I’m not going to answer that, because we have a speaker who is so well positioned to answer it that he’s going to give an answer way better than mine. His name is Zack; he’s the CEO of Zing Data. Actually, sorry, he’s not speaking on a panel, but he does have a talk later today where he will address that, so check that out. They’ve spent a lot of time on data visualization and querying data with LLMs, so he’s the one you want to talk to.
Erin Mikail Staples 33:41
Yeah, well, first off, sneak peek, so you’ll have to stay tuned there, Brendan. And the last question from the audience here: in the spirit of keeping your predictions honest, what confidence level do you give your pessimism about AI replacing those classes of UI, for example list modification and so on? And I say this as someone who uses these when I’m stuck in my own front-end development. Copilot is my savior.
Matt Dupree 34:06
Yeah, yeah, good question. I’m gonna give a number, and it’s gonna be a little weird, because this is actually another property of people who are good at forecasting: they don’t say 80%, they say 84 or 87. They’re very nuanced in their predictions. So, apologies for this: 86% is how confident I am in that. Hopefully my investors are okay with that level of confidence.