
The Evolution of Serverless (A Fireside Chat)

Serverless has come a long way since its beginnings and continues to evolve in surprising ways. Tanushree Sharma of Cloudflare and Sean C Davis of Netlify share their experiences and observations of what has changed and what we can expect from serverless technologies.

Tanushree Sharma is a Product Manager at Cloudflare building Workers, Cloudflare’s serverless compute offering. Her role focuses on unlocking new use cases for Workers and solving developer pain points, with an emphasis on building a great developer experience and ensuring that performance and reliability scale. Outside of work you can find her doing yoga, trying to walk her cat (key word: trying), or out scouring green spaces in New York.

Transcript

Sean C Davis 0:10
And joining me for this fireside chat is Tanushree Sharma. Tanushree is a Product Manager at Cloudflare building Workers, Cloudflare’s serverless compute offering. Her role focuses on unlocking new use cases for Workers and solving developer pain points, with an emphasis on building a great developer experience and ensuring that performance and reliability scale. Outside of work, you can find her doing yoga, trying to walk her cat (the key word there being trying), or out scouring green spaces in New York. Please welcome Tanushree.

Sean C Davis 0:51
Yes, really excited for this. And real quick, before we dive in, for all of you in the audience: because we’re kind of compressing the time we would usually have for presentation and Q&A, if you have questions, just keep the conversation going in chat. I’ll check every couple of minutes and try to filter those into our conversation. As we were talking earlier, Tanushree, we were saying that to set this up we should probably start with a quick history lesson. So can you start with: how did serverless computing emerge? How has it changed life for developers?

Tanushree Sharma 1:27
Yeah, yeah, that’s a great question. I actually love diving into the history a little bit before getting into some of the details of serverless and all of the fun things that people are building, because it really helps you appreciate how far the industry has come in not a lot of time, over the last maybe two or three decades. It also helps you appreciate how much more accessible software development is now compared to a couple of decades ago. So let me walk you through some of the history, and walk folks that are watching through as well. It’s kind of crazy when you think about it linearly, how things have changed, and how there’s been a huge shift in how applications are both developed and deployed. I think the TL;DR, and the thing to take from this, is that it really boils down to this: at each step of the way there have been more and more abstractions over managing the underlying infrastructure, which has led to a focus on just building and deploying the applications themselves. So less focus on infra, more focus on development, which is the fun part, which is what we’re all here to do, right? I think where this really started is the late 90s, early 2000s, where, in order to spin up a website, you would have to buy physical servers to host content on, or maybe rent some dedicated servers that lived in a shared space. But then going from just a server to a web application was not simple at all. As a developer, you’d have to think about things like the specific hardware specs that you need for your use case, things like networking, bandwidth, internet connectivity; all of those things had to work in order to provide a good end experience for users. Things like, how do you scale if there is sudden growth? Or maybe the answer is you don’t scale at all, because you just have this fixed amount of space, and you cross your fingers and hope for the best. And also, if you’re a business that’s renting a rack somewhere, you want to be able to understand: what is the physical security of this building, of this place that I am renting? How do I make sure that no one is going in and tampering with things? Those are things that, these days, don’t even cross your mind as a developer, but there was so much overhead involved when things first started out. And then from there we shifted into cloud computing and more of an Infrastructure as a Service mindset, which is very similar to buying your own servers, except you’re renting virtual machines from some sort of managed provider that manages the hardware, rather than owning the physical hardware yourself. So it’s easier to get something spun up, there’s less overhead, you’re not interacting with that hardware directly, but you’re still managing the infrastructure. You’re still taking care of scaling and any of the operational tasks that I mentioned just before. And it’s kind of funny when you think of “I just want to build a website” back then, and what an app developer had to think about, what they had to do. You need to be a developer, but you also need to be a network engineer, able to figure out all of the configuration, all the DNS config. How do I make things work? How do I make sure that all the plumbing goes through?
And you also needed to know the hardware yourself. There was just so much extra to think about; you needed to think about everything that sits below the application layer. So either you had to go become a pro or figure out how to hack things together in some way, or you needed a team of people just to build a simple web application. When you think about that today, you’re like, whoa, things are so easy. I can just do this myself. I can have something up and running in less than 10 minutes. Like, Sean, you’re at Netlify, and your DX is super easy, super quick. You don’t need a team of people; you just need somebody with an idea, somebody with some designs in their head, to then go put that on paper, or put that into their keyboard, and make it happen. So it’s kind of cool to see how some of the skills that are needed have changed throughout the years. And then I would say... yeah, go for it.

Sean C Davis 5:41
So I spent a number of years, before Netlify even existed, working for a small agency. I mean, we built some complex business applications, but for the most part we were building marketing websites of various sizes. And it still was like, well, we kind of want this go-to tool set for everything that we’re building. So everything was Ruby on Rails, which meant you had this content that was edited, what, maybe once a week, or if they had an active blog, then maybe once a day or so. But every time somebody was visiting the site, we were running the server, we were hitting the database. And so, as a small team of people, we needed folks who were experts in front-end engineering, but we also needed to know how to manage all of that infrastructure, and there was the risk of everything going down. So yeah, when we got into this era of having these tools in place to be able to offload the management of that infrastructure to some of these providers like Netlify or Cloudflare, it’s like overnight that’s one less thing we have to worry about, and we can have more confidence that the website is going to be up. We can spend our time more on the business logic. And I feel like that really changed things in terms of my quality of life as a developer building those, you know, everyday websites.

Tanushree Sharma 7:12
Yeah, I bet you also felt pain not only in the development flow, but also, if somebody wants their blog updated, you know, once a day, how do you actually deploy that out when you’re managing your own infrastructure? And how do you make sure that things are scalable and that you’re not adding extra complexity in there that’s going to have more overhead? Do you want to touch on that a little bit too? Was that challenging?

Sean C Davis 7:36
Yes. I mean, we generally wouldn’t build and deploy unless we had actually changed code. But the complexity got shifted, for the most part, especially with those content-heavy sites. We were saying, hey, do we really need to be hitting this database every time? And so the complexity was largely shifted to caching logic, and that was really all on us to determine how and when to set that cache and when to break it. And I haven’t even thought about that kind of stuff in years, because it’s just handled for me today.
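
For readers who never had to hand-roll that layer, here is a rough sketch of the kind of caching logic Sean is describing: serve rendered pages from a cache and invalidate entries when content changes. The store, the names, and the one-hour TTL are illustrative only, not his actual setup.

```javascript
// Sketch of hand-rolled page caching: serve rendered pages from an
// in-memory cache and invalidate entries when the underlying content changes.
const pageCache = new Map(); // path -> { html, renderedAt }
const MAX_AGE_MS = 60 * 60 * 1000; // illustrative one-hour TTL

async function getPage(path, renderFromDatabase) {
  const cached = pageCache.get(path);
  if (cached && Date.now() - cached.renderedAt < MAX_AGE_MS) {
    return cached.html; // cache hit: skip the database entirely
  }
  const html = await renderFromDatabase(path); // cache miss: hit the database
  pageCache.set(path, { html, renderedAt: Date.now() });
  return html;
}

function invalidatePage(path) {
  // Call this from a CMS hook whenever an editor updates content.
  pageCache.delete(path);
}
```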

Tanushree Sharma 8:19
Yep, totally. I also want to touch on the next thing that stemmed from the shift into cloud computing, which was moving from Infrastructure as a Service to more of a Platform as a Service technology. This is another layer of abstraction on top of Infrastructure as a Service, so things like servers, storage, networking, and operating systems are offloaded onto the Platform as a Service provider, and that gives developers benefits. If you’re not managing infrastructure directly, you don’t need to worry about things like software updates or patching. You know those scary CVEs that come in that you need to patch overnight? Those are things the platform provider would handle. Same with scaling: for the most part there’d be some built-in scalability features. If you’re hosting an e-commerce website and you need to handle a Black Friday sale, you don’t need your team working days and nights up to Black Friday to make sure things scale. And I think the other cool thing that came from this Platform as a Service model was that billing really changed here, right? It was less of an upfront “predict what your usage is going to look like and pay for it” and more of a pay-for-what-you-actually-use billing, which made development so much more accessible to more businesses and more people, especially ones that weren’t big corporations with the investment ready to spend.

Sean C Davis 10:05
I will say, it was interesting, because, thinking of that change: if you had this predetermined amount of computing power on the box that you were renting or using, then to scale up it was like, well, we’ve got to shut everything down, and then we’ve got to commission a new server, or whatever the process was for the service that you were using. And it’s interesting, because that shift to meter-based billing, I’ve found it to be a double-edged sword, depending on the type of project I’m approaching. For most of them it’s like, okay, the usage is going to be so small that the cost is going to be nothing, or nominal. But I’m curious, on that point: what’s your approach to being able to predict the cost for higher-scale or more complex applications?

Tanushree Sharma 11:15
Yeah, that’s a good question. So I think we typically see two varieties of usage patterns. Either you have, say, a blog and it gets fairly consistent traffic, and then you don’t really need to think hard about what’s happening; it’s more of “how many users do I expect to see a day?”, and then you can manage costs, whether you’re looking at serverless or some other technology. That’s obviously the simpler case. And then, more broadly, especially with e-commerce websites or things that are seasonal, you see that spikiness in traffic, and as developers you need to be able to handle that load and also handle predictability in costs. I talk to enterprise customers all day about Workers, and they’re like, how do I know how many requests or how much CPU time my Worker is going to be using, so I can predict my costs? And it generally tends to be: run a POC at small scale, figure out how much compute your application actually needs, then try to scale that up and predict, and have some margin of error built in. No, it’s not going to be perfect, that sort of thing. I also know a lot of platforms have limits baked into the platform as well, so just in case you ship code that accidentally has some sort of bug, you don’t get charged for runaway CPU time or duration for your functions. So take advantage of that and make sure you have guardrails in place, because that is a scary side of serverless: if something goes wrong, you’re not the one managing it. It’s not that your server goes down; it’s that you’re just racking up fees. So yeah, those are the approaches I would take, generally: ballpark estimates, and then making sure you have guardrails as well.
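
To make the "ballpark estimate" idea concrete, here is a minimal back-of-the-envelope sketch. All of the rates and usage numbers below are placeholders, not any provider's real pricing; substitute your provider's published rates and the compute figures you measured in your own small-scale POC.

```javascript
// Back-of-the-envelope cost estimate for a usage-based serverless platform.
// Every number here is a placeholder -- plug in real published pricing and
// your own POC measurements before relying on the result.
const requestsPerMonth = 50_000_000;      // from traffic projections
const avgCpuMsPerRequest = 7;             // measured during the POC
const pricePerMillionRequests = 0.30;     // placeholder rate
const pricePerMillionCpuMs = 0.02;        // placeholder rate

const requestCost = (requestsPerMonth / 1e6) * pricePerMillionRequests;
const cpuCost = ((requestsPerMonth * avgCpuMsPerRequest) / 1e6) * pricePerMillionCpuMs;
const headroom = 1.25;                    // roughly 25% margin of error, as suggested above

console.log(`Estimated monthly cost: $${((requestCost + cpuCost) * headroom).toFixed(2)}`);
```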

Sean C Davis 13:00
For sure. Because at that large scale, there are types of events or situations that would spike the cost that could be really good, like your site went extra viral today, and I’m happy to pay that money because it means more sales for me. But then there’s the other side of it, the DDoS attacks and things like that, where you want that visibility and those hard limits in place so that you can protect against unnecessary and unexpected spending.

Tanushree Sharma 13:34
Yeah, and I also think about things like alerts, where nothing actually takes an action on behalf of your running code, but just: tell me when something crosses this threshold, because maybe something’s wrong and I need to go check on it. Those are also really helpful signals to configure. It’s more of a sanity check, rather than taking action on something that might actually be legitimate traffic.

Sean C Davis 14:01
For sure, definitely.

Tanushree Sharma 14:03
Yeah. The other thing I’ll add around Platform as a Service and spikiness and handling scaling is that, with these types of platforms, you still need to answer questions like: how does my compute talk to my database? And what do I need to set up so that my application doesn’t crash at different levels of scale? So there are some problems that still really haven’t been solved by Platform as a Service providers and technologies. If I think back to the first time I deployed an application, Heroku has a very soft spot in my heart. One of my first software projects was deploying a Flask app to Heroku, and it was really cool and satisfying: you hit the button, you see the build logs, you’re waiting for it to go, and then it’s successful, and then you can go hit it, and it’s accessible. So Heroku is an example there, and Google App Engine was one of the first big products that made PaaS popular. Do you have any similar stories or sentiments to share, like your first time deploying something where you didn’t have to worry about infra?

Sean C Davis 15:09
Yeah, I very clearly remember two key points in my career, and the first one was Heroku. We were using different services over the years for primarily Ruby on Rails applications, mostly DigitalOcean as the service provider or something similar, and the build and deploy wasn’t just handled by DigitalOcean; I recall needing additional tooling that would do all of the compiling and building, run the tests and everything, and then push it up. And so Heroku was the first one, still running Ruby applications, where I was like, you mean I can just git push and then it’s going to do everything for me? This is amazing. But those were still my Ruby days, and so there was still a bit of: you have this running thing and it could go down for whatever reason, because it’s running all the time. And so it was years and years before I worked for Netlify. Netlify was my other milestone, where I was like, oh, I can do the git push thing again, and then it deploys, but also it deploys static files for me, and now I can just kind of walk away and nobody’s going to wake me up in the middle of the night. This is amazing.

Tanushree Sharma 16:36
Yep, yeah. It’s kind of crazy, you have those magical moments where you’re like, this technology is so cool, it’s a game changer. And I felt that the first time I deployed a Worker as well. You just hit a couple of buttons on the dashboard, give it a URL, and that’s it, and then you have a functioning Hello World type application. It’s really cool to see some of that evolution. The last thing I’ll touch on... oh yeah, no, go for it.

Sean C Davis 17:08
I was curious for you to talk a little bit about Workers in particular, because that’s one area I just haven’t played with much. I hear a lot about Cloudflare Workers, but I haven’t dug in. I’m just curious, could you give a quick intro to them?

Tanushree Sharma 17:24
Yeah, totally. So Cloudflare Workers run on Cloudflare’s network, and we run a distributed network. Cloudflare’s bread and butter was really our CDN and caching services, and security came on top of that. And when you’re running a CDN, you want to be distributed; you want to be as close to your end users, or as we tend to call them, eyeballs, as possible, to be able to improve latency. And so we had this huge network, and we had a lot of compute that was available, and that’s how the inception of Workers came about. It started off as more of a CDN augmentation: how do I make the WAF programmable? For example, say I’m a customer, and based on the location of my users, I want to be able to serve them a slight variation of what my WAF rules are configured as. As a company we found it hard to program all of that into the product itself, and that was the inception of Workers, where it started off as a really lightweight way to upload some JavaScript that has a little bit of logic and then affects some of our security-focused systems. But once you can do that, you’re like, what is actually stopping people from hosting on our network itself? We have all this compute, we’re distributed, we’re located close to users. Why can’t people actually build full-stack applications, with front-end applications, on Workers, or on the front-end counterpart of Workers? What’s actually closest to Netlify is our Pages offering. So that’s a little bit about how it came about. I think when you think about serverless platforms, there’s this big bucket of serverless, but it gets a little bit nuanced as you get into it. When you compare us to a provider like AWS Lambda, Lambda is kind of an abstraction on top of containers, whereas Cloudflare Workers run on the V8 engine, which, if you’re using a Chrome browser to watch us today, is the same thing that Chrome uses behind the scenes. And so some of the specific architecture decisions on the different platforms have impacted the use cases that they’re good for, or the things that are a little bit harder on the platform that you need to work around. So yeah, that’s a general high level on Workers.
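
For reference, the "upload some JavaScript with a little bit of logic" shape Tanushree describes looks roughly like the minimal Worker below, written in the modern module syntax; the path-based routing is just an illustration, not a recommended pattern.

```javascript
// A minimal Cloudflare Worker: the platform invokes fetch() for every
// incoming request, and the code runs in V8 isolates across the network.
export default {
  async fetch(request) {
    const url = new URL(request.url);
    // A little bit of logic, e.g. vary the response by path.
    if (url.pathname === "/hello") {
      return new Response("Hello from a Worker!");
    }
    return new Response("Not found", { status: 404 });
  },
};
```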

Sean C Davis 19:51
Yes. Okay, great, thank you. I feel like that really sets the context for some of these other topics that we’ll get into, for sure.

Sean C Davis 25:00
Right. Have you seen that evolution as well? And if so, how have you handled it?

Tanushree Sharma 25:07
Yeah, I think the big benefit with serverless is composability. You can have a piece that starts very simple and then you add on to it. You’re like, I want to add maybe a notification system onto my web app, so I can spin up a separate serverless function that then talks to the one that has the core application logic. So I’d agree with you on starting simple: don’t try to overcomplicate or over-engineer something when you’re first starting out. Just focus on the customer use cases, focus on exactly what you’re building, build for that, strip away all of the complexity, and then, as you get more ideas, more fun ideas, and more requests from users, start baking some of that in. And figure out how to do it in a way that trades off between “I have one serverless function that does everything,” versus teeny microservices where I have a hundred of them, versus maybe ten microservices that each handle a chunk of logic. So being able to balance some of that, and figure out your development practices, what services make sense to couple together versus what makes sense to split out, both from a development standpoint and a risk standpoint when you’re deploying; I think some of those things factor in as well. But I fully agree on starting simple. The benefit of serverless is that the sunk cost is low, so you can make changes and re-architect as needed, especially when compared to something like a monolithic application.
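
As a loose illustration of that composability, the sketch below has a core function delegating notification work to a separate serverless function over HTTP. The endpoint URL and payload shape are hypothetical, made up purely for illustration.

```javascript
// Sketch of composing serverless functions: the core function stays small
// and hands notification work to a separate function over HTTP.
// The notification URL and payload shape are hypothetical.
export default {
  async fetch(request, env) {
    const order = await request.json();
    // ...core application logic: validate, persist, etc. ...

    // Delegate to a separate serverless function that owns notifications,
    // so the two pieces can be developed and deployed independently.
    await fetch("https://notifications.example.com/send", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ event: "order.created", orderId: order.id }),
    });

    return Response.json({ ok: true });
  },
};
```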

Sean C Davis 26:41
That’s a great point, that the sunk cost is low. For sure, it’s much easier to get in and build a proof of concept. But on that note, what are some of the challenges that you’ve seen, even as far as we’ve progressed in recent history?

Tanushree Sharma 26:56
Yeah, I think the three broad challenges that I tend to hear from customers are along the lines of vendor lock-in, standards, and then the last thing that comes up quite a bit, observability. So I’ll touch on each of these a little bit. The vendor lock-in part is kind of interesting; it’s a real trade-off, because as a developer you’re excited to take advantage of primitives that the provider you’re using offers, but then you have the trade-off between portability between different providers, or maybe to an on-prem environment, and taking advantage of some of those building blocks that might be unique to that specific provider. This isn’t a concern when you’re running something in a container, because it’s a self-contained, portable unit that you can typically run anywhere: build once, run anywhere, that sort of mindset. But here you’re taking a bet on a provider and balancing concerns about portability. So I don’t have a great answer to this other than: what’s your use case, what are you building, and what are you willing to take bets on, versus what makes sense to step back and hedge between platforms? But I will say there are some really cool primitives that lots of providers have. One I can point to for Cloudflare is a product called Durable Objects. That’s a concept that’s unique to Cloudflare, as far as I know. They’re one of our compute products, and a Durable Object is strongly consistent data storage with compute attached on top of it. We have lots of customers building really cool collaborative editing tools, interactive chat tools, things for real-time environments, and Durable Objects come in really handy there, just to abstract away some of the actual coordination layer; we take that complexity on for you. But if you’re a user and you’re concerned about vendor lock-in, that can be a barrier to being able to use some of those, so it’s worth thinking about hedging, or about other providers that you might want to turn to. So, a balance and a trade-off, depending on what people are building. The other one I’ll mention is open standards. It relates to the first point, but it’s a slightly different angle: it’s more around standards for the JavaScript runtime itself. There’s a group called WinterCG, which Cloudflare, Netlify, and a suite of other providers are part of, that focuses on helping improve interoperability and create standards for web APIs. So we’re excited to be part of that, and I always think that’s the right direction for the industry to head in, more standardization there. And then the last thing, and I’m sure you’ve come across this as a dev as well, is that observability and debugging become tough with serverless. You can’t SSH into a server to view the logs, and you can’t access the file system of a VM directly. I think some of the challenge comes from the fact that it can be a black box, unless the service provider is giving developers the tools that they need for visibility.
So I think there’s been a lot of improvement over the last few years on this, and I know personally for Cloudflare Workers that we still have a long way to go, but it’s something that’s a little bit hard and a little bit scary too, as a developer.
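
For a sense of what "strongly consistent storage with compute attached" looks like in practice, here is a hedged sketch of a Durable Object following the documented class-based shape. The counter behavior and the storage key are illustrative only, not a production pattern or Cloudflare's own example.

```javascript
// Hedged sketch of a Durable Object: all requests routed to a given object
// ID are serialized onto a single instance with its own transactional
// storage, which is what makes it useful for real-time coordination.
export class Counter {
  constructor(state, env) {
    this.state = state; // storage scoped to this one object
  }

  async fetch(request) {
    let value = (await this.state.storage.get("count")) ?? 0;
    value += 1;
    await this.state.storage.put("count", value);
    return new Response(String(value));
  }
}
```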

Sean C Davis 30:25
100%, yes. And on that last point, that’s also where Netlify is putting a ton of energy right now: more visibility, more insight into what is happening, because there’s just more work being done on that side of the platform than in the past. And I wanted to mention the lock-in piece too, because that’s always been a really interesting conversation to me. I spent about a decade in agency before moving over to the product space, and in that world lock-in was definitely concerning to me, especially when shifting into the JAMstack JavaScript world and kind of leaving Ruby behind. There was so much changing all the time, and it was like, well, this framework is cool to use, and it seems like it works really well today, but what’s it going to be next week? I don’t know. And it has settled maybe a little, but even year over year there might be another one that pops up and starts to get attention for one reason or another. And so, as an individual, if you look at me being the person who’s building something on behalf of someone else, the way that I’ve talked to my previous clients about it was more like: I’d kind of rather be locked into the platform than into the framework, or into the various little tools that I’m using that are closer to me, because those are the things that seem to be changing more rapidly. And part of that conversation is, well, you have to get locked in somewhere, to some extent. We have to make decisions; we’re just trying to make the best decisions and make the appropriate predictions about where we’re most likely to migrate first, based on the conditions we know today. And now, if I put my Netlify hat on for a moment, I think the responsible thing we can do, even if we were to say, okay, the place you’re going to get locked into the most is the platform, is that it’s then up to the platform to recognize where that lock-in might happen, and also to make sure we’re doing what we should be doing to help you decide if we’re the right choice, and, if we are the right choice, to make it as minimally painful as possible if it comes time for you to switch. It’s a hard place to land, but I feel like it’s such an interesting part of the conversation: where are you choosing to get locked in? Because it’s somewhere, and to what extent are you doing it?

Tanushree Sharma 33:15
Yeah. And as you said before, there are a hundred different ways to build something, which means that, depending on the framework you choose and the storage you choose, things can really go one of many ways. That’s a good thing, because you have the ability to move around; let’s say a specific framework isn’t working well and isn’t scaling well anymore for the type of application you’re building, you can switch. But also, yeah, it makes it hard to keep up and to figure out what’s the right thing to place a bet on.

Sean C Davis 33:47
Sure, for sure. All right, well, we have just a few minutes left. I’m curious to get your take on where you see serverless technology going in the next couple of years, and it’s hard to ask that question without also hinting that it probably has something to do with AI in some way, right?

Tanushree Sharma 34:07
Yep, that’s the big buzzword in every industry. But I would say, especially as a developer, it’s such a great time to be a developer and to play around with interesting tools. The big thing I always think about, when we think about the future of software development and serverless, is that for any company, big or small, anyone that’s building any project, time and speed are your biggest assets, right? The faster you can build, the faster you can go to market, the faster you can get the idea that’s in your head onto the screen, the more fulfilling and rewarding it is. That’s what people are striving for. So I’m going to give some answers, but I’m heavily biased on this question, just based on Cloudflare’s approach, and then I’m curious to hear how things work at Netlify as well. A couple of things I’m really excited about: we’ve been playing around a lot with AI code generation. That’s going to be a really fun space to be in, where you have an idea in your head and you can describe it really well, and then let the model actually deal with all the complexities, all the nitty-gritty pieces, and think about how fast you can move in that world. So I’m excited to see more of those types of applications being built to enable developers to do that, or even at least to get some templates started, so that you have something to start on, and then you build in your customization, or all the fun, wacky things that you want. So I’m excited about that piece. We also launched an AI inferencing product at Cloudflare a couple of months ago. It’s called Workers AI; essentially, “serverless AI” is the way we describe it, so you’re not renting out your own GPUs for inference, but relying on us as a managed provider. Lots of other cloud providers have similar concepts as well. So that’s been cool, and we’ve been seeing some really interesting applications spun up from there. One recent example: we had an intern who just joined the team and built an application where you put in a web page, it goes and scrapes the URL, gets the markdown, and then you can give it a prompt like “give me a summary of this page” or “quiz me on this website.” They just went and built that out. It uses Workers, it uses Workers AI, and a couple of other products in there. So it’s really cool to see some of those use cases coming out. The other thing I’ll touch on quickly is that Cloudflare’s general approach, in terms of thinking about the future and what to build, has been really about minimizing the complexities and challenges developers have to face that don’t contribute directly to business value. It’s: how do we make building an application dead simple, and what are the primitives we can give you for that? And also, where can we as a platform be opinionated, where maybe the network is making decisions rather than a human? Things like where a function should be placed are something we think we can actually do a better job at than you as a person can, because we have a ton of data. So those are things we’re working on making better, along with connectivity from a serverless function out to the rest of the world. So, how do you connect to databases in a safe and performant way?
Those are things that we’re thinking a lot about and kind of focusing our product muscles on.
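
As a rough, hedged sketch of the kind of app the intern built, the outline below fetches a page, strips it down to text, and asks a Workers AI model to summarize it. The binding name (`AI`) and the model identifier are assumptions for illustration, and the real project reportedly converted pages to markdown rather than this crude HTML strip; check the Workers AI docs for what is actually available on your account.

```javascript
// Hypothetical summarizer Worker: fetch a page, reduce it to plain text,
// and ask a Workers AI text model for a summary. The model name and the
// "AI" binding are assumptions; configure your own binding in your project.
export default {
  async fetch(request, env) {
    const target = new URL(request.url).searchParams.get("url");
    if (!target) return new Response("Pass ?url=<page to summarize>", { status: 400 });

    const page = await fetch(target);
    const text = (await page.text()).replace(/<[^>]+>/g, " "); // crude HTML strip

    const result = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
      messages: [
        { role: "system", content: "Summarize the following web page." },
        { role: "user", content: text.slice(0, 6000) },
      ],
    });
    return Response.json(result);
  },
};
```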

Sean C Davis 37:34
Yes, that’s great. And honestly, as you were talking, I was thinking, okay, where are we going? And there are so many parallels there, which means the industry is definitely moving in a direction, which is really great. We’re also exploring generative AI options, and looking at AI more as an aid, so not always something that’s necessarily going to give you content or code that you’re going to use in production; we’re definitely still exploring that. But one of the features we released recently was a button next to your build log, if your build fails, that says: why did it fail? It reads the log and tries to very quickly tell you what went wrong and what you should do. Because when a build fails, it can be a really stressful part of the flow for a developer; you’re back to the black box thing, and if you don’t have the right logs, it’s hard to know exactly what went wrong. So that’s been a really interesting project to pursue. And then the other point, also similar to platform primitives, is that we’ve known for years that the framework of choice is going to change. It’s going to ebb and flow; things are going to come in and out of style or preference, and not necessarily move in any predictable way. But if we look at the last few years, when a new framework arrives on the scene, it often brings some new idea or some new pattern with it, which is particularly attractive to developers for one reason or another. And so what I’m seeing happen at the platform level is being able to even that out across all of the frameworks in the ecosystem today, so that if framework A lands on the scene tomorrow and all of a sudden devs are all over it, and it brings some new pattern, we can say, okay, great, how can we build a primitive that can transform that pattern into something that works on our platform? But then, if that pattern isn’t also supported in framework B over here, there’s still a method for someone working in that framework to get that type of functionality on top of the platform; it’s just that the code they write is going to look a little bit different. And so for some of the more powerful, or maybe more feature-rich, frameworks out there today, developers can feel like they have all the features of that framework at their disposal, and they just work when they deploy them. But for a framework that might not have as many of those features available, you can still make all those things work; you’re just going to do it in a little bit of a different way. Very cool. Yes, yes. Well, Tanushree, thank you very much. This was a really, really fun session for me. Appreciate your time and your thoughts.

Tanushree Sharma 40:52
This was great. Thank you for having me. I hope the audience at least learned a thing or two, and if nothing else, had a good time chatting through this with you, and also got different industry perspectives on how you see things versus what things are like at Cloudflare. So yeah, thank you so much for having me on this.

Sean C Davis 41:09
Absolutely, 100%, I feel the same way.
