Video details

The Past, Present, and Future of Serverless - Jeremy Daly

Serverless
07.17.2022
English

Since the introduction of AWS Lambda in 2014, “serverless” has exploded into a vibrant ecosystem filled with a multitude of frameworks, services, tools, providers, and practices being adopted by startups and enterprises alike. In this session, we’ll look at the origins of serverless and where we are now. Then we’ll explore how companies are leveraging serverless, the evolving tools and processes they’re using to do so, and the emerging architectural patterns that are changing the way we build software.

Transcript

So thank you for showing up. I'm excited to be here. I was at ServerlessDays New York in 2018, and it's pretty exciting to see how far it's come. So today I want to talk about the past, the present, and the potential future of serverless. I'll start with what we're going to talk about. We're going to look at how serverless has evolved over the last seven years. We're going to take a look at how developer workflows and responsibilities have changed. We'll look at the tools, services, and patterns we've adopted over the last seven-plus years, and then we'll take a little bit of time to look at the future of modern application development. So some pretty exciting things happening. Just quickly, about me. As Rob mentioned, I'm the GM of Serverless Cloud at Serverless Inc. I do a lot of consulting with companies. I've been doing this for a very long time: started with AWS in 2009 and then Lambda in 2015, just after it went GA. I blog, speak, have some OSS packages, I do a newsletter and the Serverless Chats podcast, and I am working on a [email protected] So if you're interested in NoSQL modeling, check that out. And I'm an AWS Serverless Hero, which is a great program; it lets us get some early access into things. Unfortunately, they do not let us help with the naming of new services, so it would be nice if they eventually extended it to do that. So I love to start all of my talks with a quote from Werner Vogels, usually "Everything fails all the time." I think that gets us in a good headspace when we're talking about serverless applications. But I had him on my podcast not too long ago, and he said, "I still get annoyed by every piece of AWS that is not serverless." And that brings up a really interesting thing, because his definition of serverless is probably different than yours, different than mine. Everybody thinks about it a little bit differently. So I know we're at a serverless conference.
Hopefully all of you are familiar with serverless to some degree, but I want to talk a little bit about what it means to be, quote unquote, serverless. So the ones that we're all, I think, super familiar with are things like no server management, right? We want to make sure that we don't have to install packages or install updates to the OS or any of that sort of stuff. Then we want flexible scaling. Now, this has changed a little bit. It used to be scale to zero, right? But that's not quite what we get anymore. For some of these services, a lot of things are provisioned; you still have to pay sometimes for things that are running in the background. Pay for value is, again, another one of those things. It used to be pay every time something ran. That has evolved a little bit. Now we really do think about paying for value, whether that's storage in a database or, again, pre-provisioning certain things. High availability is, again, top of the list. We need to make sure that everything we run is automatically and highly available for us. And then the one that a lot of people don't always put at the top of the list, but I do, is event driven. I still believe very much that serverless is something that gets triggered as a result of something happening, right? This is where we'll talk about serverless containers a little bit later on, and where some of that stuff fits in. So before we get into where we are now, I want to get in our time machine, go back, start at the beginning, and look at how things have evolved over time. I use 2014, when Lambda was introduced, as the before-serverless or pre-serverless era. And if you look at some of the tools and services and things that were built before Lambda came out, you have things like S3 way back in March 2006; SQS was shortly thereafter.
A couple of years later we got CloudWatch, then we got SNS, then there was CloudFront, Route 53, CloudFormation, and then DynamoDB, and then finally Kinesis in December of 2013. And the reason why I mention all of these services is because these all came out well before Lambda, and they're all sort of core serverless services. We think of these as serverless services. So in 2014, AWS introduced Lambda. And this was a completely different way to start stitching all of these different services together. It was really interesting when we first got Lambda: you could react to S3 bucket uploads and things like that, you could read off of a Kinesis stream, but you still couldn't respond to an HTTP event, for example. API Gateway actually didn't exist when Lambda first came out. So over the course of the next two years, I call these sort of the wonder years, because we were wondering, what can you do with this new paradigm? Over those two years, we got some interesting things. API Gateway came out in 2015. That was exciting, because now you could actually run in reaction, or in response, to an HTTP event. So we could do APIs, we could do websites, we could do some interesting things. That led to the creation of the Serverless Framework; I think it was at re:Invent 2015 that they introduced the Serverless Framework. Now you could start building web apps with serverless. Jamstack was coined shortly thereafter by Matt Biilmann. Then VPC support didn't come until February of 2016, so we couldn't even connect to a database. If you had a relational database, you couldn't connect to it from your serverless application; you just couldn't connect to a VPC, so it was kind of limited in what you could do. No caching and things like that. And then other players started to get in. So Azure Functions in March; Athena, which is a really great service that AWS has; SAM came along in 2016.
We got Step Functions, so we could do orchestrations. And then the Serverless Application Lens came out. You'll see as we talk about the history here that AWS has tried a number of times to sort of codify what the best practices are for how to build a serverless application. The Serverless Application Lens, which is part of the Well-Architected Framework, was one of their first attempts to do this. This came out later in 2016. So then, past that, there were a bunch of things you couldn't do, and there were a lot of complaints: cold starts, and I can't do this, I can't do that. So the next year, 2017, there was a lot of investment into expanding the different use cases that were possible with serverless. Google Cloud Functions came out. FaunaDB was introduced to give another alternative to DynamoDB. The Architect framework, Brian LeRoux and his team created that. Firestore came out; again, I marked this one as beta, because everything in Google is usually beta for like ten years before it actually becomes an actual thing. And then Fargate came out in 2017. This is actually the first time they started talking about serverless containers, and this is an interesting thing that we're going to talk more about. But then they introduced AppSync, which was the built-in GraphQL service, and then the Amplify framework, which was a way for you to build apps using AppSync. And then we also got Aurora Serverless, so our first sort of serverless SQL type thing. And then the Serverless Application Repository, which was another attempt by AWS to try to codify how to build serverless applications and what was the best way to stitch them together. So we already had a lot going for us with serverless, but in the next year, 2018, there were still a lot of pieces missing, and more integrations and more control were needed. So Cloudflare Workers, March 2018, those went GA.
So just a little bit of history there, but then SQS as an event trigger. From 2015 to 2018, there were a lot of people using SQS queues as an event bus in AWS, but unfortunately you couldn't read off of that from a Lambda function unless you were polling it every minute or every five minutes, which was terribly inefficient. So that wasn't actually introduced until 2018. It was kind of buggy when it was introduced, but it served the purpose, and it actually solved quite a few problems for people. Then Google Cloud Run in August of 2018, another serverless container service that has gotten much better over time. Nimbella was sort of a serverless container type thing. Lambda got 15-minute timeouts, WebSockets, Lambda Layers, and the Runtime API, so people were able to run PHP on their Lambda functions, which led to the creation of Bref and a couple of other platforms. And then this was super important: the Application Load Balancer. I was consulting with a lot of companies who had massive workloads that were already running on AWS, and the problem was that they would want to build these net-new serverless applications, but they didn't really have a good way to split them. If you wanted to split traffic between an API running on Lambda functions versus one that was on EC2 or something else, it was very hard to do that. You had to route it through an API Gateway, and there was all this extra latency. So when they introduced the Application Load Balancer support, that was a way for people to start using that strangler fig pattern and make it so that people could start moving parts of their application over to serverless. It was actually a pretty big step right there. So then in 2019, there were still a few things missing, but we got the AWS Data API. I just added "AWS" to this slide last night, because there's a new Data API from MongoDB, so this could get confusing.
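The strangler fig migration that ALB support enabled comes down to path-based traffic splitting: route the paths you've already rewritten to the new serverless backend, and leave everything else on the legacy servers. Here's a minimal sketch of that idea; the names are hypothetical illustration (the real ALB does this declaratively with listener rules and target groups, not application code):

```typescript
// Strangler fig sketch: migrate an API path by path. Migrated prefixes
// go to the new serverless backend; everything else stays on EC2.
// Hypothetical names, not the ALB API.

type Backend = "serverless" | "legacy";

// Paths that have been rewritten as serverless so far.
const migrated = new Set<string>(["/api/orders", "/api/users"]);

function route(path: string): Backend {
  // Prefix check keeps sub-routes of a migrated path on the new side too.
  for (const prefix of migrated) {
    if (path === prefix || path.startsWith(prefix + "/")) return "serverless";
  }
  return "legacy"; // not yet migrated
}

console.log(route("/api/orders/123")); // → serverless
console.log(route("/checkout"));       // → legacy
```

As more paths are rewritten, you just grow the migrated set until the legacy backend handles nothing, which is exactly the incremental migration the speaker describes.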
EventBridge was introduced in 2019 as well. This is the global enterprise event bus, which is a very cool service that keeps on getting better. The CDK was introduced at the same time; that was at the AWS New York Summit. Then Lambda Destinations. This is also where AWS almost, I don't want to say walked things back, but AWS keeps putting out services. Lambda Destinations was a way to do dead letter queues that was different than the original way to do it, sort of a more preferred way. So now all of a sudden you get multiple ways to do things like dead letter queues, and this was a better way to do it. And the same thing with HTTP APIs. REST APIs were expensive and the latency wasn't great, so they introduced a new API type that addressed this, and that was very helpful. And then RDS Proxy. If anyone is familiar with the concurrency issue: each Lambda function instance connects to the database, but it can only hold one connection, and then that database connection is taken up. So you can't take advantage of things like connection pooling. RDS Proxy was introduced to solve that. And then we get to 2020. So 2020, the beginning of the pandemic and so forth, but there were a bunch of things that were still constraints, so they added things like the Lambda EFS integration. From your Lambda function you could connect to an EFS drive and have all kinds of extra storage as part of it. They introduced Lambda Extensions to make it easier to understand the different lifecycle events within the Lambda function. This was great for monitoring companies, so that they could capture events and know when different things happened within that lifecycle, with log streaming and stuff like that built in. They added container image support with massive container sizes of 10 GB. If you deploy ten gigs to a Lambda function, that is a lot, but they built that in. And then they added ten gigs of memory.
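The concurrency problem RDS Proxy addresses, where every Lambda instance opens and holds its own database connection, is really a multiplexing problem: many short-lived callers should share a small fixed pool of real connections. Here's a toy simulation of that idea; the class and names are hypothetical illustration, not the RDS Proxy API:

```typescript
// Toy simulation of the problem RDS Proxy solves: 50 concurrent Lambda
// instances each want a database connection, but a proxy pool multiplexes
// them over 5 real connections instead of letting the database see 50.
// Hypothetical names; this is the idea, not the RDS Proxy API.

class ConnectionPool {
  private free: number;
  private queue: string[] = []; // callers waiting for a connection
  inUse = 0;
  peakInUse = 0;

  constructor(readonly size: number) {
    this.free = size;
  }

  // A Lambda instance asks for a connection: it either gets one or waits.
  acquire(caller: string): boolean {
    if (this.free > 0) {
      this.free--;
      this.inUse++;
      this.peakInUse = Math.max(this.peakInUse, this.inUse);
      return true;
    }
    this.queue.push(caller); // no new connection is opened
    return false;
  }

  // A finished caller hands its connection straight to the next waiter.
  release(): void {
    this.inUse--;
    if (this.queue.length > 0) {
      this.queue.shift();
      this.inUse++;
      this.peakInUse = Math.max(this.peakInUse, this.inUse);
    } else {
      this.free++;
    }
  }
}

// 50 concurrent invocations, only 5 real database connections.
const pool = new ConnectionPool(5);
for (let i = 0; i < 50; i++) pool.acquire(`lambda-${i}`);
for (let i = 0; i < 50; i++) pool.release();

console.log(pool.peakInUse); // → 5: the database never sees 50 connections
```

Without the pool, each of the 50 instances would hold its own connection and exhaust the database's connection limit; with it, the database's view is capped at the pool size.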
And then we got Aurora Serverless v2 in preview, and we ended up with this. So we got all these services that we had. Xavier Lefèvre, I think he was at Theodo, put out this article, I think at the beginning of 2021, that showed this typical serverless architecture. And this is what essentially happened: we got all these primitives, all these individual pieces, that now needed to be stitched together. So there was very little that you couldn't do, but the problem was that you did have to create these complex architectures in order to do it. And this gave rise, again, we talked about some of these frameworks, but basically, if you were building applications now, you needed to know the primitives that were there. So if you look at something like the Serverless Framework, if you wanted a Lambda function, you had to say, I want a function, and then you would specify the different types of event mappings that would map to SQS or to EventBridge or whatever it was. And then those would map into your Lambda functions. And then if you wanted to define additional resources, you had to do that through CloudFormation, so you had to specify all the knobs and all the levers of all the different things you wanted, whether it's DynamoDB or whatever. Now, Architect came along. They did a little bit of a better job here with the abstractions, because they still map things to specific services, but they kept it a little bit higher level and made it a little bit easier for you to understand how to map some of these things together. And then with the CDK, they came up with this idea of constructs that allow you to package a whole bunch of this wiring and configuration into these constructs and be able to deploy those.
Same thing with Pulumi: they have component resources. But at the end of the day, you still need to know and understand all of the infrastructure that you're deploying, and you're responsible for that infrastructure. So for the serverless developer, somebody who's building applications now, your responsibilities have really changed and expanded over the years. It used to be you'd start with business logic, and maybe you're putting things in a Lambda function, and then you'd need to understand how that infrastructure and cloud architecture works as well. But now, because your code is so tightly coupled to the deployment process and to the infrastructure that it actually deploys to, as a developer you're part of creating, managing, and maintaining that build and deployment pipeline. And then monitoring and observability is huge. You cannot test a distributed application in dev; if you have a few people hitting against an application, that's not going to show you where the holes are. So really, the only way to test is in production, which means your code has to be instrumented in a way that you can see when things are going right or see when things are going wrong. And finally, this is something that no one ever seems to talk about, but security and compliance. This is a very big thing, especially when you get into highly regulated industries, or, again, just to make sure that you're protecting against things like SQL injection and all the OWASP Top Ten and all that kind of stuff. You were responsible for a lot of that as a developer before, but now it all comes together into this one big package where you've got to be responsible for all this stuff. So there's a bit of a burden there. And then the other trend that we've seen is this idea of configuration over code. And this is something that I know AWS talks about a lot, because they have a lot of great capabilities here.
The idea is that the cloud provider is so much better than you are at doing things like handling errors and retrying failures, right? If a Lambda function dies somewhere in the middle of an execution, you have no way of capturing that or knowing that, but the cloud provider does, and they can handle those errors and those retries for you. Failover and redundancy: you're not doing any of that yourself; that's all the cloud taking care of it for you. When you're transporting data, maybe you're moving from S3 into Athena or you're moving things around, the cloud can do that, as well as transform some of that data. If you're using Kinesis Data Firehose, you can actually run conversions on the data as part of that, or you're using VTL to translate certain things, and Step Functions and so forth. And then orchestrating those workflows. If you're writing all of that within a Lambda function, you're saying: call this, get the information back, then do this. If that's in a Lambda function that fails, that's not good for you, because you're not going to know what the state of that was unless you're writing it back somewhere. So the cloud does that so much better. The problem, though, is that even if you're moving that code into infrastructure as code, even if you're programming that into your CloudFormation templates or SAM templates or the CDK or whatever, all of that code is still a liability, right? Even if it's infrastructure as code, it's a liability. Now, over the course of all these years that we've been working on serverless, we've developed a whole bunch of patterns. In 2018 I put this blog post up, and I think there were 18 or 17 patterns or whatever it was. Now, these weren't ones that I made up; these were just things that I was seeing people doing. And so I wanted to put it out there and say, I'm not sure this is right, but this is what I see people doing.
And it just got so much feedback, and so many people were like, that's exactly how I'm doing it. So it was good to have that validation that this existed. And then, piggybacking off this, Matt Coulter at Liberty Mutual created the CDK Patterns site, where he took a lot of those patterns plus other ones, started bringing those in, and made sure they fit into the Well-Architected stuff. And then, I think it was March of 2021, the AWS serverless developer advocates created the Serverless Land patterns site, and now I think it has over 300 different patterns. It's amazing. They basically show you, if you want to do Cognito to API Gateway to a private REST API, this is exactly how you do it. So the point is that we've essentially figured out how to do all this stitching together, but we're still requiring people to go in and do it themselves. And so some of these bigger companies are trying to figure out ways to take those patterns, basically enforce them, and make sure that they're consistent. Liberty Mutual, for example, has something called their serverless enablement team, and they basically take those patterns, put them into CDK, and make sure that when a new employee comes on, they can actually deploy code on day one, which is kind of interesting. Now, they fully admit that the people who deploy it aren't 100% sure what's happening behind the scenes, and they eventually have to learn it. But the point is that it's codified for them and they can do it. And then the Lego Group has a platform squad that basically takes all their standards, how they do shared code, compliance, and also that observability, and bakes those into a bunch of patterns that they can then use. So this is great if you're a big company and you have the resources to have developers that can just do that. But the theme of today is serverless for everyone, right?
We want everybody to be able to do serverless, and we want people to be able to do it right. So 2021 was sort of an interesting year, because we saw a bunch of companies, including AWS and Microsoft and others, trying to figure out a way: how do we take these patterns that we already know, that are already proven, and codify them in a way that makes it easier for people to develop applications? So AWS App Runner came out; we'll talk a little bit about that in a minute. Step Functions got integration with the AWS SDK, and it became very much a visual workflow you could start putting together, similar to Logic Apps or, I think it's Google Workflows. Amplify Studio came out with some really interesting integrations into Figma, making it really easy to go from a design to React to deployment and so forth. And then AWS created a bunch of different things like event filtering and some of that stuff, and introduced a whole bunch of new serverless services. And then we got Lambda function URLs, which made it possible to bypass an API Gateway, which was great. And then ten gigs of ephemeral storage, so now you could do a lot of processing that required a fair amount of data storage within your Lambda function without having to do the whole EFS thing and be connected to a VPC. So a bunch of really cool things happened there. Then Serverless Stack came along, IBM Cloud Code Engine, PlanetScale, which is a serverless database for MySQL, and Akka Serverless, which is now Kalix. Serverless Cloud is the product that I created with my team at Serverless Inc. And then Xata is another Postgres, Elasticsearch, Redis mix; really interesting service there. So there were a bunch of things that people were trying to do. And then, by the way, Nimbella was bought by DigitalOcean, and DigitalOcean just announced, I don't know, two or three weeks ago, that they now have DigitalOcean Functions.
But anyways, there was all this stuff created in this last year or so, really interesting stuff that made things a whole bunch easier, but at the same time kind of made things a little bit more complicated, depending on how you think about it. But this is where we're at. So this is now, and this is where we are in 2022. This is where I like to think that everything we've built with serverless over the last seven-plus years, really over the last 15-plus years or whatever it's been, has led us to this point where we can pretty much build anything we want to, but it also has led to an explosion of tools and services. And this is where we talk about serverless containers. So you've got Cloud Run, we've got App Runner, Cloud Code Engine from IBM, Azure Container Instances, and Fargate. Now, I was not a big fan of this term serverless containers, because how can a container be serverless? It doesn't make any sense. I've been convinced a little bit otherwise of this, but I do have some conditions. So I'll set out my conditions for what I think it means to be a serverless container. It needs to use standard container images; it can't be something different. These obviously have to handle multiple concurrent connections; we're not spinning up a full container just to handle one connection. Load balancing and auto scaling have to be built in and configurable. And then it kind of sort of has to scale to zero, right? So I get that you need pre-provisioning. I'm fine pre-provisioning stuff in production, but when I'm using something in a development environment, or a preview environment, or a PR branch or a feature branch, something like that, I want that to scale to zero so it's not costing me any money when my developers aren't working on it. So most of these services that I mentioned, and there are more out there, fit the bill.
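Those conditions, concurrency-aware scaling plus scale to zero, reduce to a simple control loop: size the fleet from concurrent requests, with a warm floor in production and a floor of zero everywhere else. A toy sketch, with made-up numbers and names rather than any real service's scaling policy:

```typescript
// Toy autoscaler for a "serverless container" service: desired fleet size
// comes from concurrent requests, and non-production environments are
// allowed to scale to zero. All numbers and names are made up.

interface ScaleConfig {
  concurrencyPerInstance: number; // requests one container can handle at once
  minInstances: number;           // pre-provisioned floor (0 = scale to zero)
}

function desiredInstances(concurrentRequests: number, cfg: ScaleConfig): number {
  const needed = Math.ceil(concurrentRequests / cfg.concurrencyPerInstance);
  return Math.max(needed, cfg.minInstances);
}

// Production keeps a warm floor; a preview/PR branch scales to zero.
const prod: ScaleConfig = { concurrencyPerInstance: 80, minInstances: 2 };
const preview: ScaleConfig = { concurrencyPerInstance: 80, minInstances: 0 };

console.log(desiredInstances(500, prod));  // → 7  (ceil(500 / 80))
console.log(desiredInstances(0, prod));    // → 2  (warm floor in production)
console.log(desiredInstances(0, preview)); // → 0  (idle branches cost nothing)
```

The point of the sketch is the last line: a platform only earns the "serverless container" label, by the speaker's conditions, if that idle-environment answer can be zero.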
But I know we're at Microsoft, so I do apologize, but I don't think Azure Container Instances quite fits this mold, because you still have to use AKS to route the traffic to it. And I also don't think Fargate fits this mold, because with Fargate you have to use ECS to route the traffic and do the scaling and so forth, and then you have to have a load balancer that you throw in front of it. So Cloud Run is absolutely amazing. It's a very good service; I love the way that works. It now handles things beyond HTTP traffic; you can do WebSockets with it and stuff like that. App Runner is also a really great service. You can't do WebSockets with it and some eventing things yet, but it's getting there, and IBM Cloud Code Engine is getting there as well. So the other thing we have is serverless app platforms. These are all these new services coming out, like Koyeb and Kalix, Railway, Deno Deploy, I'm not really sure where to fit that in, it's sort of a minimalist thing, but Catalyst by Zoho is an entire thing. These are interesting services because they're combining a whole bunch of different packaging formats. A lot of them are containers, but some of them just let you do source code. I say proprietary-ish hosting environments: Koyeb, for example, has their own Firecracker runtime thing that they run on bare metal, Deno Deploy is its own thing, and Kalix, I think, runs on top of Kubernetes, but it's managed through them. So, interesting stuff, but again, it's all to help you get your applications into production faster. And every one of them has the load balancing, the infrastructure provisioning, and the discovery stuff built in. So these are really cool platforms, and it's interesting, this sort of trajectory that some of these have taken. And then we have serverless databases, right? So we start with the NoSQL side of things. It used to be that the granddaddy was DynamoDB, but now there's Firestore, Firebase, Fauna, and Azure Cosmos DB has a serverless option.
And then MongoDB just announced the Atlas Serverless version as well. And then on the other side of this, this is the one that has been really hard. I don't want to say it's easy to build a serverless database on the NoSQL side, but it's certainly a lot easier, I think, than building a serverless SQL database, and I think anybody who is working on one of these products will tell you that it's not easy. So Aurora Serverless was sort of the first one that came out. Read through my Twitter threads, though: I do not think version 2 is serverless, and there are other people that agree with me. It doesn't scale to zero, and there are some issues there. But CockroachDB, PlanetScale, Azure SQL has a serverless version now, Xata, or Zata, I don't know how to pronounce it, and then a new one that was just announced is Neon, which is another Postgres. So Xata and Neon are both Postgres; PlanetScale is MySQL. But really interesting things are happening with serverless databases; you just have a lot of choices. Then there are edge computing platforms, and this is super exciting. Things like Cloudflare Workers, Fastly, and Amazon CloudFront Functions give you really low latency compute, globally distributed, very close to where your users are. Some of them have KV capabilities. Cloudflare Workers, if anybody's been paying attention to them, or if you haven't been, you should start paying attention to them: not only do they have their KV, which is interesting, but they have Durable Objects, and they just launched the preview, I think, of D1, which is like a distributed SQLite. So there are a lot of really cool things happening over at Cloudflare. I think somebody here might be working on something cool later, but I won't tell anybody. The problem, though, with some of these, and Cloudflare Workers is getting better, is that there's limited execution time, so it kind of limits the workloads that you can do.
So a lot of it is things like redirects, header manipulation, authentication, things like that. Cloudflare Workers is, again, going way above and beyond, and I think Fastly is making some improvements there as well. And then you have Jamstack. So this is another thing: Vercel, Netlify, AWS Amplify, Cloudflare Pages, all doing really, really amazing stuff here. Whether it's server-side rendering or static site generation or incremental static regeneration and so forth, framework detection, deployment management, they have serverless and edge functions. Vercel is doing something cool now where they're going to have like an auto thing that determines whether something needs to run in a region or run at the edge. Netlify is doing a cool thing with Deno for edge functions. And then you have integrated back ends. With AWS Amplify, if you want to get down into the weeds, you can connect to pretty much any AWS service you want to, and Netlify has Netlify Graph. So there are some really interesting things happening, and all of that makes these highly extensible, right? It's very easy for you to go ahead and build these things out and add functionality, even if you're just using parts of this functionality to do what you need to do. So this is where we are now. These are all these amazing services, but unfortunately, now we're here. This is the Cloud Native Computing Foundation's Cloud Native Landscape. Not all of these are serverless, but they're all basically trying to target the same thing. They're all platforms or open source packages or something that is trying to help with building your application; I think it's supposed to make it easier. I'm not 100% sure how thousands of these things do that, because I don't even know half of these. I don't even know what they are. And of the ones that I do know, I'm sure I don't know what most of them do.
And my level of experience with any of them is very minimal, if any, for the vast majority of these. But this is where we get into the question: we built all this technology, we have all these capabilities, we can do all this amazing stuff, we've got these different platforms, we've got open source, we've got proprietary platforms, we've got all this great stuff, but we really have to talk about what's next, right? Are we going to keep adding more services and make it more complicated, or are we going to do something a little bit different? So if you're familiar with Shawn "Swyx" Wang, he was a developer advocate at a bunch of different places. He actually was with the Amplify team for a while, then he was at, I couldn't remember the name of it, Temporal. Anyways, he wrote this article a while back where he talked about this idea of essentially this combination where the resources that you needed would basically be automatically provisioned for you just based off of the code itself. So rather than going down this path of saying, I have to write infrastructure as code that tells me, here are all the primitives that I need, here are my sort of control-plane-specific instructions for which services I need to spin up, instead you just say, oh, I wrote this application and there's an API in there, and then maybe there's a queue and maybe there's something else, and then have it automatically build that and deploy that for you. He called this the self-provisioning runtime. Now, when he actually wrote this article, he was unaware of how many people are actually working on this. So there are some really cool services out there. Dark has been around for quite some time; the work that they did there was sort of a proprietary language, so it didn't quite take off as much as it needed to. But Cloud Compiler is really interesting.
Encore is basically infrastructure, or self-provisioning, sort of, using Go. Nitric is a new framework that came out; I think it's out of France or Sweden somewhere. Anyways, they're doing something very similar, where you just write your application code and then they compile it to Pulumi so that you can deploy it to whichever cloud provider you want to. And then Serverless Cloud: this is something big that we've been working on. Swyx called it self-provisioning runtimes; we call it infrastructure from code. And essentially, our contention is that you should only have to write application code. You shouldn't have to go and put something in infrastructure as code, because that's going to make it specific to the primitive that you're writing against. And we actually think that you can do this without configuration files. You focus on the use cases and the outcomes as opposed to those primitives, you use familiar patterns, so with Serverless Cloud we use things like an Express-type api.get, or schedule.every, very simple patterns, and then it automatically deploys the infrastructure to support the app. So you don't have to worry about converting it to CloudFormation or Pulumi or Terraform; the system actually does that for you, the runtime does it. And then it gives you all those simple workflows, because all of that deployment and compiling is built in, so the development, the testing, all that stuff is just simple workflows. And this is the really interesting thing; this is where I want to get back to App Runner. The idea is that if I deploy a Lambda function and that Lambda function is out there running, maybe it runs fine, there are tools like Lambda Power Tuning, for example, where you can go in and tune the memory and try to make sure that the amount of memory you provision is enough to run it most efficiently.
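As an aside, the infrastructure-from-code idea described above, inferring resources from what the application code actually uses, can be sketched in a few lines. This is a hypothetical API loosely modeled on the express-style patterns the speaker mentions, not the actual Serverless Cloud SDK:

```typescript
// Infrastructure-from-code sketch: the "framework" records which
// primitives the app touches, so a deploy step could provision exactly
// those (an HTTP route implies an API front door + function;
// schedule.every implies a timer trigger). Hypothetical API, not the
// real Serverless Cloud SDK.

type Handler = (...args: unknown[]) => unknown;
const inferred = new Set<string>();

const api = {
  get(path: string, _handler: Handler) {
    inferred.add("http-api");          // needs an API endpoint + function
    inferred.add(`route:GET ${path}`);
  },
};

const schedule = {
  every(rate: string, _handler: Handler) {
    inferred.add(`schedule:${rate}`);  // needs a timer trigger + function
  },
};

// The developer writes only application code...
api.get("/users", () => ({ users: [] }));
schedule.every("1 hour", () => "sync");

// ...and the platform derives the infrastructure to provision from it.
console.log([...inferred]);
```

The point is the direction of inference: no configuration file lists an API gateway or a scheduler; those are derived from the code, which is what distinguishes infrastructure *from* code from infrastructure *as* code.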
The problem with manually power-tuning is that you have to go back and retune thousands of Lambda functions. So wouldn't it be better if the platform itself could do the optimization for you? One of the things we do with Serverless Cloud is deploy most of our APIs to Lambda functions, because it makes sense, and then watch those functions and try to optimize them. But the actual cost of hosting everything on Lambda becomes expensive once you get to very high load. So what we did is make it so that if you get past a certain number of invocations on any given day, we automatically switch your workload over to App Runner and route your traffic there. It's completely transparent to you. Any processing, like reading off of queues or change data capture, things like that, still comes back and gets processed on a Lambda function. But we can optimize that one specific thing, and there's more we can do beyond that. And so I want to end with this idea. This comment has been made a number of times. Corey Quinn has said it, but I was trying to find who originally said it; I found it in an article from 2012, I think from Equinix or somewhere like that. The idea is: own the base, rent the spike. At the time, this was about hybrid clouds: I'll have my own data center with a certain level of provisioning that can support my baseline traffic, but if it gets too busy, or I have a spike like a Black Friday sale, I'll shift that workload over to the cloud, let the cloud scale, and rent the spike. Well, hybrid clouds did not work out very well. Just look at Zynga as an example; it did not go well for them eventually. So a lot of people just said, let's move our stuff into the cloud completely.
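Before moving on, the automatic Lambda-to-App Runner cutover described a moment ago boils down to a simple threshold decision. This is a sketch of the shape of that decision only; the cutover number here is invented, not the one any real platform uses.

```typescript
// Toy sketch of the routing decision described above: past a daily
// invocation threshold, API traffic moves from Lambda to a container
// service like App Runner. The threshold value is made up.
const DAILY_THRESHOLD = 5_000_000; // hypothetical cutover point

type Target = "lambda" | "app-runner";

function routeFor(invocationsToday: number): Target {
  // Below the threshold, per-invocation Lambda pricing wins; above it,
  // an always-on container is cheaper, so switch transparently.
  return invocationsToday < DAILY_THRESHOLD ? "lambda" : "app-runner";
}

console.log(routeFor(10_000));    // low traffic stays on Lambda
console.log(routeFor(8_000_000)); // sustained high traffic moves over
```

In practice the platform would track invocation counts per app and do this cutover behind the router, so the application code never changes.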
So in that model you're essentially renting everything. But there is this idea that says, look, if we look at the cost of serverless applications, it's very linear, right? Every invocation I add, the cost keeps going up and up, and there's no optimization over time. Something that was announced at MongoDB World a couple of weeks ago was MongoDB Atlas using tiered pricing to bend that curve. Essentially, once you get over a certain number of read requests per day, it drops you to another tier that's something like 50% of the cost for the next however many requests, and then it drops again to an even lower rate after that. And it actually doesn't take that much: if you have a fairly active application and you spend about $35 a day, you automatically start to see the benefits of that cost curve bending. This is something I contend, or really hope, is the future of the way we price serverless applications, whether that's Lambda, whether that's HTTP APIs, any of those things. Right now with HTTP APIs you get a discount of 10%, but only once you hit some insane number of requests per month; it's not until you hit that threshold that the price drops at all. I think if you're going to get people to buy into serverless applications, they're going to want to say: if I already have a sustained amount of traffic, I should be able to pay a provisioned fee for that base and then just pay for what's above it. So, interesting conversation; we'll see where that develops. But I think that could be an interesting way to price serverless in the future: a way to own, or rent, the base at a lower fee, and then pay a little more to handle the spikes. All right, so just some key takeaways here. Serverless has evolved tremendously over the last seven-plus years. Pretty much any objection you've had to serverless has been addressed.
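As a toy illustration of that bent curve, here is a tiered-cost calculation. The tier boundaries and rates are invented for the example; they are not MongoDB's, AWS's, or anyone's actual pricing.

```typescript
// Worked sketch of "bending the cost curve" with hypothetical numbers:
// full price for the first tier of daily reads, 50% off for the next,
// and a deeper discount beyond that.
const PRICE = 0.10; // hypothetical $ per million reads at full price

const tiers = [
  { upTo: 50, rate: 1.0 },       // first 50M reads/day at full price
  { upTo: 500, rate: 0.5 },      // next 450M at 50% of the price
  { upTo: Infinity, rate: 0.1 }, // everything beyond at 10%
];

function tieredCost(millionsOfReads: number): number {
  let cost = 0;
  let prev = 0;
  for (const t of tiers) {
    const inTier = Math.min(millionsOfReads, t.upTo) - prev;
    if (inTier <= 0) break;
    cost += inTier * PRICE * t.rate;
    prev = t.upTo;
  }
  return cost;
}

// Linear (pay-per-request, no discount) vs tiered at 1 billion reads:
console.log(tieredCost(1000)); // tiered daily cost
console.log(1000 * PRICE);     // linear cost, much higher at scale
```

With a flat per-request price the cost line just keeps climbing; with tiers, the marginal price falls as sustained usage grows, which is exactly the "own the base, rent the spike" effect in pricing form.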
Still some rough edges here and there; pricing is, I think, one of them now. And if you are building anything that does not add value to your business, you probably shouldn't be building it, because there is almost guaranteed to be a managed service or primitive out there that does it for you. Then there's configuration over code. It's one of those things where, again, you move your business logic into your infrastructure as code. It's interesting, and it will often do things better than you could write them yourself, but it adds a new type of technical debt. Also, event-driven applications are back with a vengeance, right? Everybody's writing event-driven apps again. Eventual consistency: if you don't know what that is, you should learn it, because lots of things are eventually consistent now. And then, for me at least, complexity is still a major concern. So again I ask the question: is there another level of abstraction that is the future? We don't manage memory anymore, right? That's taken care of for us. Should we be managing all those knobs on a Lambda function, or on the compute layer, or the databases, and things like that? So anyways, thank you for listening and thank you for coming. My blog, podcast, newsletter; check out serverless.com/cloud. I would love to hear your feedback on that. So thank you very much.