An exploration of serverless platforms, and some of the unanswered questions around them.

Serverless computing is the hot new thing. Like any hyped technology, it promises a lot. However, questions remain around concept and implementation, especially when you start to compare how we've built systems in the past and what serverless offers us now. Is serverless the future, or just the emperor's new clothes?

This talk will very briefly introduce serverless computing, but will then dive into some of the questions that aren't always asked in conjunction with this technology. Topics will include:

- How does your attitude to security change?
- Is it easier, or harder, to create reliable, resilient systems?
- Do patterns like circuit breakers and connection pools make sense any more?
- Is vendor lock-in a problem?
- Is serverless computing only for microservice architectures?
- Which problems fit serverless computing?

By the end of the talk you should have a firm grasp of what serverless computing really can offer, cut through some of the hype, and get an understanding of where and how you can use it in your own organisations.

Check out our links below, and don't miss our next conference!
https://www.ndcconferences.com/
https://ndccopenhagen.com/
Hello, everybody. Thanks so much for coming along to... well, I guess this is a room, albeit a virtual room, and I'm beaming out all over the world. So thank you for being part of this online conference that was never designed to be online. I want to say a big thank you to all the people behind the scenes who are making this work as seamlessly as possible, and I hope that for you as attendees the experience has been educational and enjoyable. I suspect many of you are enjoying, or not enjoying, forced lockdown in your homes, so I hope you're safe, you're well, and your family is well. I also hope you're keeping your distance and washing your hands. It's odd that we have to say "wash your hands", but it seems to be new information for some people. We're actually here not to talk about anything to do with viruses or lockdown. Well, we might talk about lock-in, which is a bit of a different thing, but we're really here to talk about serverless computing in general. And I've got some specific things to talk about: some of the confusion that I still have. When I first wrote this talk, which would have been about four or five years ago, I was very confused by serverless. I'm now slightly less confused, but there are still puzzles I have in my head, and I want to share those with you today. I've actually written a couple of books on microservices. I wrote the book Building Microservices back in 2014; it was published in early 2015, and I think I just happened to be the first person to write a book on microservices, which was good timing. I've also just published a brand-new book that came out at the end of last year, called Monolith to Microservices, which is all about system decomposition. And I actually just ran a two-day class as part of NDC Copenhagen.
Maybe some of my workshop attendees are joining me here, in which case the plain wall behind me will be all too familiar. As well as writing books, I also run my own training, advisory and consultancy firm, and I do a lot of that work remotely, which helps for situations like this. So if you think I could help you and your organisation on your microservices journey, I'm very easy to find on the internet, and I'll put a link up at the end of the talk. Throughout the talk today I'd really encourage you to ask questions: go over to the Q&A tool, use the event hashtag, and you can find my session in there. If you pop your questions in as they come up, then when I come to the end of the talk I've got some time set aside to go through and answer them. There's a little bit of a lag between me speaking, you hearing it, and you getting to ask questions, so if you ask as you go, by the end of the talk I'll have a good set of questions to work through. I'll also spend a bit of time afterwards answering as many questions as I'm able to, although at some point I do have to go off and get some shopping for some of the more vulnerable people near me; I do my bit. So, in case you're not entirely clear who I am: this is me, this is my profile picture. Despite how I might look, I am actually quite an old person now in the grand scheme of computing. I've been doing computing professionally, by which I mean getting paid to be a computer programmer, for about 24 years now. So although I might look youthful and fresh-faced to some, I do in fact feel a lot more haggard. Just to give you some idea of how old that makes me: I still remember when the internet came on a floppy disk in the post, when we all used dial-up. Some people all over the country still have dial-up.
It feels like I've got dial-up right now, but I only too well remember the thrill of getting my first 14.4k modem. I remember when Facebook was "the Facebook", and I still insist on calling it "the Facebook". I still don't understand what Snapchat is, but thankfully it seems I don't have to worry about that anymore, because now there's TikTok, and as far as I can work out TikTok is just dog videos, which again is something I can deal with. But I'm not too much of a fuddy-duddy; I have been keeping, to an extent, on the cutting edge. I was lucky enough to be involved with AWS from the very earliest days. I actually helped create the first ever training courses for AWS, back in, it would have been, the 2008 timeframe. So I've been working with public cloud technologies for a long time, coming from a background in infrastructure automation. I've always considered myself half an operations person and half a developer, and I've been very, very interested in the capabilities of the different public cloud providers, of which AWS very much set the scene and is still the market leader. But there have been some things coming up recently in this space which have left me a little bit more confused, things where I'm not quite sure what's going on. I've been trying to make sense of this confusing and emerging landscape, and specifically that confusion stems from this topic of serverless. So, serverless: what does it mean? What does it mean to you? What makes it different? Maybe we can cut through a little of the hype around what serverless is, and be a bit more specific about what it means, because I think the term gets misused. But also I'm keenly aware that I need to be on board with serverless, because I am, if nothing else...
Something of a hype merchant. And as you can see, here's microservices, my own particular buzzword, on which I spend a lot of my time. If we look at the hype cycle, microservices is surfing high. I always find this particular chart quite odd, because we've got microservices at the peak of inflated expectations, which is right where I'd like to be, and then we've got SOA in the trough of disillusionment. I find this confusing because microservices are just a type of SOA, which says to me that the people who create the hype cycle don't understand that. But here we see, in the technology trigger section, serverless. And although this graph is a couple of years old, it has been surprising to me that serverless technology, or serverless as a concept, hasn't made as many inroads, certainly into enterprise development, as I might have hoped. But nonetheless, there is another hype term, so I've got to jump on board that bandwagon. I'm going to attempt to surf both bandwagons at once, both microservices and serverless, and we'll see how that goes. Of course, when you hear the term "serverless", it immediately makes you scoff a little, because it sounds like a marketing buzzword, right? The reality is that we know there are computers there. So what does it even mean? Is it just as simple as saying that serverless is just another way of saying "other people's computers"? Maybe there's something to that. The term actually comes from an article by a fine chap called Ken Fromm, writing for ReadWrite. Now, I've done quite a bit of research and I've yet to find an earlier reference to the term. This article by Ken is really talking about what he was seeing as the future of cloud development, and really just general software development.
And the phrase that jumped out at me, when he's talking about what serverless means to him, is this: "The term serverless doesn't mean that servers are no longer involved. It simply means that developers no longer have to think that much about them. Computing resources get used as services without having to manage around physical capacities or limits." It's also interesting to think about public versus private cloud here, because, as we might explore a bit later on, in general the good serverless offerings are primarily in the realm of public cloud. There are some areas where that gap is being closed for private, on-premise experiences, but primarily, to get the good serverless stuff, you go to the public cloud providers. Now, for many people serverless is synonymous with cloud functions, with Lambda functions: the product offerings known as FaaS, Function as a Service. For many, serverless just means FaaS. The first widespread FaaS-style offering — although I think you could easily point at alternative products in this space that pre-date it — the first product that generated real excitement in this space was AWS Lambda, which launched in 2014. And that's interesting, right? Because Lambda was launched in 2014, two years after Fromm came up with the term serverless. So when he was thinking about serverless, he wasn't thinking about Function as a Service. So there is this thing here: those two terms aren't as synonymous as we might think. Serverless really speaks to a whole category of cloud products that abstract us away from the detail behind them. Fundamentally, it's about us working with a platform that allows us to get on with our jobs. We don't have to worry about the underlying machines, right?
In many ways the concept of serverless is contextual: it's about your position in the stack of abstractions. If you are somebody making use of a serverless product, as far as you're concerned there are no machines underneath. It is very much in the eye of the beholder. But if you're somebody managing the platform, those are very much your machines. And I think this also creates some interesting conversations when you look at things like the use of FaaS on premise, because what I find confusing in those environments is that you want to deliver a product, a feature set, to developers where they don't have to worry about machines; but often they still do have to worry about machines, because they're running on a shared Kubernetes cluster and they're only allowed a certain number of pods, which creates this really interesting disconnect. This concept of perception is quite important. Serverless, if nothing else, is about giving us these abstractions; it's about hiding the detail. But those of you in the engine room might still have to look after all those machines. This is partly why I think there's going to be a challenge in delivering quality serverless products on premise, because I still think the kinds of abstractions we create for developers working on a private cloud are quite leaky abstractions in many cases. Mike Roberts wrote an article called "Serverless Architectures" a couple of years ago now — 2016, so nearly four years ago — which he wrote over on Martin Fowler's site, explaining what serverless is. Mike takes a broader look at all the stuff that lives under this serverless umbrella. In talking about what it means to be serverless he was trying to be a bit more precise, but he's looking much more broadly: he's looking at Function as a Service, Backend as a Service, messaging products, the whole lot.
And although Mike specialises mostly in working in the AWS ecosystem, the stuff in his article is very much applicable across Azure and across Google Cloud, although Google Cloud has a much smaller offering in this space. So, Mike's own definition of what makes something serverless. Firstly, there's no management of server hosts or server processes. When you're engaging with a serverless provider, you're not saying "I want this many machines"; you're not saying "I want five of these machines in this data centre". None of that. You're not worried about operating systems, or the patches that have got to be applied; that is all dealt with for you by the service provider. Secondly, the serverless product itself has got to auto-scale and auto-provision based on your usage. As your use of it increases, the underlying serverless platform has to automatically work out what you need and provide that to you, and that means scaling up and scaling down. The third idea here is that we are charged based on what we actually use: precise usage. If you think about a traditional managed virtual machine, the standard cost model for your virtual machines, say AWS EC2 instances, is that you get a machine and you pay for it by the hour. Now, that's still amazing compared to buying a machine outright, but you're still paying for the full hour. If I only serve one request on that machine, I still pay for the full hour. Whereas with the FaaS platforms, if I only have one request, I only pay for that request. The other thing here — and this is where things get a bit interesting — is that you still sometimes need the ability to specify how much, or how powerful, you want something to be. With Lambda functions, for example, you have the ability to say you'd like more memory for a function, maybe for a more compute-intensive workload.
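To make that billing difference concrete, here's a rough sketch in Python. The prices here are made-up illustrative numbers, not any provider's real rates; the only thing the sketch assumes is the general shape of FaaS billing (a small fee per request, plus a fee per GB-second of execution time) versus per-hour VM billing:

```python
# Hypothetical prices, for illustration only. Real cloud pricing varies
# by region, instance type, and memory allocation.
VM_HOURLY_RATE = 0.10                    # $ per hour for a small VM, idle or busy
FAAS_PRICE_PER_REQUEST = 0.0000002      # $ per invocation
FAAS_PRICE_PER_GB_SECOND = 0.0000166667  # $ per GB-second of execution time

def vm_cost(hours: float) -> float:
    """A VM bills for wall-clock hours, regardless of how busy it is."""
    return hours * VM_HOURLY_RATE

def faas_cost(requests: int, seconds_per_request: float, memory_gb: float) -> float:
    """A FaaS platform bills per request, plus per GB-second actually used."""
    gb_seconds = requests * seconds_per_request * memory_gb
    return requests * FAAS_PRICE_PER_REQUEST + gb_seconds * FAAS_PRICE_PER_GB_SECOND

# One lonely request in an hour: the VM still charges for the whole hour,
# the FaaS platform charges fractions of a cent -- and zero requests cost zero.
print(f"VM, 1 hour, 1 request:          ${vm_cost(1):.6f}")
print(f"FaaS, 1 request (100ms, 128MB): ${faas_cost(1, 0.1, 0.125):.10f}")
print(f"FaaS, 0 requests:               ${faas_cost(0, 0.1, 0.125):.2f}")
```

The scale-to-zero row is the punchline: with the FaaS model, no traffic means no bill, which is exactly what the per-hour model can't give you.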
But the way Mike talks about this is that your performance capability should be defined in terms of something other than a number of machines. If you look at DynamoDB or Cosmos DB, for example — serverless backends provided on AWS and Azure respectively — you pay for a capacity allocation: a set of units that allow you to run queries against your databases. But you're not saying "I need more machines to handle that load"; it's more like you're buying extra capacity on an ultimately shared resource. And the final item here is that we get implicit high availability with these products. These serverless offerings, without us having to do anything, are just going to automatically handle failures for us. Failover should be pretty much transparent to us; we shouldn't even have to worry about it or care about it. So again, this idea of abstraction: we're giving developers higher-order abstractions to work with quite powerful cloud-based functionality. We're hiding detail in terms of having to manage those workloads, we're not having to worry about the operational infrastructure costs here, and we're also getting impressive high availability given to us by the underlying platform. And so, as we mentioned earlier, Function as a Service has come to be almost what people think of as serverless: AWS Lambda, Google Cloud Functions, Azure Functions. I'm not a big user of Azure, but I've been quite impressed with Azure Functions in terms of offering quite a wide breadth of different programming languages. And that's an important thing with all these abstractions: when you give somebody a higher-level abstraction to work with, you also end up placing constraints upon them.
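Going back to the capacity-unit idea for a moment, a little arithmetic makes it easier to see what "buying units rather than machines" means. As a sketch, this uses DynamoDB's documented model — one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, and eventually consistent reads cost half — but it's a simplified illustration, not an official calculator:

```python
import math

def read_capacity_units(item_size_kb: float, reads_per_second: int,
                        strongly_consistent: bool = True) -> float:
    """Rough DynamoDB-style RCU estimate: you provision abstract throughput
    units, not machines. One unit = one strongly consistent read/sec of an
    item up to 4 KB; eventually consistent reads cost half as much."""
    units_per_read = math.ceil(item_size_kb / 4)  # reads are billed in 4 KB chunks
    if not strongly_consistent:
        units_per_read /= 2
    return reads_per_second * units_per_read

# 10 strongly consistent reads/sec of 6 KB items -> 2 units each -> 20 RCUs
print(read_capacity_units(6, 10))
print(read_capacity_units(6, 10, strongly_consistent=False))
```

The point of the exercise: the question you answer is "how much throughput do I need?", and the platform maps that onto machines somewhere behind the curtain.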
When you run a function on any of the three big cloud platforms, you are constrained somewhat in your language choices. And it's not just a case of which languages they support — Google, for instance, supports Node, Python and Go at the moment, and Java — it's that they support specific runtimes of those languages, because they actually manage the runtime for you as well. Again, this is where Azure got a bit of a head start: it supports lots of different programming languages. I think you can even theoretically run Windows batch files as Azure Functions. I mean, you shouldn't, but I think you can. But there are also other things in this space that fit. We've got Backend as a Service type offerings like DynamoDB and Cosmos DB. There are storage products like S3 on AWS. You've also got messaging-based products: think about Azure Event Hubs — you're not provisioning a messaging cluster, you're just sending messages around. On AWS you've got Kinesis, you've got SQS, you've got SNS, and these are all serverless messaging products. So the idea that serverless is just functions misses all of those product offerings. If you think about it, S3 is serverless: it's data in, and you get charged based on the data and on your usage. The high availability is handled for you, and you're not managing the detail. Going all the way back to the roots of AWS, the first public products AWS made available were SQS, their simple queue product, and S3, their blob store service. Both were launched in 2006, and both are totally serverless. It's interesting to think that the serverless concept was with us early on, but that's not what we called it, right? What caught on was infrastructure as a service.
I don't think we were ready for that level of abstraction back then. Now, on to FaaS. If you haven't used a Function as a Service platform, the first thing you should do this weekend is go and grab the Serverless Framework and play around with cloud functions on Azure, or Lambda functions on AWS. It's almost as close to the perfect developer abstraction as I think we've found for working with cloud-based code execution. The way it works: you work on your laptop, you create a piece of code, and then you basically upload that code to the FaaS platform. You say, "here's some code; when this thing happens, run it" — and "this thing" could be a message being put into one of the messaging products, it could be an HTTP request coming in, it could even be a file arriving in a blob store service. Then, when those things happen, when those requests come in, the FaaS platform behind the scenes automatically spins up a machine to handle that workload. The request comes in, and it will launch an instance of your function to handle it. So you're not having to worry about configuring a host. You just say "here's my Node function, run it when this happens", or "here's my Python function; when these requests come in, launch my function and pass the request into it". You only pay for what's running. So if I've got four instances running, I pay for the four instances; as more requests come in, the FaaS platform is going to spin up more instances for you, and so it can scale up and up. There are limits around how far it will scale — some of these limits are soft limits, and you can put limits in place yourself — but you get an automatic ability to scale up without having to really think about it.
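To show just how small the unit of deployment is, here's a minimal handler sketch in the AWS Lambda style. The `(event, context)` signature follows Lambda's Python convention; the `"name"` field in the payload is made up purely for illustration:

```python
import json

def lambda_handler(event, context):
    """Entry point the FaaS platform invokes once per event.

    `event` carries the trigger payload -- an HTTP request, a queue
    message, a blob-store notification -- and `context` carries runtime
    metadata. Host provisioning, scaling up, and tearing back down to
    zero all happen behind the scenes.
    """
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally you can just call it as a plain function -- there's no server
# to stand up on your laptop either.
print(lambda_handler({"name": "NDC"}, None))
```

This is the whole deployable artifact, conceptually: you upload the function, wire it to a trigger, and everything underneath it is the platform's problem.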
Now, that can create challenges for hybrid-type architectures; we'll talk about that a bit later on. It will also scale down, which is also quite useful: when there are no requests coming in, no events being served by your service, you will not be paying for anything. Your functions will effectively vanish. So I've touched on this idea that I see Function as a Service as being almost the perfect developer-friendly abstraction for running workloads on the cloud, and it's taken the crown for me from the previous incumbent, which was Heroku. I think Heroku nailed a really great developer-friendly experience for working with cloud-based resources, and in many ways it's still the gold standard for a developer-friendly PaaS, a Platform as a Service offering. You can see how that's played out in so many products subsequently: the command line and many of the concepts in Heroku were widely copied, and Cloud Foundry was built in large part to mimic the Heroku interface. Developers love working with Cloud Foundry, even if operations people aren't always so keen. But on the face of it, something like Heroku doesn't qualify as being serverless. With Heroku you spin up dynos, you spin up web workers; you're still managing what are really abstractions over virtualised machines. So although it's pretty awesome, it's not serverless; it's a slightly lower level of abstraction. By the way, I still think you could do a lot worse than making use of Heroku, even though, unfortunately, since being bought by Salesforce, they haven't done a great job of continuing to grow it to have the impact it once did. These ideas of serverless infrastructure and these high-level abstractions are almost part of our never-ending quest as developers and IT people.
We're always looking to create higher and higher level abstractions to help us get more stuff done. Think about what's happened. We used to have just physical infrastructure: you'd buy a machine and rack it up. Then we started virtualising our infrastructure; think of virtualisation in terms of your standard VMware-style type 2 virtualisation, running on premise to make better use of the physical resources you had. And then of course we had infrastructure as a service, which is really what AWS nailed so well early on: virtualised infrastructure primitives that run in the cloud, which, although they're virtual resources, feel very much like the physical resources, and you can engage with them at that level. Then we started building higher-level abstractions on top of that. Think about the fully managed database services, for example, from Azure and AWS. So the idea of RDS, for example, which is a relational database service: I'm saying "I want a database, it's this big", and behind the scenes AWS is making use of their own virtualised primitives to create that service offering for you. But you're dealing with it at one level of remove: you're not worried about the operating system, though you are worried about the database version and those sorts of things. Then we've got container orchestration as a service: think about the various managed Kubernetes offerings, like Amazon's and Azure's AKS, where we're now another step removed from the underlying virtual machine; we're now talking about container workloads running on those clusters. And then we've got serverless: the various messaging products in this space, the Backends as a Service like Cosmos DB or DynamoDB, and various other types of things the cloud providers give you.
Weirdly, although Google has a fairly narrow product set in general, it has a really deep product set when it comes to databases — it's one of their areas of expertise — and there's a lot of stuff there that fits the serverless, pay-as-you-use model. And of course there's FaaS, Function as a Service. Then we've potentially got an even higher level abstraction, which is maybe what we would consider the traditional Platform as a Service. The traditional view of the platform as a service is "we do everything for you", and the problem with that is that if you didn't like something it did, it was quite hard to break out of the constraints of that system. The thing that serverless does well, I think, is give us higher-level primitives that we can assemble together to give us the platform of our choice. I think it's a really neat balancing act: it gives us a high level of abstraction that allows us to get lots of work done while pushing away a lot of the underlying operational work, but it's still flexible enough that if we don't like one of those components, we can take it out and replace it with something else. And so, as you go up this stack of abstractions, you've got less overhead from the point of view of managing infrastructure, but you're also having to give up control, right? You are having to trust the provider of the platform to get things right for you, and you may not have access to all the low-level abilities to control things. When you launch a function on Azure or AWS, the only control you have is how much memory you're giving it, and that's about it. As you come down the stack, you get more low-level access and much greater control. I actually think that for a large number of organisations — most of the teams I work with — although they think they want more control, they actually don't. They'd actually be quite happy at this higher level of abstraction.
But there's a bit of a fear in giving up control, and some of that fear is rooted in things like vendor lock-in, which we'll talk about briefly in a minute. Even Kelsey Hightower, who's been an amazing champion for Kubernetes — and Kubernetes fundamentally is not a developer-friendly platform; it just isn't, right? It is very good at managing container workloads. But you see that everybody who ends up using Kubernetes and tries to create platforms with it nearly always ends up using layers on top: they buy into something like OpenShift, or Cloud Foundry, which can now run workloads on top of Kubernetes clusters. Developers don't actually like working with Kubernetes directly; it's really too complicated for that, although it's fantastic at densely packing workloads. And Kelsey, who's been a great champion for Kubernetes, is totally on board with this idea of serverless being the future. I think I may have caught him on Twitter during his moment of epiphany: "I understand what the serverless fuss is about. When you have a great idea, the last thing you want to do is set up infrastructure." This is the dream, right? We've got a problem; we can solve the problem; we don't have to worry about the work being done behind the scenes — we offload that to somebody else. And I think with serverless we're getting a bit better as an industry, getting our collective eye in as to what makes for a developer-friendly product or service offering. Coming back to Ken Fromm's article from the beginning: "the phrase serverless doesn't mean that servers are no longer involved. It simply means that developers" — and this is the important thing, developers — "no longer have to think that much about them". This is fundamentally about creating developer-friendly abstractions that make sense to them and their world.
And I think people who are deep into Kubernetes clusters can often lose sight of the fact that, although they understand Kubernetes very, very well, an awful lot of people don't, and don't want to have to. Another take on serverless that I quite like comes from Bilgin Ibryam: functions as a programming model, serverless overall as a billing model. So you can think about serverless as shifting the way we think about billing, and functions, as a subset of serverless, as a programming model: we're thinking differently now about how we structure our programs to take advantage of this new deployment paradigm. I've also talked a lot about this idea of serverless taking work away from you, and there's a phrase I keep coming back to in my head: "undifferentiated heavy lifting". This was a term Amazon used a lot when talking about the mindset behind why they created AWS. They would create these small, poly-skilled teams; the idea was that a small, poly-skilled team should own the delivery, end to end, of a product offering — something that would actually reach the customer. That's still how Amazon operates to this day. It's also partly, I think, how AWS continues to outperform the other cloud providers in terms of being able to deliver new products. The idea was: we own the whole thing end to end, which means we don't have to coordinate with other people, we have a lot more autonomy, and we can go fast. But they realised that those teams were suffering: they had a huge overhead in having to manage the infrastructure themselves. They actually had to buy machines and rack them up and cable them up. And Amazon defined all that work as undifferentiated heavy lifting: difficult work that their teams were having to do.
That work doesn't actually help us achieve our goal of shipping product. Our ability to configure infrastructure as a team doesn't help us in the market. Our ability to recommend a great product to you based on what you've bought — that differentiates us from the rest of our competitors. But we're doing lots of work that we should just not be doing. The idea was: can we get rid of that? Can we outsource it? And so AWS started out, in effect, as a set of services inside Amazon that those poly-skilled teams could use to offload work that didn't differentiate what they were doing from anybody else. Dealing at this level, at a different level of abstraction, you can focus more time and effort on your own stuff. I think what's happening now is that we're seeing more and more of the work we do in the infrastructure space as being undifferentiated. Your ability to run a Kubernetes cluster is almost certainly not going to be a competitive advantage to you, especially if somebody else could run that Kubernetes cluster for you better than you could yourself. If you want to run Kubernetes — and I don't think everyone should, but if you do, great — why run it yourself? Go and pick a major provider and pay them to do it; they'll be better at it than you will. That frees up your time, energy and money to work on the things that your customers want. Your customers probably don't want Kubernetes; your customers probably want good-quality software that helps with their problems. Unless you're a company that runs Kubernetes clusters, in which case, go for your life. This is a continual thing for us as developers: we've been creating higher and higher level abstractions. That's what we do; we're hard-wired for it. Brian Marick, one of the people behind the Agile Manifesto, once said something to the effect that we developers turn caffeine into abstractions. That's what we do, right?
Give us enough caffeine and you get lots more abstractions. We're obsessed by it. Think about that mantra of DRY, don't repeat yourself, that we beat into ourselves in code. We're always creating abstractions to allow us to work at a higher level of remove. We started off with machine code, working at a very, very low level against the underlying infrastructure, the chipsets of our machines. And then we realised, hang on, there's a better way, and we created assembly code, allowing us to work at a higher level of abstraction. Of course, when people started coding in assembly rather than machine code, the machine code people went: oh, that will never catch on, you need to be right at the low level, you want all the control. The assembly people were like, yeah, whatever, granddad, and they got on with their day and started getting more stuff done. And then we realised we could do better than assembly, and we started creating higher-order programming languages that sat on top of assembly underneath. Now, very rarely do you ever actually have to worry about the actual runtime target. Your operating system means less and less; the underlying chips in the machines mean less and less. That shift maybe first happened for me when I moved from being a Fortran and C programmer to being a Java developer; it was significant. The abstraction of the JVM allowed me to say I don't actually have to worry about what I'm running on. I was writing software that ran on Sun SPARC boxes, DEC machines, SGI machines and Windows machines, and I didn't have to care about the differences between them any more. I could just work at that one level of abstraction, and get a lot more done; a lot more productive.
And abstractions, when they work well, can be amazing for us. But there are still some things to consider in the serverless space. While exploring this, I looked into more case studies around serverless, people that have done serverless and what they'd found, and I found some interesting things around aspects like resiliency. Many years ago, I was working at an investment bank. That investment bank no longer exists because of the global financial crisis; I won't say more about that, but I can recommend watching the film The Big Short, although the original book it's based on, by Michael Lewis, is excellent. We were calculating these things called collateralised debt obligations, which were a type of financial instrument, and we were running these calculations on a grid computing system. It's an old-fashioned style of architecture now; it's the kind of thing you might run on a Kubernetes or Hadoop cluster nowadays, though arguably serverless would be a better fit. We had this cluster we'd run these jobs on, with all these machines running as workers, and it had all these upstream resources that we'd pull information from: we'd pull risk data or market data, which would be the inputs into our calculations. Primarily what we were doing was looking at how risky the various trades we were doing were. Of course, we realised in hindsight that we had a very bad idea of how risky these things were. Ultimately we'd do the calculations and stick the results into some kind of database. And that was all nice and good. What happened was we actually used up all the machines that had been allotted to us, and we were looking for ways of scaling up the grid. The software we were using actually allowed these pricing agents to run as screensavers.
The idea was that when people went for lunch, their desktop machines weren't being used, and suddenly you'd have a lot more machines able to run this pricing. So at lunchtime, we realised, if we could use people's screensavers we could get a lot more work done. We also found a whole disaster recovery centre that we had, north of London; the idea being that if there was ever an issue in London, the company's other site could continue trading. The machines in this DR centre were just sitting there, not being used (they did actually have to be switched on), which was great. So we rolled our screensaver out to this DR centre, and overnight our pricing grid scaled from a few machines to hundreds of machines. It went from 25 to 250 in one day. It was amazing, and things started going faster and faster. And then some odd things started happening: everything outside of the grid basically vanished. What had happened was that as we scaled up our workload, we overwhelmed everything that wasn't itself also scalable. We bombarded the upstream services with requests, and we wiped out our database. We had no way to balance or throttle the requests we were making. This is a classic case of mismatched resources: it's like trying to force a load of water into a small pipe, which doesn't work very well. Now, if we think about it in computing, we have lots of things that we take for granted that we use to throttle. A classic example would be a good old-fashioned database connection pool. So I've got a node running here that wants to talk to a database, and we have a certain number of connections that we will allow.
When you want to make a database query, you get a connection from the connection pool that allows you to make queries. As requests come in that require access to the database, they go to the connection pool, grab a connection, and make the query. If there are more requests than we have connections available, they'll have to block and wait until a connection becomes available. The idea is that this allows us to throttle the amount of load that goes through to our database; and in the worst case, we can actually time out the upstream requests, which causes us to shed that load, reducing back pressure. So this is actually a very common thing that we already do without realising it. These connection pools allow you to balance how many requests can go to these downstream resources. That works, though, because ultimately your connection pool is managing connections for multiple requests, and it has some sense of state as well. Between requests it has some shared state: how many requests have I got going on, how long are these requests taking? You can also do things like circuit breakers on top of that. Now, of course, we don't have anything like that when it comes to functions: with Lambda, with cloud functions on Azure, when a request comes in, we launch an instance of a function. That instance of a function is stateless; it cannot share any state, and therefore it has no concept of a shared connection pool. What that basically means is we have no throttling mechanism built into our functions, because a function instance can only throttle on a per-instance basis, and that's no good to us: we could still have 500 function instances running, resulting in 500 requests going to the database.
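To make that concrete, here's a minimal sketch of the connection-pool throttling I just described: a toy in Python, not any real driver's API, but it shows why the pattern depends on shared, long-lived state in a single process, which is exactly what a stateless function instance doesn't have.

```python
import queue


class ConnectionPool:
    """Toy connection pool: at most `size` callers hold a 'connection'
    at once; everyone else blocks, and callers that wait longer than
    `timeout` seconds are shed, which is the back-pressure described above."""

    def __init__(self, size, timeout):
        self._pool = queue.Queue()
        for i in range(size):
            self._pool.put(f"conn-{i}")   # stand-ins for real connections
        self._timeout = timeout

    def query(self, fn):
        try:
            conn = self._pool.get(timeout=self._timeout)
        except queue.Empty:
            # More requests than connections for too long: shed the load.
            raise TimeoutError("pool exhausted, shedding load")
        try:
            return fn(conn)               # run the 'query' with the connection
        finally:
            self._pool.put(conn)          # always hand the connection back
```

The whole point is that `self._pool` is state shared across many requests inside one process; five hundred independent function instances each get their own pool, so the bound evaporates.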
A similar workload running on, say, a normal long-lived application instance, by contrast, we might actually get in front of quite easily, because multiple requests would be handled by a single process. And that causes issues. Now, I wondered whether this was just a theoretical issue: is it actually going to affect people? So I asked around. I said, what about this situation? And a lot of the answers were along the lines of: the database is the problem, just use a different database; use a database like DynamoDB or Cosmos DB, which are designed to just scale up as you need. Which is great, but what about people that can't just go all in on these serverless products? What about hybrid situations, situations where you don't want to migrate your entire architecture to these new things straight away? I saw a really interesting talk by the team from a company called Bustle. This was Steve Faulkner's talk, at NDC Oslo, in 2018 I think it was, and he talked about their experiences of moving over to serverless and making use of Lambdas. They were on AWS, and this exact problem happened to them. They were using Riak as their underlying database; we can have a separate conversation about whether or not you should use Riak as a database, but they had exactly this problem. They were getting overwhelmed by the number of function invocations, and that would completely wipe out their underlying Riak instances. So they actually ended up having to bound how many functions would be invoked, just because of a downstream issue. They also had to do some quite low-level stuff in the networking stacks on the machines running their Riak instances, to shed connections earlier and stop those downstream services being overwhelmed. The problem is this: you've got part of your infrastructure that can scale up quite drastically, and it can overwhelm those things that don't have the same scaling characteristics.
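As an aside, on AWS you can now set that kind of bound platform-side, as reserved concurrency on a function: a per-function cap on how many copies can run at once, which in turn caps the load they can put on anything downstream. A minimal sketch; the function name is hypothetical, and the actual boto3 call is shown commented out because it needs AWS credentials to run.

```python
# Reserved concurrency caps how many copies of one function can run
# concurrently, which in turn bounds the load on downstream resources.
params = {
    "FunctionName": "price-calculator",    # hypothetical function name
    "ReservedConcurrentExecutions": 50,    # at most 50 concurrent copies
}
# With AWS credentials configured, you would apply it via boto3:
#   import boto3
#   boto3.client("lambda").put_function_concurrency(**params)
```

It's a blunt instrument compared with a real back-pressure mechanism, but it's exactly the kind of cap Bustle had to bolt on themselves.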
And so that's something that has to be considered when you have these sort of hybrid applications. Now, some of you might be thinking: aren't you just worrying about nothing? Do you really have thousands and thousands of machines to spin up? Are you worrying about issues that aren't going to affect most people? And the reality is, maybe; a lot of you don't care about thousands of nodes, and probably most of you are dealing with tens or hundreds. But if we're offloading our needs onto a cloud computing platform that claims to be able to scale up, and it can, we still have to deal with the implications of that when other parts of our systems can't handle it. I actually think we're already starting to see, in these hybrid situations, organisations that are looking at hybridising their architectures on the cloud having to build these throttling mechanisms themselves, just to protect themselves against these things. I think for some people, things like App Mesh and the other service meshes make sense: you have something that sits between your functions and external services that has the ability to limit those connections. So maybe service meshes might be the answer here in the longer term, although I think it's early days when we think about how service meshes and functions work together. Security is another interesting one. There were a lot of concerns around security, because people started to realise quite quickly that things like FaaS offerings were just running on containers. To an extent some of that concern was well intentioned, and some of it was correct, but there was also a lot of confusion around the concept of security and containers specifically. When you run your function, it's being run inside a container by the underlying runtime.
Of course, in the early days of Docker, Docker was primarily there to do isolation of resources. It wasn't designed as a completely secure enclave, and there were situations where you could get from one container to another. The underlying Docker runtimes themselves weren't really built with untrusted code in mind. That has improved significantly; a lot of the container runtimes are doing a lot better here. We used to have this mantra that friends don't let friends run untrusted code in containers, because the level of isolation provided by a container was certainly much poorer than that of a normal virtual machine; that too has improved drastically. But what people were worried about was: well, if I'm running my container workload on FaaS, what if another malicious party can somehow break out of their container and access my container? The reality is this is much less of a concern than people realised. Firstly, if your function isn't running, it's sort of not there. And in any case, you're not running a full container: you're running a function inside a locked-down runtime, which itself is running inside a container engine that is also quite locked down, with lots of constraints around it. In some cases they're actually even more isolated than that. I'm not sure if it's still the case, but certainly for a period of time, I believe Azure cloud functions, for example, were actually running in full Hyper-V containers with isolated kernels, giving you additional isolation. So from a security standpoint, this is actually pretty good. Your function is only running if it's being used; if it's not being used, it's not sitting around, and it can't be attacked. And it's very hard for a malicious party to break out of their sandbox anyway; I don't know of any currently known exploits that allow you to do this.
And even if you could, it would likely be an indiscriminate attack anyway, because the chance of you running on the same machine as the person you're trying to breach is pretty minimal. And again, this is the beauty of offloading work. Think about a modern application stack: we've got the underlying hardware; we've got the operating system that sits on top of it; then we've got the hypervisor, because you're probably virtualising your infrastructure; on top of that, you've got your virtual machine's operating system; on top of that, we're running Docker or some other container engine; on top of that, you've got your container's operating system; and on top of that, you've actually got your application. And I've simplified this; I haven't included Kubernetes in here, which would further complicate the diagram. Every single layer of that stack is going to need looking after: management, observation and patching. I wonder how many of you are confident, if you're running, say, a private cloud, that every single layer in that system is being patched, and that those patches are kept up to date. I know that all the public cloud vendors rolled out the Intel patches for Spectre and Meltdown within seven days. I suspect that some of you are still in companies running on hardware that hasn't been patched for those exploits, because we don't think about patching the hardware. And as attacks get more and more complicated, there are more and more moving parts to worry about. Now, if we run our infrastructure as managed virtual machines, working at the level of cloud infrastructure-as-a-service, we can offload a lot of that responsibility to the cloud vendors: the hypervisor, the host operating systems, all that stuff is being handled by somebody else. If we make use of, say, a cloud container platform, we can push even more of those concerns to the underlying provider.
They're now running not only the virtual machines; they're looking after patching the Kubernetes clusters for you as well, and you're just dealing with things at the level of your container, so you can focus your energy and attention on your container and your application. With functions it's even better: even more of the work is pushed away, and now we can really just focus on the stuff we care about the most, which is just our code. Now all you've got to do is stay on top of patching your own code when new vulnerabilities are discovered in the libraries you use. And the great thing there is we have awesome products like Snyk that can really help make this better. Snyk will look at your repos, look at the dependencies you have on third-party products, and send you a pull request when it identifies that you've got a dependency on something which has a critical vulnerability in it. It will say: this thing has a critical vulnerability; here's a pull request; if you accept this pull request, it will update the version you're using to one that has the fix. You can put that into your builds; you can fail builds if you're relying on libraries that are out of date. Snyk is awesome; you should go and use it, and it's quite cheap for what it does. And that's great, because now I've got, say, Microsoft managing most of the stack for me, and I'm just worried about my application. I can get more work done, because more of my time goes on building features that make a difference to my customers. Does my ability to roll out patches really distinguish me and my company from anybody else? Is that a competitive advantage? Should it be something I worry about, or should I just have somebody else manage that stuff for me? They can do a much better job than I can. I think that's really the secret to all of this.
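To show the shape of what a dependency audit boils down to, here's a toy illustration: the package names and advisory data are entirely made up, and a real tool like Snyk pulls and matches the advisory feed for you, then raises the pull request.

```python
def audit(dependencies, advisories):
    """Return pinned dependencies that match a vulnerability advisory.
    Both inputs are plain dicts here; real advisory feeds are richer,
    with version ranges rather than exact pins."""
    findings = []
    for name, version in dependencies.items():
        if version in advisories.get(name, {}):
            findings.append((name, version, advisories[name][version]))
    return findings


# Entirely made-up pins and advisory feed, just to show the shape.
deps = {"leftpadish": "1.0.0", "httpish": "2.3.1"}
advisories = {"leftpadish": {"1.0.0": "CVE-0000-0001"}}
```

In a build pipeline you'd fail the build whenever `audit(...)` returns anything, which is the "fail builds on out-of-date libraries" idea above.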
Let me sidestep for a little bit. I've talked so far mostly about this idea of serverless, and the serverless offerings, being something you can only experience on a public cloud. There are initiatives out there to create serverless-type offerings on premise, and most of this work is being done on top of Kubernetes clusters. Probably the most widely used example of a functions-as-a-service platform on Kubernetes is OpenFaaS; a number of companies are using it nowadays, although there have been other competitors out there: Fission, OpenWhisk, OpenLambda and others, and there'll probably be something else coming out shortly. If I were starting today, I'd take a look at OpenFaaS. We also have Knative coming. Knative originally looked like being more of a community-driven thing, but really Knative is Google saying: here's how we think function-as-a-service type workloads should be done on top of the Kubernetes platform. Unfortunately, Knative has now been taken out of community development and is really being driven entirely by Google. Early on, a lot of work was being done by people like Microsoft and Red Hat to support Knative, and it's going to be interesting to see if that continues now that Google has said: no, this is ours, we're taking it. They've actually kept it out of the CNCF, which is a bit disappointing. Interestingly, I saw a lot of people stall on going ahead with OpenFaaS on premise because they were waiting for Knative, and a lot of those people have now come back to OpenFaaS. Google don't have a great track record of shipping this stuff quickly; they're quite good at talking about what will happen, but it can often lag a long time. It took a very long time for Kubernetes to get to 1.0; it took a very long time for Istio to get to 1.0.
And even when they did, they made fundamental, large-scale changes to how the system works after the fact. I think we can fully expect to see the same sorts of things happening with Knative as well, and that might be exacerbated given the much reduced involvement of other parties. We'll see what happens. For those of us on the public cloud, we don't have to worry about any of this mess; we can just go and use the serverless offerings that are out there. So fundamentally, for me, when I think about what serverless is, it's all about abstractions; it's abstractions all the way down. Coming back to that machine code, assembly code, application code progression: serverless is just a continuation of that. We stopped thinking about chipsets; we abstracted ourselves away. Then we abstracted ourselves away from operating systems. And now we're maybe even at a place where we can abstract ourselves away from things like containers and runtimes. If you want to know more about serverless, I've got a whole two-and-a-half-hour presentation all about serverless fundamentals for microservices, and I'm actually writing all this stuff up at the moment for a second edition of my Building Microservices book. You can find a lot more information about the work I do, and videos of lots of other talks I've done, including ones on security in the context of microservices, over on my website, as well as details of how to get your own copy of Monolith to Microservices. Thank you so much for your time and your attention. We've got, I think, ten minutes or so for questions, so I'm just going to switch over to Slido and see if we've got any questions coming in. Ah, I have lost Slido; I shut Slido down, which I think is extremely unprofessional. So let me just get into the NDC Copenhagen one.
And I'm going to put these questions up on the screen so everyone can see them, just in case any of you have said anything insulting about me. I suppose it also means that if I skip the hard questions, it's going to be pretty obvious to you all. So we've got a couple of questions here; thanks so much. If you want to ask more questions, pop over to Slido and just use the hashtag #NDCCPH. So, there's a comment here: with Azure Functions, you can set your function to limit the number of requests it's allowed to receive. I would have imagined you'd do that at the API gateway, so it would be interesting to know whether that's at the API gateway level or at a more generic invocation level. If you think about it conceptually, these cloud functions just get invoked based on some event. For many people, that's just a call coming in via HTTP or something like that, in which case, on AWS, that normally means it's routed via an API gateway. You can definitely do request throttling at the API gateway level, but there are other sources of requests, and if I have to think about every potential input and throttle each one separately, it feels like I'd rather do that at the function level rather than at the upstream ingest level, if that makes sense. Like, if I drop a file into an S3 bucket, that could trigger a function: how do I throttle that? But this has got a lot better, to be fair. Early on, these controls were not available to us. Certainly with AWS, early on you had a hard overall limit on how many function invocations you could have, but you couldn't even specify a maximum number of function invocations on a per-function basis, which was a real problem. That has now been fixed. There's another question here: when do you think cross-FaaS communication will appear?
In a standardised manner, that is, from one FaaS to another FaaS. I mean, you can do that now; there's nothing to stop you doing this whatsoever. From one function running on Azure, I could hit an HTTP endpoint that triggers a function on AWS; not a problem at all. So I think you can do that in many different ways. The thing to be aware of, if you're bridging across cloud platforms, is really latency and bandwidth. This is always the thing about hybrid or multi-cloud solutions: when you send any information out of one public cloud provider into another, you're being charged on the way out, and then you have to wait however long for that call to happen, because you're actually going over the public internet at that point. So if I send an HTTP request from my Azure function to my AWS function, I get charged on the way out of Azure, and the same applies when the response is sent back out of AWS and into Azure again. Bandwidth, when it comes to cloud pricing, is often the secret killer; it sneaks up on you. People who are building multi-cloud solutions start to get bitten quite early by this, because it tends to be the ingress and egress at the perimeter of the public cloud services where the bandwidth costs are highest. As a result, most people who are doing multi-cloud solutions tend to be quite selective in what cross-cloud communication they do, and I see this stuff happening more often region to region within a single provider, rather than from one cloud provider to another. So, the next question: do I think that GCP and AWS will support .NET Core on their FaaS? I would be very surprised if AWS doesn't; with GCP, who knows. I have no inside track.
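On the bandwidth point, a quick back-of-the-envelope sketch of why it adds up; the rates here are purely illustrative, not any provider's actual price sheet.

```python
def round_trip_egress_cost(request_gb, response_gb,
                           out_of_a_per_gb, out_of_b_per_gb):
    """Rough cross-cloud transfer cost: the request pays egress leaving
    cloud A, the response pays egress leaving cloud B. Ingress is often
    free, but check your provider's pricing."""
    return request_gb * out_of_a_per_gb + response_gb * out_of_b_per_gb


# Hypothetical rates: $0.09/GB out of either cloud, 1 TB each way per month.
monthly = round_trip_egress_cost(1024, 1024, 0.09, 0.09)
print(round(monthly, 2))  # roughly $184/month in bandwidth alone
```

Scale that to chatty cross-cloud function calls and it's easy to see why most teams keep such traffic region-to-region within one provider.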
AWS is actually doing a pretty good job of rolling out new language support. It's behind Azure, I think, but it is catching up. I'd be surprised if .NET Core isn't... hang on a minute, I'm looking this up. I thought this was in beta. Right, I'm speaking as I search... Yeah, you can do it already with custom runtime support. I'm showing you a link here; it's from 2019, hot off the presses. Here we go: Lambda has custom runtime support, which actually allows you to run things like .NET Core, which is great. So it's all there; you can Google this as well as I can. Now, GCP, who knows. For something so conceptually simple as cloud functions, it's amazing to me that they've still got such poor, really limited support for runtimes. I think Java support for Cloud Functions on Google is still in beta, which I find very disappointing, so I have zero expectation that .NET Core support is coming any time soon. But Google are somewhat unto themselves when it comes to these sorts of things, and I am still bewildered by some aspects of their cloud strategy. But then, you know what, I am just one independent consultant, and maybe I'm missing the bigger picture. It's unfortunate, because Google Cloud do have some excellent stuff: the database offerings they've got are amazing, and I think the quality of their managed virtual machines is really first rate. I just think they suffer from a lack of breadth of offerings in some spaces, and the lack of language support on their functions runtimes I do think is disappointing. I'm just going to refresh this and see if any other questions have come in. That was silly of me; now you can see all my mistakes firsthand.
I can quickly see we've got a few people in the Zoom chat, so maybe we'll quickly ask if anyone in the Zoom chat wants to ask me a question. You can ask in the chat, or you can just use your voices and your microphones if there's anything you want to ask me. No? Okay. Well, thank you so much for your time; I am available on the internet if you want to come and ask me questions. Oh, here's one: do you think Azure has managed a better development experience for their FaaS compared to AWS, having started earlier? There is one specific area where I think Azure has done a great job with the developer experience, and that's with Durable Functions. It's an obvious thing that you'd want a lot of functions to work together, maybe as part of an orchestrated workflow. For that, AWS has Step Functions, which basically ask you to write loads of JSON to coordinate how these things talk to each other, which from a developer's point of view is rubbish. Whereas Azure's Durable Functions embrace the idea of continuations and hook them into the code itself, so from a developer's point of view you can basically do things like an async/await on other function invocations. That's really, really cool, and at that level Azure is better. At another level, they're both doing terribly. Conceptually, function-as-a-service is such a simple thing, such a simple concept; but if you go through the getting-started guides, or try to use the consoles, it's a horrendous experience. There are companies like Zeit, for example, really showing what a good-quality developer experience should be. So there's actually still too steep a learning curve, I think, to getting on board with these platforms. Once you're in, it's a lovely place to be. This is actually why, for function runtimes, I do suggest people take a look at the Serverless Framework, which is stupidly named: the Serverless Framework. Here we go; they got serverless.com.
Clever them. It's an open-source product for working with various different serverless product offerings, and it gives you a really nice command line for working with different function invocations; to an extent, it gives you abstractions over the different underlying providers. I think it even works with OpenFaaS and the like. So if you want to give cloud functions a go for the first time, rather than spending a couple of hours on the getting-started guides, I would say: download the Serverless Framework, pick your cloud platform of choice, and you'll get started a lot quicker. There's definitely work that could be done to improve the experience on all of those platforms. Oh, the other area where Azure has done really well is around debugging. Their local and remote debugging story around cloud functions is way better than Amazon's. But again, I think that speaks to a mindset at AWS: they don't like to get involved in what happens inside the box, whereas Microsoft is quite happy doing that. For Durable Functions to work, you actually need support inside the SDKs running on the cloud platform; likewise, you need the same if you want to do local and remote debugging of your cloud functions. AWS have never really wanted to go there, and I think that mindset might limit how deeply they can get involved in those areas. The debug story around Lambda functions is not great; they use a sort of emulation to do it, which is not as good as the real thing. Anyway, I have rabbited on enough. Just double-checking for other questions; we've got a couple of minutes left. Okay, so: the Serverless Framework versus Terraform. I would not put these two things in the same bucket. I would not use Terraform in the same place I'd use the Serverless Framework. Terraform's great.
But Terraform is something I would use for managing my wider cloud resources; I think it's really, really good there. Another example of that would be something like Pulumi; I like Pulumi a lot, so definitely take a look at Pulumi. But I almost see the role of things like Pulumi and Terraform as replacing Puppet and Chef for managing cloud-based infrastructure; they're not necessarily something I would point a developer at. Honestly, I think most developers would be quite happy using something like the Serverless Framework; but if you're interested in Terraform, do go and take a look at Pulumi too. For me, though, I wouldn't mix them up: I would use the Serverless Framework as my developer command-line tooling for working with my cloud functions, and I might use Pulumi or Terraform to set up the wider cloud infrastructure that my teams use. Again, personal preference. If you want to know a lot more about Terraform, Pulumi and how they fit into this whole ecosystem, an old colleague of mine, Kief Morris, has written a book called Infrastructure as Code. The second edition of Infrastructure as Code is actually available over at the O'Reilly site, so if you want to know more about this whole space and how these different tools fit in, do head over there and you can find out how to get early access to all that material. And with that, we are at time. Thank you so much for the great questions. I'll pop over to Slack, and I'll be in Slack for ten minutes or so answering any questions you have; but if you miss me on Slack, feel free to ask me questions on the internet. There is loads and loads of information out there about me and the other work I do, including consulting services; you can find me on the internet. Thanks very much for your time.