Video details

Building Rock SOLID Serverless Applications | William Liebenberg

Serverless
01.25.2021
English

William Liebenberg

The world has embraced Serverless computing, but some systems still end up with the same problems we thought would be a thing of the past. I can hear someone in the background whispering "Distributed monoliths!" In this talk, I start by covering how we should apply familiar SOLID principles to our Azure Functions software architecture and infrastructure so that our systems are light, easy to maintain and benefit from unbounded scalability. I follow through with some hardening techniques involving event messages and triggers that boost our application's resiliency to outages, its security, and its low coupling. After this talk you can go and build Rock SOLID Serverless applications that will be able to withstand the test of time.
Check out more of our talks at:
https://ndcsydney.com https://www.ndcconferences.com/

Transcript

Hi, and welcome to NDC 2020 and my talk about building rock solid Serverless applications. My name is William Liebenberg. I'm a Solution Architect at SSW. You can find me on Twitter, I've got a website called Azure Gems where you can find some tips about Azure, and you can find me on LinkedIn if you want to engage a bit more formally. Just a bit of my background: I started programming at about five years old. I wrote my first program on a SpectraVideo 328, a big, clunky, beige looking thing, and it had a whole 8K of memory; everything else had to be loaded from cassette tapes. This was long before the Internet and CD-ROMs and all that kind of stuff, so everything was pretty basic back then. I started developing professionally in 2004, on what would be a pretty low-res computer these days, but still we felt like we could rule the world with source code. Eventually I moved over to .NET version 2 — I skipped version 1: what's this crazy new technology, it's not going to last? Happy to say I was wrong. We've got .NET 5 these days; it's been with us for so, so long and it is fantastic. I spent many years in the 3D industrial training industry doing virtual reality, and I was lucky enough to get one of the very first Oculus Rift devices in Australia, to help train people to do their jobs safely so they don't hurt anyone else or themselves. Then I fell in love with Azure in 2014, about six years ago, and not a day goes by that I don't use it. It's definitely so big there's something for everyone, and I try to have my own small slice of Azure. So let me start off by looking at what clean code and the SOLID principles are, then how we apply them to building, or architecting, our serverless applications. After that we'll see what we can do to harden our applications, add some resiliency, and make sure they perform and scale efficiently.
And yeah, let's get started with clean code. So why bother with clean code? Well, I'm sure we've all been in this scenario: the expert reviews, or code reviews, where we measure the quality of code — and the standard measure is WTFs per minute. Hopefully the SOLID principles will help us keep that rate very low. At one end of the spectrum you have the duct-tape coders: they just quickly fix it up and say "it works, whatever, we'll take care of it later — but it works right now." On the other end we have the architecture astronauts, and they can talk about architecture all day long and never actually write any code, never get anything out the door. And they're very expensive, these guys, so is it worth it? I think we should find a middle ground, so that we can write nice, simple code without spending too much time on it, have good architectures that are easy to reason about, and get a product out the door — have something of value out there. With clean code there are two main areas we want to tackle: code should be easy to understand, and it should also be easy to change. Easy to understand means that when you read the code, it just makes sense — everything our classes do, and the way they interact, is nice and simple, and you don't have to spend a lot of time trying to figure it out. Then, when we need to change our code, things should only have one reason to change — that single responsibility is very important. And again, just looking at a class's interface should be simple and easy to understand, and it should do exactly what it says on the box. You don't want a method called CreateNewCustomer that actually goes and deletes a customer. That's just not nice. So all of these things put together are very important.
And it's all been summed up in the five SOLID principles, popularized by Uncle Bob — Robert Martin — in his book Clean Code. If you're not familiar with these principles yet, that's fine, we're going to go through them; but if you are, just put your hand up in the chat there and let me know what you think. One side effect of writing code is that you actually end up with a lot of code and a lot of files, so you need a way to organize it, and this is where Clean Architecture comes in. Clean Architecture gives you the ability to write an application that is independent of any frameworks and fully testable. You can take a UI that was originally written with React and swap in Blazor, the latest and greatest, or you can swap from SQL Server to Cosmos DB, and the core of the application shouldn't need to change for any reason. It doesn't depend on anything external. So, for instance, if you're upgrading a dependency, the core of the application shouldn't need to change. And this is all possible because the dependencies all point inwards — you can see the arrows there, nothing's pointing outward — so any changes you make on the outside don't change the code in the middle. If you want a template for getting a Clean Architecture project up and running in seconds, check out Jason Taylor's CleanArchitecture solution on GitHub, and for a few more rules about Clean Architecture with a lot more code examples, you can check out the clean architecture rules at SSW. OK, so now we know what clean code is about and what the SOLID principles are — how are we going to apply that to serverless? Take a simple application, a classic example: it's just a web server, a database and a static site. These days in Azure we'd call it a Static Web App, so the diagram is slightly out of date already, but it's very much the same thing.
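The talk's template is C# (Jason Taylor's CleanArchitecture solution), but the dependency rule itself is language-neutral. Here is a minimal sketch in Python, with hypothetical names (`CustomerRepository`, `CreateCustomer`), showing the core owning the abstraction while the outer layer implements it:

```python
from abc import ABC, abstractmethod

# Core layer: depends on nothing external, only defines the contract.
class CustomerRepository(ABC):
    @abstractmethod
    def add(self, name: str) -> int: ...

class CreateCustomer:
    """Core use case: knows only the abstraction, never the database."""
    def __init__(self, repo: CustomerRepository):
        self.repo = repo

    def execute(self, name: str) -> int:
        return self.repo.add(name)

# Infrastructure layer: points inward by implementing the core's interface.
# Swapping this for a SQL Server or Cosmos DB version leaves the core untouched.
class InMemoryCustomerRepository(CustomerRepository):
    def __init__(self):
        self.rows = []

    def add(self, name: str) -> int:
        self.rows.append(name)
        return len(self.rows)  # pretend this is the new row id

use_case = CreateCustomer(InMemoryCustomerRepository())
print(use_case.execute("Ada"))  # → 1
```

The arrows in the diagram correspond to the constructor argument: the outer layer is handed to the core, never imported by it.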
And we've got readers that go to the site and authors that go and add new content. But in the worst case scenario, when things go wrong, they go really wrong. Say the server dies, and more things along with it: we can't get to our application, we can't even get to the database. We start to panic, and we don't really know what our user experience is right now, because even the static site can't access the API for any information. OK, so we go away and try to figure out what we can do to make this better, and you will definitely run into serverless. But why would you want to go down the serverless path? Well, when the server goes down, that's no longer your problem — the service providers, AWS, Azure, GCP, will just take care of it for you. You also generally tend to spend less money with serverless, because of the way it scales on demand, so you can get rid of that over-provisioning problem — or under-provisioning, even. Digging a bit deeper into why you'd want to use serverless: it removes, or abstracts, all the hardware issues away for you, and you just focus on your code — a very simplified programming model. It even helps to simplify your architecture: applications are smaller and have fewer dependencies; you might have just one database per service, rather than one application with access to multiple databases, and you get independent services. And, as I said, it helps with the problem of over- or under-provisioning your infrastructure and resources, so you don't have a gigantic server that's only being utilized at five percent, or that's always running at 100 percent so you can never fulfil all the requests. With consumption-based billing you basically only pay for what you use and no more. And we get event-driven scaling.
That's very important, because your application can do things like look at blob storage and react when a file is uploaded somewhere, or when someone sends you a message — not just any message, but one from a message bus, or SignalR, for instance. So whenever those messages come in and you start getting a spike, our application can quickly scale out and take care of it, keeping things running smoothly for us. And of course the applications are smaller, they have fewer dependencies, and they start having less coupling, so we don't have a gigantic, monolithic service that needs to know about five or six or more dependencies down the track. Everything is loosely coupled, it's easy to test, and you can have much more confidence once you do release your application. Looking at the serverless services available in Azure: for compute we have Azure Functions and Logic Apps, and we can even run everything inside of Kubernetes, or as container instances — pretty cool. When it comes to data there's Cosmos DB, there's even serverless SQL now, I think, which is really cool, and storage. So we can basically just be concerned about our data; we don't have to worry about the hardware at all, we just put that record or that file down and it works for us. Another set of services to look at is the messaging services, and we have quite a lot of them available: Azure Service Bus, Event Hubs, Event Grid, even SignalR Service — they're all available, they're all serverless, and they just scale to whatever demand we need. I'm going to pick these three. I'll start with Azure Functions, because they scale elastically — I want my application to quickly spin up more instances when it gets used heavily and, when things quiet down, scale back down to zero. And the nice thing is where we can run Azure Functions: not just in the cloud — we can even run them on-premises, so anyone can run them.
Then there's no more hardware for us to be concerned about. Cosmos DB provides a simple document model, but it's actually not just one model — there are a few other varieties we can use with Cosmos, like graph databases, columnar databases with Cassandra, and even MongoDB. So a lot of models are supported, which is really neat, and it can elastically scale. Our application can ask Cosmos DB to quickly scale up when a heavy operation is coming in and, when it's done, scale back down — and that happens very fast. A really cool feature we have as well is change notification. What this is: once you write a record, or an item, or a document — everything's got more than one name — when you write a document to Cosmos DB, you can actually trigger some custom function code to run. That's useful for a concern that's maybe not part of your main application, but that you still want to react to: maybe refresh a cache somewhere, or generate a separate report, and you don't want to have to bake that into your main application. This is really nice: you can take care of some of those cross-cutting concerns using the change feed. And finally, with Service Bus, when it comes to sending messages we get guaranteed delivery: once a message goes into the bus, it will always be able to be read at the other end. And we can do that with different things — we can use just plain queues, or we can use topics with subscriptions and filters, and we'll look at those a little later. Very, very cool. So we take that classic application we saw earlier and, after a bit of playing around, these are the resources we end up with: we split the application up into a few smaller pieces, we use Service Bus to send messages, and we have our Functions listen to a topic. Whenever a message arrives, it triggers that Function application to run and produce some sort of output.
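The topics-with-subscriptions-and-filters idea can be sketched without any Azure SDK. This is a toy, in-memory Python model — the `Topic` class and its filter functions are illustrative, not the Service Bus API — showing how each subscription only sees the messages its filter matches:

```python
# A toy topic: each subscription has a filter, and only messages that
# match the filter land in that subscription's queue.
class Topic:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, filter_fn):
        self.subscriptions[name] = (filter_fn, [])

    def publish(self, message: dict):
        for filter_fn, queue in self.subscriptions.values():
            if filter_fn(message):
                queue.append(message)

    def receive(self, name):
        _, queue = self.subscriptions[name]
        return queue.pop(0) if queue else None

topic = Topic()
topic.subscribe("images", lambda m: m["type"] == "ImageUploaded")
topic.subscribe("audit", lambda m: True)  # audit subscription sees everything

topic.publish({"type": "ImageUploaded", "path": "cat.png"})
topic.publish({"type": "MarkdownAdded", "path": "post.md"})

print(topic.receive("images")["path"])  # → cat.png
```

This is the shape of the final architecture: the resize-images Function only subscribes to image messages, while other Functions subscribe to their own slices of the topic.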
And then for our database, like I mentioned earlier, we use the change feed. It's very handy: when a record is written to the database, it triggers an event and runs some custom code to take care of something that's not built into the main application — maybe not part of the application at all. OK, so let's apply the SOLID principles to our serverless applications. The first one is the single responsibility principle: a thing should only have one reason to change. This principle actually goes hand in hand with one further down the line, the interface segregation principle, but I'll cover them separately. An example of SRP in our application is the resize-images application. If I need to change this application, it's only because I need to do something about images. It has absolutely no concern about the markdown that we process in another application, so I won't make a change here and have something else wiggle around down the line. It's basically just focusing changes on one small bit of functionality, one application. Same the other way: if I change the markdown rendering, I don't want anything to change in the resize-images application at all. The next two principles I sort of combine together to build something really nice. The open/closed principle means open for extension, but closed for modification. Imagine we have version one of an application out there and we want to improve on it — we don't want to just rip applications out of production. We can do that in a really nice way with Azure Functions. And LSP, the Liskov substitution principle, is that we can take an implementation and swap it out for something that does the same thing. Like I mentioned, in our example it's just a small slice of the application: the resize images.
Again, let's say we now want to write version two that does the resizing quicker — maybe with a bit higher quality — but I don't want to have to swap out version one straight away while I'm developing version two. So I can actually deploy version two immediately. It can read the same messages as version one, yet I can have it write its output to a different location, or anywhere else, just to have some way of monitoring the output. And once I'm happy with it, I hook it up to write its output back to production and turn off the messaging for version one. Now version one is gone, version two is running, and it all happened quickly and easily. And then there's the interface segregation principle, and this one for me is very important. People tend to write massive great big apps, and we end up with monolithic serverless applications. So when you have an application, try to limit the number of endpoints, or the number of features, you're building into it. There are a few basic benefits you get: you have fewer dependencies, and the application binaries are potentially going to be smaller. That helps to improve not just the cold-start times of serverless, but also helps your applications scale out quicker. When that event-driven demand comes in and your application has to scale, in the background Azure has to provision some VMs, copy the code from a storage account onto the VM, and spin it all up — and if your application is small, it starts quicker and you can react quicker. So instead of one huge application, we have small pieces that each do one bit of functionality — in this case resizing images — and each can scale quickly and quietly; we can have many instances running, doing a lot of the work very quickly and efficiently.
We can scale out just one small bit of functionality of our application, unlike the classic version where we have to scale up the entire application. We don't need huge amounts of horsepower to turn markdown into HTML, for instance — but images, for sure, they take a lot more processing power. This is very, very nice. The fifth principle is the dependency inversion principle — as it's often put, code towards the interface. And for a long time I was trying to figure out, hey, how am I going to take this principle and apply it to a serverless architecture? The key word here is code: it's about what we do inside our applications, not what the applications do to each other. There's no easy way for me to apply this to the architecture itself, other than being able to actually do it inside our serverless applications. For a long time, our serverless applications didn't have a way of doing dependency injection, which is important for clean code and coding towards the interface, so that we can later swap out an implementation. But now it's really good, and it actually helps us write really clean applications, so I'm still going to take that one as a win. As an example, here is one of the Service Bus triggers that receives a message from a particular topic and subscription. What it's doing is deserializing that message into one that we can understand. In the past it was the same with HTTP: messages can be considered as requests to an endpoint. I like to keep these triggers very skinny and very simple to understand — just like with classic Web APIs, where we have a very thin controller and a very small action. We've injected the dependencies via the constructor for our serverless application, and it's very easy; it all fits on one screen, essentially. Then that incoming request is sent through MediatR, and it gets handled here as an asynchronous request handler, where we again have dependency injection through the constructor.
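The actual code shown in the talk is C# with Azure Functions and MediatR. As a rough, language-neutral sketch of the same shape — a skinny trigger, constructor-injected dependencies, and a mediator dispatching to a handler — here is a Python version where all names (`ResizeImageRequest`, `FakeStorage`, etc.) are hypothetical:

```python
# A MediatR-style pipeline in miniature: the "trigger" stays skinny and
# just forwards a request; the handler gets its dependencies injected.
class ResizeImageRequest:
    def __init__(self, path, width):
        self.path, self.width = path, width

class ResizeImageHandler:
    def __init__(self, storage):          # dependency injected via constructor
        self.storage = storage

    def handle(self, request):
        original = self.storage.download(request.path)
        return f"{original} resized to {request.width}px"

class Mediator:
    def __init__(self):
        self.handlers = {}

    def register(self, request_type, handler):
        self.handlers[request_type] = handler

    def send(self, request):
        return self.handlers[type(request)].handle(request)

class FakeStorage:
    """Stand-in for blob storage; swappable thanks to the injected interface."""
    def download(self, path):
        return f"bytes of {path}"

mediator = Mediator()
mediator.register(ResizeImageRequest, ResizeImageHandler(FakeStorage()))

# The function trigger would do nothing more than this one line:
print(mediator.send(ResizeImageRequest("cat.png", 200)))
```

Because the handler only knows the injected `storage` abstraction, testing it or swapping the storage implementation never touches the trigger.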
And then in the handling method, we have the incoming request object and we can, in this case, go and download the original image, resize it and return the output. So looking back at serverless and the SOLID principles, we can pretty much tick each and every one of these off. Now we have at least five good principles that we can use to reason about how to architect our serverless applications. We're not stabbing in the dark — these are things that have been tried and tested over the last 30 years or so; some of them have a long, long history, and that makes life easier. It lets us not just write simple code, but build simple architectures that are easy to understand and make sense. Everything does exactly what it says on the box. Cool. So now we know how to build some cool serverless applications. But how do we harden an application once it's been released into the wild? Things can happen out there — so what do we want to achieve? Well, we don't want to lose operations. When an order comes in, for instance — someone fires a message into the system — and something happens, a server runs out of memory, restarts, whatever, we don't want to lose that operation. That's potential revenue that we're basically missing out on. So we want to make sure that we don't lose those operations: we keep them even when something goes wrong in the middle, and we resume the operation as soon as possible once things come back online. We want to keep going and not be too concerned, because we want to provide a highly available service. So when do we start panicking? We always imagine natural disasters — people picture the data center on fire — but that almost never happens. Lightning strikes? Very nearly never happens. But what does happen often is someone accidentally restarting a service — I'm sure we've all been there — or it's just part of a normal CI/CD pipeline or DevOps cycle.
When you deploy your application, you have to take it offline and swap in the new version — that just has to happen — and in that window customers can still access your application and it may not work. And then there are transient errors; these things happen. For instance, with that classic application I showed earlier, suddenly I want to convert 50 images and my storage just can't handle it, or I need to store a whole lot of documents in the database and I'm just running out of connections. Those are the sorts of transient errors that do happen. Or even just when you make a configuration change to your application — often it has to restart for the change to take effect. All of these can make you panic, and we want to not panic and not lose any valuable operations. Looking at the setup for our basic application again: when things go wrong, we lose all connectivity, we really start panicking, we don't know how users are experiencing the application, and it may or may not come back at all. So how do we solve this? How do we get rid of this brittleness and make our applications nice and resilient? One way we can achieve that is to use asynchronous messaging. This is where the Azure Service Bus type of service comes in — it's not the only one, but it's the one I've chosen for today. Because HTTP requests, they suck: they have timeouts, and you can't really recover from a timeout — once it happens, the operation is lost. If the user doesn't want to come back to your website to submit that order with the click of a button, well, you've missed out; it's not going to happen. The nice thing about messages in these sorts of services is that they are persisted over outage periods.
So as soon as you've received that message, it's yours, and you can keep it for a long time and make sure that you process it at some point, so that the buy or sell operation still goes through and you still have a happy customer. Messaging also reduces coupling between applications. When we think about clean code, there's the saying "new is glue": if we couple our classes together, they're very hard to refactor, hard to change, hard to swap out behaviours. So when we use asynchronous messaging through a bus, we don't need to couple services together. We literally say: hey, I'm raising an event, I'm throwing a message onto the bus saying "order created", and any service that's listening to that bus can pick it up and react to it. We don't know who it is. So we can basically fire-and-forget the messages: this one service has done its bit, it throws the message onto the bus, and the next service picks it up and continues the process. It's really nice. But even with such a nice service, we have to look at which patterns to use to make it highly available — and we can actually lean on some already highly available services. Cosmos DB, for instance, makes it really simple. You can spin up Cosmos and say: I want to run this not just in Melbourne, but in Sydney, New York, L.A. — that's my customer base — and I can literally do that with four clicks. It's that simple. And you can turn on a feature that makes it even better, called multi-master write. To explain what that means: with SQL Server, of course, SQL Server can run across the globe in multiple regions, but you can only write back to the single master node. So if I have a customer in New York wanting to write to the database, that request has to go all the way down to here in Melbourne, if that's where my master is.
And that's a long latency for the operation to complete. By turning on multi-master write in Cosmos DB, that customer in New York can write to the New York instance straight away, and read whatever information from that database straight away as well — it's highly available — and Cosmos actually takes care of the replication all the way back to Australia for us, so the master in each region has the same information. And then, yes, if anything happens to our Sydney or Melbourne nodes, the ones in America are all still functioning fine; nothing's wrong with the application, it's all good. Then for our applications, instead of directly sending messages right away when we need to, we can have a persistent store inside the application, just to keep track: these are the messages I need to send, and I'm going to work my hardest to send them as quickly as possible. So even if the application goes down halfway through sending a message, when it starts back up it knows it didn't finish that message, and it can resume and send it out. That's really valuable, and it's not actually very hard to implement in your applications. And then there's the retry pattern. This is for instances where, let's say, I'm trying to select from the database and I run out of connections for some reason — that's a transient error. So instead of reusing that very same connection, we can retry with a new connection until we hit a hard failure. There's a very popular library we can use for .NET for this, and it's worth implementing these patterns to make sure the app is resilient — before you hard-fail, really try your best to recover. Now let's look at our serverless application when things go wrong — and nothing's perfect: even though it's serverless, things do go wrong. When do we start panicking?
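A minimal sketch of that retry-on-transient-error idea in Python — the talk refers to a popular .NET library for this; the `retry` helper and `flaky_query` below are illustrative, not that library's API:

```python
import time

def retry(operation, attempts=3, delay=0.01, transient=(ConnectionError,)):
    """Retry an operation on transient errors, taking a fresh attempt each
    time, before giving up with a hard failure."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except transient:
            if attempt == attempts:
                raise                     # hard fail only after the last attempt
            time.sleep(delay * attempt)   # simple linear back-off

calls = {"n": 0}

def flaky_query():
    """Simulates 'out of connections' twice, then succeeds on a new attempt."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("out of connections")  # transient
    return ["row1", "row2"]

print(retry(flaky_query))  # → ['row1', 'row2']
```

The important part is that only errors classed as transient are retried; anything else should still fail loudly straight away.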
Well, as soon as the applications start coming back online — they won't all come back at the same time, it might take a little while for them all — let's say the first one comes online. That's OK, because the messages are persisted in the bus: a message is not removed from the bus until it's been processed successfully. So that's really, really nice, and it's the same with the other Functions. Even for Cosmos DB: when they come back online, they go, you know, there have been a number of changes since I last read the database, so we'll just resume from that checkpoint and keep on processing. We've literally lost nothing. And now it's not just messages — we've got documents coming down the wire too, and we can process them as soon and as quickly as possible. Eventually everything's back online, we've missed no operations, and we can stop panicking. Now, looking at Service Bus itself, we need to make it highly available. Normally you spin up just one instance, but it's in one region, and anything in a region can fail. You always have to have something available elsewhere in the world where there's no disaster, so you can fail over to it — and hopefully somewhere not too far away: far enough to be safe, but not so far that you get too much latency in your application. So what we do is take our Service Bus, deploy it into two or more regions, and then look at, say, an active-active replication pattern. What that means is that our sending application sends the same message to both service buses. But on the receiving end, we don't want to process the same message twice — or at least, if we do process the same message twice, the second one must have no negative impact, no side effects.
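The resume-from-checkpoint behaviour of the change feed can be sketched like this — a toy Python model, not the Cosmos DB SDK, with all names illustrative:

```python
# Change-feed style processing: the consumer records a checkpoint after each
# document, so after an outage it resumes from where it left off instead of
# reprocessing (or losing) documents.
class ChangeFeedConsumer:
    def __init__(self):
        self.checkpoint = 0     # index of the next unread change
        self.processed = []

    def poll(self, feed):
        for document in feed[self.checkpoint:]:
            self.processed.append(document.upper())  # stand-in for real work
            self.checkpoint += 1

feed = ["doc1", "doc2"]
consumer = ChangeFeedConsumer()
consumer.poll(feed)

# ...outage happens, new documents arrive, the consumer comes back online...
feed += ["doc3"]
consumer.poll(feed)
print(consumer.processed)  # → ['DOC1', 'DOC2', 'DOC3']
```

In the real service the checkpoint itself is persisted (in a lease container), so even a restarted instance picks up exactly where the last one stopped.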
The term for that — one of the harder words to say, and everyone's got their own version of it — is idempotent. So we have to find a way to de-duplicate the messages at the receiving end, so we only process each message once. Not everyone I've talked to has actually been able to achieve this — it's actually quite hard. It sounds simple, but it's a fair effort to make it work. You have to have a store available at each application that receives a message, to look up correlation IDs and all that stuff — but then, for how long do you keep all that information? There's a lot of complexity involved. It works, obviously, but it's very hard to do well. A better approach would be active-passive replication, which is actually the recommended pattern from Microsoft for Service Bus, because we don't have to deal with that de-duplication of our messages — it's quite simple. We still have two buses — one is the active bus and one is the passive bus — and they're both in different regions. We always send messages to the active bus first, and only when that send operation fails do we fail over to the passive one. But our receiving application listens to both, just like with active-active. So what does active-active look like? We have our sending application at one end, and it sends the same message to both of the buses. Our listening application receives it, but then we have to go and de-duplicate that message before we can actually handle it or process it in our application. It looks simple, but that one step is sometimes very hard to implement properly. Active-passive replication looks like this: the sending application sends a message, it goes through the active bus, and the listening application receives and processes it. No problem.
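A minimal sketch of that receive-side de-duplication, assuming an in-memory set of seen message ids where a real system would need a shared store with an expiry policy (all names hypothetical):

```python
# De-duplication at the receiving end: remember the message ids we have
# already handled, so the same message arriving twice has no side effects.
class DeduplicatingReceiver:
    def __init__(self, handler):
        self.handler = handler
        self.seen_ids = set()   # in practice: a shared store, with a TTL

    def receive(self, message):
        if message["id"] in self.seen_ids:
            return "duplicate, skipped"
        self.seen_ids.add(message["id"])
        return self.handler(message)

received = []
receiver = DeduplicatingReceiver(
    lambda m: received.append(m["body"]) or "handled"
)

msg = {"id": "42", "body": "order created"}
print(receiver.receive(msg))  # → handled
print(receiver.receive(msg))  # → duplicate, skipped  (copy from the other bus)
```

The hard questions the talk raises live exactly in that `seen_ids` line: where the store lives, how it is shared between instances, and how long entries are kept.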
Message two goes the same way through the active bus — and we can retry that if we want to, using the retry patterns — but when that fails, we then send the very same message, message number two, to the passive bus, and the listener picks it up and just processes it. So you'll receive each message only once, from one of the buses, and you're never going to receive the same message twice. Nice and simple, and it works really, really well. But then, how do we make our Azure Functions highly available? Well, again, we start by putting them into multiple regions, so there are multiple instances of the application running in Azure. But look at the active-passive pattern from before: each application would have to know that there are two receiving applications in two regions, so we'd have to build a lot of logic into our applications to send a request between the services and try to fail over manually, which is quite a bit of annoying work. So instead we look at the active-active pattern, which actually is the recommended pattern for serverless applications — or at least anything that goes over HTTP. The problem with active-active is that I'm going to receive the same request more than once, and that makes me sad, because as we saw, we then have to do de-duplication to make sure that if we run the same operation a second time, there's no weird side effect. But there's actually a very cool service we can use to solve this problem for us: Azure Front Door. Once we spin up Azure Front Door, we get a new public endpoint for our application, and we register all our APIs from the different regions in Front Door. We can even set up some awesome routing policies if we need to, and Front Door will actually do an instant fail-over between regions when a request comes in.
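The active-passive send path can be sketched in a few lines — illustrative Python, not the Service Bus client; a real implementation would also retry the active send a few times before failing over:

```python
# Active-passive send: always try the active bus first, and only when that
# send fails, fall over to the passive bus in the other region.
def send_active_passive(message, active_send, passive_send):
    try:
        active_send(message)
        return "active"
    except ConnectionError:
        passive_send(message)
        return "passive"

active_queue, passive_queue = [], []

def broken_active_send(m):
    raise ConnectionError("active region is down")  # simulate an outage

result = send_active_passive("order created", broken_active_send,
                             passive_queue.append)
print(result, passive_queue)  # → passive ['order created']
```

Because the receiver listens to both buses, the message still arrives exactly once: each message only ever goes to one bus, so no de-duplication step is needed.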
And in fact, it will find the closest or fastest region to route to for any request that comes in. So you can see we've got two sets of APIs, in Australia Southeast and Australia East, and Front Door just takes care of sending each request to the right one. Traditionally, if we used SQL Server as the database, we'd actually have to set up a SQL server in both regions and then set up replication between those regions. But with Cosmos DB we don't have to worry about that, because we can just tick a couple of features and it takes care of it for us. So that scenario actually becomes really simple, nice and easy, and you don't really need a DBA or database engineer to set up that replication for us. Now, if we take the two patterns that we looked at for high availability and combine them into the whole setup: we have Front Door running our applications across regions, and we have a highly available Service Bus setup, following that active-passive pattern for Service Bus. This is what it looks like — at first, with all the arrows, it's mind-boggling, but it actually works really nicely. Each region has its primary and its passive bus, and they ping-pong, or switch, depending on which region it is. At first I thought, hey, this is going to be hard to build, it might not work — but it works really nicely, and it was far easier than I thought, so do give it a try. Now, just looking at Service Bus messaging and sending messages, a couple of tips: in the base message that you send, include all the basic information, so you can tell what message type it is. Correlation IDs are important too, so you can track an operation across multiple services or multiple operations, and even know where it came from. Anything that's valuable, have it available in your base message.
Everyone can have a common way to handle the message. So in our Azure Functions, we use the Service Bus trigger to listen for incoming messages on a particular topic, and to send messages we can use the TopicClient. You can also use the Service Bus output binding, but using the TopicClient deeper inside the application is usually a bit more convenient. If you want some code, there's a fairly big project you can look at: a modern enterprise Service Bus workshop that Brendan Richards and I did a few months ago for NDC Melbourne. You can go and grab all the code; it covers pretty much everything that we've talked about up until now. Just to give you a bit of a taste of what's in there, here is a way of sending messages. To fit on one screen, it's not the version that does the failover to the passive bus, but that would be pretty easy to add. We start off by making sure that any base message that comes into our message sender is serialized to UTF-8 encoded JSON, and we make sure to include the .NET type as the label for that message, so that at the receiving end we can deserialize it back into a concrete object. Next is the message reader that we have on the receiving application. This is how we dig into the JSON payload that comes back: we find the message type from the label, resolve it to the .NET type, and then use that for JSON deserialization to get the concrete object. And just like earlier, here's a Service Bus trigger for an incoming message. We read the message, and because in our case the base messages are always MediatR requests, I can use MediatR to send it through the pipeline, and eventually a handler in the application takes care of that message. You can see the message coming in, the handler has the original request, and MediatR takes care of it. So, very nice and simple.
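The receive side described above, reading a type label, resolving the concrete type, and dispatching to the matching handler, can be sketched like this. It's a toy Python stand-in for the Service Bus trigger plus MediatR pipeline, not the workshop's code; all names are hypothetical.

```python
# Illustrative sketch: resolve a handler from the message's type label,
# deserialize the JSON payload, and dispatch. A toy stand-in for the
# Service Bus trigger + MediatR pipeline described in the talk.
import json

HANDLERS = {}  # message type name -> handler function

def handles(type_name):
    """Decorator that registers a handler for one message type."""
    def register(fn):
        HANDLERS[type_name] = fn
        return fn
    return register

def dispatch(label: str, payload: bytes):
    """Resolve the concrete handler from the message label and invoke it."""
    handler = HANDLERS.get(label)
    if handler is None:
        raise ValueError(f"no handler registered for {label!r}")
    return handler(json.loads(payload.decode("utf-8")))

@handles("OrderPlaced")
def on_order_placed(body):
    return f"order {body['orderId']} placed"
```

In the real application the label would carry the .NET type name and MediatR would route the deserialized request to its handler; the lookup-and-dispatch shape is the same.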
So cool, now we know that clean code and the SOLID principles do apply to serverless, and we know how to make applications resilient with asynchronous messaging and high-availability patterns. But what about scalability? Well, first I'd say make sure you know how big you want to scale your application. In Azure, you've got different ways of running your serverless functions application. You can run on a Consumption plan, which limits you to 200 instances, which is quite a lot. On the Premium plan, even though you get more powerful CPUs with multiple cores, you're limited to 100 instances, and if that's not enough, then you have to think about having multiple plans, or maybe even using a Consumption plan as an alternative. Looking at the dedicated plans, you can run an Azure Functions application on an App Service plan, but you're severely limited in the number of instances that you can scale out to. And if you run in Kubernetes, well, then it totally depends on a lot of things: how much hardware you have available, and whether or not you want to use the virtual kubelet to run all those containers, and KEDA to scale out automatically for you and scale back in to zero. In Kubernetes it really depends on what you have. Then, like I said earlier, it's very important to keep your functions small. You can definitely notice it in the startup times and the scaling times when a function has a lot of binaries to load and a lot of things to connect to on startup; that's not great. Also, when requests come into your application, try to handle them as quickly as possible. Don't block on an incoming request for a long time; as soon as you can return a response back to the user, you should.
And a good tip for running your Azure Functions applications: the runtime needs a storage account for all the bindings and triggers to store information, and even invocation logs, to maintain application state. So if your application needs to write results somewhere, store images, files, queues, anything, use a separate storage account. You don't want to negatively impact the storage account that the Functions runtime is using and make the whole application slower. Storage is cheap, so why not? You can definitely have two storage accounts for one application and get a little extra performance out of it. Durable Functions are awesome too. In particular I want to call out the fan-out/fan-in pattern. If a request comes in and it's of the sort that you can split into multiple workloads, you can use an orchestrator and fan out all those operations to individual activity functions. That's awesome: you can very easily scale out to a couple of hundred instances, let it all run, and the orchestrator will then collect, or fan in, those results for you. Another name for this is basically the map-reduce pattern, and it's really simple, so give it a go. Then, very importantly with your messages: when a request comes into your application, turn it into a Service Bus message as soon as you can. Like I said earlier, don't try to block for too long. This means you have to design your application for eventual consistency, because you're not going to block a request waiting for the result to come back and send that back to the caller. Instead, you send back something like a 202 Accepted response and give the caller a way to query for the result, which will be processed and available eventually. You can use something like SignalR Service, for instance, to push a notification back to the caller.
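The fan-out/fan-in pattern described above can be sketched as a simple map-reduce. This is an illustrative Python sketch using a thread pool in place of a Durable Functions orchestrator and activity functions; all names are hypothetical.

```python
# Illustrative sketch of fan-out/fan-in (map-reduce): split a workload into
# chunks, run the chunks in parallel "activities", then combine the partial
# results. A thread pool stands in for Durable Functions activities here.
from concurrent.futures import ThreadPoolExecutor

def activity(chunk):
    """Stand-in for one activity function: process a slice of the workload."""
    return sum(chunk)

def orchestrator(workload, chunk_size=3):
    """Fan out chunks to parallel activities, then fan in the results."""
    chunks = [workload[i:i + chunk_size]
              for i in range(0, len(workload), chunk_size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(activity, chunks))  # fan-out
    return sum(partials)                             # fan-in (reduce)
```

With Durable Functions the orchestrator would await a list of activity tasks instead of a thread pool, but the split, run-in-parallel, combine shape is identical.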
You can tell them: hey, that job you requested, it's ready, and here's the result. And when you use Service Bus, try to use topics, because if you use queues, you're limited to just one consumer, or one listener. If you use topics, then each subscription effectively gets a separate queue, and you can have filters on them that applications subscribe to. So you actually have multiple listeners on one topic, which is really awesome, and that lets you run a whole lot of stuff in parallel: you can trigger multiple applications off that one topic. And this next one, I was very surprised at this. Following best practice with your access policies in Service Bus, you want to give the application the minimum capability possible, to reduce the surface area for things going wrong. If an application is only going to be listening to the service bus, it only needs Listen rights; if it's only a sender, give it Send rights; and try to reserve Manage for an admin-type application. But the problem is that if the Service Bus trigger doesn't have the Manage capability, it can't peek into the queue to see how long the queue is, and then it can't proactively scale out for you. I tried this on an app recently, and with Manage rights it actually scales a lot quicker: more instances get spun up and it just gets the job done a lot faster, so it's much more efficient. Surprising. Some of the security-conscious people will say don't do that, but I'm sure you can convince them with the performance results; it's much, much nicer. So here's what a topic looks like in Service Bus. You can have multiple filters: you can say, of the messages that come into this topic, I want to filter on some of them. You can actually write SQL-like filters to say you only want messages of a particular data type, or from a particular sender or service.
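The topics-with-filters idea above can be sketched as a tiny in-memory model: each subscription sees only the messages matching its filter, so multiple listeners can hang off one topic in parallel. This is illustrative Python, not the Service Bus API; names are hypothetical.

```python
# Illustrative sketch of topic subscriptions with filters: a message
# published to the topic fans out to every subscription whose filter
# predicate matches, modeling Service Bus topic/subscription behavior.

class Topic:
    def __init__(self):
        self.subscriptions = []  # list of (filter predicate, inbox)

    def subscribe(self, predicate):
        """Add a filtered subscription; returns its inbox (its own 'queue')."""
        inbox = []
        self.subscriptions.append((predicate, inbox))
        return inbox

    def publish(self, message: dict):
        # Fan the message out to every subscription whose filter matches
        for predicate, inbox in self.subscriptions:
            if predicate(message):
                inbox.append(message)
```

In Service Bus the predicates would be SQL filter rules on message properties, and each inbox would be the subscription's own queue triggering a separate application.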
And then you make that filter available as a subscription, so you can have one or more apps subscribe to it and trigger when a message arrives. And now, because we scale out really widely and produce a huge amount of results, when you want to store them it makes absolute sense to choose the right storage service to match that high throughput. Cosmos DB is awesome there: you can provision a huge amount of request units, and the cool thing is it has autoscaling as well, which makes the job nice and easy. There's no point trying to store huge amounts of results into something that's really slow; that's actually going to slow down your application and hurt your scalability quite a lot. You've got a few options: Cosmos DB, premium storage, and even SQL Server itself obviously has huge performance capabilities, so just choose the right one for your application. And this is general best practice when you deal with connections to services: try to reuse the connection clients. Don't manually spin up a new connection for each Azure Functions invocation, because very soon your Cosmos DB or SQL Server will run out of connections, you'll have a lot more transient errors, and it will just slow you down. So, yeah, go for the clients that use connection pooling and manage that whole thing for you. All right, wrapping it up. In summary, applying clean code and the SOLID principles definitely makes your applications very easy to understand. They're easy to maintain and extend, and they're far more testable: smaller bits of functionality are very easy to test, so you can have a lot of confidence once you ship your application. And SOLID definitely gives you a way to reason about the architecture; you can keep things very clean, very simple, and everyone can come in and understand it.
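The connection-reuse tip above can be sketched like this: create one shared client per process and reuse it across invocations, instead of opening a new connection every time. This is an illustrative Python sketch; `FakeClient` is a hypothetical stand-in for something like a Cosmos DB or SQL client.

```python
# Illustrative sketch: one shared, pooled client per process, reused by
# every function invocation, rather than a new connection per invocation.

class FakeClient:
    instances_created = 0  # tracks how many "connections" were opened

    def __init__(self):
        FakeClient.instances_created += 1  # opening a connection is expensive

    def query(self, q):
        return f"result of {q}"

# Module-level singleton: created once, reused across every invocation
_client = FakeClient()

def function_invocation(q):
    """Each 'invocation' reuses the shared client instead of building one."""
    return _client.query(q)
```

No matter how many invocations run in this process, only one client is ever created, which is exactly why the real SDK clients that pool connections should be held as shared instances.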
Asynchronous messaging, rather than direct HTTP requests, definitely has a lot of benefits: you get better resiliency, your scalability increases, and the coupling between your services actually decreases. It has an awesome DevOps benefit there too: it's very easy to deploy those small bits of functionality, and the whole coordination around deployments becomes a lot, lot simpler. And high availability for serverless is actually quite easy, surprisingly. I thought it was going to be hard, but no, it's super simple, so everyone should be able to give it a go. And finally, now you can go and build some awesome, rock solid serverless applications. If you want to learn Azure for free, Microsoft makes that nice and easy: Microsoft Learn has a massive amount of tutorials and things to go through that help you train for your Azure certifications, which is very useful. If you join the newsletter, you can get to the Azure developer swag link, and if you're very quick, you can use that code to grab some free swag, only for people from this talk, so go for it. Thank you very much. Cool, and I'm definitely open for questions, if anybody has some questions coming through on the audio. If there are no questions, thank you very much, everybody, for turning up. I hope you really enjoy the rest of NDC. It's been fun so far; even online, it's actually really, really fun.