Video details

Azure and Containers, the tale of the two inseparable friends - Yaser Adel Mehraban - NDC Melbourne


If you ask someone how to run a container in Azure, they will most probably answer AKS, or Azure Kubernetes Service. But is that the only way?
In this talk we're going to bust that myth and go through all the ways you can run a container in Azure and leverage the limitless potential which is in front of you.
You don't need to be an expert in working with containers, all you need to bring is an eagerness to learn and a sip of water, this talk is so hot you need to stay hydrated :)
Check out more of our featured speakers and talks at


Hello everyone, welcome to this talk. My name is Yas, I'm a technical trainer working for Microsoft at the moment, and I'm going to talk about the various ways you can run containers on Azure, and give you some scenario-based guidance so you know which service to use in which particular situation. It's not a deep dive into any of the services we discuss; it's more about giving you the bigger picture as to which services you could use to create the solution you have in mind and run your containers without any problem whatsoever on Azure. So historically, people didn't use containers; they deployed their code straight into their production environment, and there was no problem with that. The only problem was portability. If you had an application which was working on Linux and you wanted to run it in a different environment, you potentially couldn't do it; or if you had something that depended on a particular third-party tool or an extension, again, those compatibility issues made it really hard to move that application and run it elsewhere. This is mainly about migration: when you migrate from on-premises to your cloud provider, whether it's Azure or any other cloud provider, it becomes very important for you to have that portability option. So in our first iteration coming into the container world, we only had one service, and it was called Azure Container Service. It mainly allowed you to run a DC/OS kind of setup, so you had a distributed operating system environment where you could run your containers. It wasn't very complex, but it was still lacking a lot of features. So we started to move towards adding more functionality and more services for you to be able to run your containers. Now, for you to start working with containers, obviously you need to start with the images.
So you build your image from an application: you have your application files and everything you need on that image. Now it's obviously very important for you to be able to store that image somewhere on that particular cloud provider. Why is it important? Why shouldn't you just use Docker Hub or any other public repository that's out there? The main reason is that the services we offer give you deeper, native integration with other services within the Azure ecosystem. For example, if you are storing your images within Azure Container Registry, you get the option to enforce an image scanning policy based on whatever requirements you might have within your environment, so that when people want to push their application to production, or whatever environment they have in mind, that scan has to happen before they can do it. And that's available via Azure Policy. Or we give you the option to integrate this service with virtual networks, so if you want to run containers in a private network in Azure, you still have that integration; you don't need to use a public repository that everybody has access to. There is also much more access control available for you. You could use Azure RBAC, for example, to control who gets access to read an image, or even go further down the chain and say, hey, this developer can only push, this developer can only read, and so on. So that deeper integration with native services makes this service much more interesting. Apart from that, if you go up the pricing ladder, we offer you more features: we allow you to integrate with your on-premises environment if you already have some hybrid connectivity there, and we give you more bandwidth to upload or download your images from the container registry.
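That push/pull split maps to built-in ACR roles. Here is a minimal Azure CLI sketch; the registry name `ndcacr`, resource group `NDC`, and user accounts are hypothetical placeholders, not from the demo:

```shell
# Look up the registry's resource ID so the role is scoped to just this ACR.
ACR_ID=$(az acr show --name ndcacr --resource-group NDC --query id --output tsv)

# AcrPush is a built-in role that allows push (and pull) of images.
az role assignment create \
  --assignee dev-who-builds@contoso.com \
  --role AcrPush \
  --scope "$ACR_ID"

# AcrPull allows read-only (pull) access.
az role assignment create \
  --assignee dev-who-deploys@contoso.com \
  --role AcrPull \
  --scope "$ACR_ID"
```

These commands require an authenticated Azure CLI session with permission to assign roles.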
We also allow you to have, for example, geo-replication, so you have images in multiple parts of the world, and depending on where your application is running, it gets the image from the closest server possible. So you have that speed, and you have the availability as well. Depending on the pricing plan you choose, you get different options in terms of storing your images. Now, my talk is mostly about demos, so for a quick demo all we need to do is come into the Azure Portal. I've got the Azure Portal here and I've already created a container registry within this NDC resource group, and we're going to push an image into it. So let me just find it. Here it is. There's nothing very specific about this registry. If I go to the repositories, you can see I've already pushed an image in there called ninja-cat-nodejs. It's a Node.js application, nothing really special, just a one-pager to showcase what's happening. We have this v1 version sitting in that repository. So if I bring this up — this is the project I created my image from, and I just want to make sure you see the same thing. If I have a look at the Dockerfile, it's nothing very specific: it's just a Node.js application. It runs an npm install, copies everything from the app folder in there, and runs the Node server, which listens in this case on port 80. To create an image from this, all you need to do is run docker build and give it a tag, and that's it. Once you have that, you can tag the image with the proper registry URL, which allows you to push it into a container registry. How is that possible? Well, there are multiple ways you could do it.
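The geo-replication mentioned above is one CLI call per extra region (it requires the Premium tier). A sketch, again assuming a hypothetical registry named `ndcacr`:

```shell
# Replicate the registry to a second region; pulls are then served
# from the closest replica automatically.
az acr replication create --registry ndcacr --location westeurope

# List all replicas and their provisioning state.
az acr replication list --registry ndcacr --output table
```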
The easiest one: if I close all of that and bring this up, these are the commands which allow you to create the container registry through the Azure CLI in this instance, or Azure PowerShell, whichever you prefer. Then all you need to do is use the docker tag command to tag the image using this format: the default domain name which is allocated to the container registry — your Azure Container Registry name followed by azurecr.io — and then the image name. And if you want to give it a tag, you could use a version tag or the latest tag, depending on your preference. So I'm going to tag this as version two, which we'll push into that container registry in a minute. If I just create another tag — and by the way, these commands are possible because I'm running Docker Desktop, just in case you're wondering — then all that's left is the docker push command. Now, I can't do that without authenticating to that Docker registry, so all you need to do is call az acr login and Azure will take care of the rest. Let me just define the variables and run the login command so it authenticates us — and you can see that native authentication to ACR is a much better option than using a username and password, for example. There you go. And now we push this version-two image into our registry, and that's all it takes for me to create my image, tag it, and push it into the container registry. Now, this is a bit of a multi-step process. If you want to condense it further, you can use the az acr build command from the Azure CLI, which does all of this in one command: it creates your image, tags it, and then pushes it up into your container registry, without requiring you to go through all these steps. But I just wanted to showcase what it takes behind the scenes and what it actually does.
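Put together, the build-tag-push flow from the demo looks roughly like this. The registry name `ndcacr` is a placeholder; the image name is from the demo:

```shell
# Build the image locally (requires Docker Desktop or another Docker daemon).
docker build --tag ninja-cat-nodejs .

# Tag it with the registry's login server: <registry>.azurecr.io/<image>:<tag>
docker tag ninja-cat-nodejs ndcacr.azurecr.io/ninja-cat-nodejs:v2

# Authenticate via Azure AD instead of a username/password...
az acr login --name ndcacr

# ...and push the tagged image.
docker push ndcacr.azurecr.io/ninja-cat-nodejs:v2

# Or collapse all of the above into one server-side build in ACR:
az acr build --registry ndcacr --image ninja-cat-nodejs:v2 .
```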
So this image is now pushed. If I go back up to my container registry and refresh, you should see version two. And I can now go ahead and use other services to deploy this image — we're going to have a look at a few of them shortly. So, going back, let's have a look at the very first service that we generally offer, which is the simplest and fastest way to run a container in Azure. That service is called Azure Container Instances. It's designed to run a single container in Azure. When we say a single container, it doesn't mean that you can't run multiple containers in it, but it's designed for that single-instance container situation. So if you have an application which doesn't have any dependencies, or it's got dependencies but you want to limit the interactions between those dependencies — let's say you have a web application and a SQL database and you want to run those on the same infrastructure — this is the service you want to go for. If you only have an app and you want to give it a shot, make sure everything works, and test it out quickly, this is the service you're going to go for. It's a really great way to just spin up a new service and not worry about anything. It's a fully managed service, by the way, which means everything is managed by us: we manage the operating system, we manage everything behind the scenes. All you need to do is give us an image and we run that image for you under this service. Super, super fast spin-up and startup time. I'm going to showcase that to you now; it's going to take roughly about a minute to spin up the whole thing, and we'll see that in action as well. As I said before, you can run multiple containers within the service, and that's possible through the concept of container groups. These basically allow you to run multiple containers which share the same infrastructure.
So you could say: I have this web container and I want to expose it to the internet through port 80, but I also have this SQL Server container and I want to keep it private, so it exposes port 1433 but only to my web container — it's not exposed to the outside world. So that's also a good thing to have within the service. Again, if you go up the pricing ladder, we also allow you to integrate with virtual networks, so you can have the whole thing as a platform-as-a-service offering but in a private manner, which is also really cool. So what use cases should you go for with this service? What are the scenarios where you would choose it over any of the other services we're going to cover today? If you want to run single-container deployments — it might be for development or testing purposes, or you have a small-scale app you want to run — this is the service for you. If you don't need integration with other services such as virtual networks, or features that are available in other Azure services — for example, you don't want auto-scaling, all you need is to run your container and be done with it — again, this is a great service for that as well. And if you want to run multiple containers but share the same infrastructure between them, this is the service to go for. So let's have a look at how quickly we can create a container instance. I've already created one, but just to show you the experience, I'll do this through the Azure Portal; I'll alternate between the Portal and the Azure CLI. All you need to do is search for Container Instances and click Create. Now choose a resource group and give the container a name. This doesn't need to be globally unique, but it needs to be unique within your resource group, so I'm going to call this ndcacr201 just to make it unique enough.
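The same container instance can also be created from the CLI in a single command. A sketch, with hypothetical names; the `PORT` environment variable is an assumption about how the demo app picks its listen port:

```shell
# Spin up a single container instance from the ACR image.
# --dns-name-label gives it a public FQDN; --ports exposes port 80.
az container create \
  --resource-group NDC \
  --name ndcacr201 \
  --image ndcacr.azurecr.io/ninja-cat-nodejs:v2 \
  --cpu 1 --memory 1.5 \
  --ports 80 \
  --dns-name-label ndcacr201 \
  --environment-variables PORT=80

# Check the public address once provisioning finishes.
az container show --resource-group NDC --name ndcacr201 \
  --query ipAddress.fqdn --output tsv
```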
The region doesn't really matter. And if I choose Azure Container Registry as the image source, this is where that value-added benefit comes into play: it's already detected that I've got an ACR, and it detects the repositories, so I can just come in, without any authentication, and have a look at them. This is my image, and these are the tags I have, so I can choose, for example, version two, or latest — it doesn't really matter. I can customize the size of this: in terms of CPU we can go from one to four, in terms of memory from one to sixteen, and if the region you've chosen supports GPUs, it can also allow your containers to benefit from a GPU as well. I'm just going to go ahead and create this service, and you can see how long it takes to get created and come up. Let's leave that running and we'll come back in a second. The next service we have which allows you to run containers is our web apps, or App Service in general. App Service is a fantastic platform for running web, API, and mobile apps. I don't think it needs an introduction, so I'm going to skip all of that for my talk. At its core, you can publish your code into a web app, or you can publish a container into a web app — it allows you to run both. But the benefit of using Azure web apps is all the additional features that you get. For example, you get auto-scaling based on a particular metric, or you get integration with virtual networks; you get hybrid connectivity with your on-premises environment; you get features like built-in authentication and authorization using Azure Active Directory, et cetera. On top of all those features, you get staging slots, and you get a mini CI/CD pipeline through the Deployment Center. For example, you can run your entire pipeline from a GitHub Action or from Azure DevOps and set that up within a matter of seconds. It's really easy to do.
It's all automated for you; you don't need to write code. All you need to do is go into the Deployment Center and enable that feature. So it's really great for running these containers, and if you want more capacity or more features, this is the place for you to go and have a look as well. Now again, when we talk about this kind of service, we're still not at the point where we're talking about microservices. This is just a regular application where you have multiple containers running; there might be some interactions between them, but we're not talking about heavily relying on different services in a mesh structure. As I said before, integration with native services — with ACR, with GitHub Actions, with Azure DevOps pipelines — is all in place, so all you need to do is enable those and benefit from them. Again, about the Deployment Center — I always say this — the Deployment Center within Azure App Service is not to be mistaken for a fully fledged CI/CD pipeline. It's just a mini CI/CD pipeline to give you continuous deployment into your app service. If you want more advanced scenarios, if you have a complex pipeline, you're going to have to do that on your DevOps side — with GitHub Actions or with Azure DevOps. This is just for getting an app up and running from a container image in no time. So what use cases should we use this service for? Obviously, if you want to host .NET apps on Azure, this is a great place to start. If you want auto-scaling or load balancing built in, this is again a great place to start. It comes with a load balancer, so if you say, I want to add more instances and scale out my App Service, it automatically balances requests between multiple nodes.
If you want to deploy in seconds based not only on GitHub Actions and Azure DevOps but also on things like Bitbucket, or based on Azure Container Registry or Docker Hub, again, this is a really great place to start. And high availability is another thing we have as part of the service, in addition to the security standards that we meet: we already meet the PCI standard, along with a bunch of other compliance certifications. If you are storing PII information, this service is already certified to help you achieve some of that compliance or reach those goals. And Application Insights and analytics are already built in, so you can monitor your containers using Application Insights and a Log Analytics workspace, which gets you not only infrastructure-level logs but container-level logs as well. Both of those are available, so it's definitely a great place to go as well. Let me go back and have a look at this. This service is now up and running. If I come here, I have my container. Now, we didn't provide a DNS label for this, so I'm going to grab the IP address and put it in here, but you could just use the label. By default it doesn't support custom domains yet, but there are ways to get around that using NGINX or similar. So the first request takes a little while — but just to show what happens when you have one already running, I already have another container instance here, the first one I created. This one has a URL, so I'm going to copy it, bring it over here, and run that. Hmm, I might have used the wrong port. Yeah, I might not have set the correct port for it — the app usually exposes 9090, and this is expecting 80, so the port mapping isn't done. We'd need to inject an environment variable to get that to work. Anyway, the ease of use and the way you set up container instances is really great. In terms of web apps, it's the same process — it's literally the same experience.
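The Web App for Containers flow described here has a CLI equivalent too. A sketch, assuming a Linux plan; the plan name, app name, and registry are hypothetical, and the flag names match the CLI version current around the time of the talk:

```shell
# A Linux App Service plan is required for Linux containers.
az appservice plan create --resource-group NDC --name ndc-plan \
  --is-linux --sku B1

# Create the web app straight from the ACR image.
az webapp create --resource-group NDC --plan ndc-plan \
  --name ndcapp2 \
  --deployment-container-image-name ndcacr.azurecr.io/ninja-cat-nodejs:v2
```

The app then comes up at https://ndcapp2.azurewebsites.net once the image is pulled.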
So if I come back to the Azure Portal, back to my resource group, I can just create a new web app — the web app is one of our popular choices here. This name now needs to be globally unique because it becomes a custom subdomain, so I'm going to call it, for example, ndcapp2, which should be unique enough. And then, again, a Docker container based on Linux or Windows — we support both platforms here, depending on your region and pricing plan. Then I come to the Docker tab: I can do a single container, or I could do Docker Compose if I want to run multiple containers there as well. The container registry part is a similar experience: it already detects my registry, detects my image, and I can choose the tag I want, create the app, and be done with it. So it's really easy to kickstart. Again, I've done that before, so without going through the creation process, I want to bring this app up — and if I just go and browse, it should bring up the app. And there you go, our app is up and running; everything is good to go. Really easy to set up, really easy to get started with as well. What else? What happens if you have many containers or many services you want to run, regardless of whether they're APIs or web apps? In those cases, you generally want to go with more advanced offerings. Azure Kubernetes Service, or AKS, is probably one of the most frequently used container services in Azure at the moment. A lot of customers are moving towards this service. Obviously there's a newer service, which I'll introduce next, called Container Apps, that is taking some of the load off, but this is still the flagship service in terms of running multiple containers in a microservices sort of architecture. This is again a managed service, so we manage a lot of things for you. We've created our own flavor of the open-source Kubernetes platform and added those native integrations.
So you have integration with Azure Policy, with Azure RBAC, with Azure Active Directory, and, for example, with virtual networks. All of those are available for you. If you already have a virtual network, or if you want to go with the default virtual network setup, it's there for you. We manage your operating system updates; we manage everything. All you need to provide is your deployment setup — usually done through a YAML file — and you say, hey, go and deploy this number of containers, and these are the services associated with those containers, and we run the rest for you. You can make them public or private, and all of them can interact with each other. And the good thing about this service is that it comes with some additional features, like Application Gateway integration, for example. It gives you ingress controllers where you can define routes to direct incoming traffic to various containers within your AKS cluster — that's real added value. Apart from that, you can also go for custom setups, like NGINX for example, to do the routing. Now, this service also has great integration with GitHub Actions and Azure DevOps in terms of deployment. But the best part, again, is the ability to enforce compliance. You can set up Azure policies, for example, to force your users to run a security scan on container images before they get pushed to your AKS cluster. Or you could say, hey, everybody needs to do auditing on their images regardless of security — and then if the scan passes, go and run it; if it doesn't, go through the cycle again. All of those are, again, native integrations. We have support in 42 regions and counting — obviously this is a bit older, from when I put this slide together; I think it's about 50-ish now.
But again, most of the Azure data centers and regions that we have are getting on board with supporting this service. Another good thing about this service is that you can use it not only in Azure but outside of Azure as well — and I'll tell you how to do that in a minute. So what are the use cases for AKS? If you already have containers on-premises and you want to do a lift and shift without changing anything, AKS is a really great place to start: you create your cluster in Azure and just start migrating those containers across. Apart from that, if you want to simplify the configuration and the management overhead of your local clusters, this is a service which allows you to do that. It takes a big load off your shoulders — or your admins' or operations team's shoulders — and all you need to care about is your application, adding value to your customers; at the core, the value is customer success, right? Bringing DevOps and Kubernetes together: again, there's great integration with a whole bunch of DevOps tools. That's true for all of our Azure services, but this one is definitely something we're really proud of. In terms of scaling, this is a service where scaling is its bread and butter. It not only gives you scaling on the cluster itself — it can scale out to 100 nodes, and these nodes are basically VMs — it also allows you to scale within those nodes to hundreds of what we call pods. So inside one node you might have hundreds of containers running. And then, if you have an application which is so big that you run out of capacity, you can also use Azure Container Instances as a backup.
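Standing up a managed cluster like the one described above is a couple of CLI calls. A sketch, with hypothetical names:

```shell
# Create a three-node managed cluster with the monitoring add-on enabled.
az aks create --resource-group NDC --name ndc-aks \
  --node-count 3 --enable-addons monitoring --generate-ssh-keys

# Merge the cluster credentials into ~/.kube/config so kubectl can talk to it.
az aks get-credentials --resource-group NDC --name ndc-aks

# Verify the nodes are up.
kubectl get nodes
```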
So we give you that option as well — it's called virtual nodes, basically, and you can use it to spin up new pods quickly within that service. These are all great features available for you to use. I'm not going to do a full demo for this one — I'm just going to show you how it looks and what the experience is, because this is a service where we literally have to wait a while for it to get created. What I'm going to show you here is, first, what that YAML file looks like — the specification you use to push your images into an AKS cluster. So I'm going to bring up Cloud Shell. It's a browser-based terminal that gives you PowerShell or bash within the Azure Portal itself, so you don't even need to install any local tools to work with these services. It also comes preinstalled with a lot of tooling, as we'll see in a minute: kubectl is already installed, the Azure CLI is already installed — a lot of tools are there in this terminal. It needs to be fully up and running before I can access it. It also comes with a lighter, built-in version of VS Code, so you can actually edit things right within the terminal. There you go. If I open VS Code here and open up my AKS folder — there you go — I have this YAML file, and if I zoom in, I think most people should be able to see it. All I'm doing is saying: hey, I've got a deployment here, this is my metadata, these are the specs — how many replicas do I want — and the spec itself: what's the operating system, what's my container, what's the image name. This one is using an image from Microsoft's registry; it's a Redis cache, a backend application which holds the data, and it exposes this port. I'll have a front-end application as well, and there's a service for that back end. So for each container that you have, you have two elements.
You have a deployment and you have a service. The service basically tells the cluster how to treat this particular container: is it a private one or is it public? Does it need a load balancer, things like that? For the front end it's the same — all we need to do is change the image name, and for the service we change the type to LoadBalancer. By default, I think it's called — I forgot the terminology here, private something, I can't remember exactly what it's called — and you have the LoadBalancer type, where if you deploy this one you get a public IP address, and if you deploy the other one it gets a private IP address from the virtual network that you have. So what I'm going to do here is show you how it looks when it's deployed. If I bring up my Kubernetes Service cluster and come to my workloads, you can see the workloads are already here — these are all the things that we run for you. It's got CoreDNS, it's got the Log Analytics workspace agents already installed; all of those are managed by Azure. All you need to do is deploy your own things. These are the two containers that I've already published into this AKS cluster. And if I have a look at the services: the first one was the ClusterIP — that's the terminology, ClusterIP — so it only gets a private IP address from my VNet's range. The other one, which was LoadBalancer, gets a public IP address. So I can click on this and it brings up the app. I can interact with it, I can vote for cats and dogs, whatever. And if I close this and open it again, it will have those results persisted in that Redis cache backend we have. So it's really easy to spin up, really easy to work with, but it's got a fair bit of configuration: although it's a managed service, it gives you a lot of freedom to configure your cluster however you want. Sometimes you don't want that.
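The deployment-plus-service pairing described here looks roughly like this in YAML. The names follow Microsoft's public voting-app sample; treat them as placeholders for the demo's actual file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  # LoadBalancer gets a public IP; ClusterIP (the default) stays private.
  type: LoadBalancer
  selector:
    app: azure-vote-front
  ports:
  - port: 80
```

You apply the whole file with `kubectl apply -f azure-vote.yaml`.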
Sometimes you just want to deploy your containers in a serverless, microservice architecture, but you don't want to deal with those configurations. And for that, we offer our next service. The next service, which was released recently, is called Container Apps. Container Apps is like a managed version of an AKS cluster — a managed version of a managed service — which means it takes even more load off your shoulders. All you need to do is run your deployment and get your containers up and running, and the rest is on Azure; we take care of a lot of things for you on top of what we did for AKS clusters. So if you have public endpoints or public APIs you want to expose, this is a great service to get started with. Background processing: you can do this with AKS as well, but this one supports it natively using a continuously running background process. You have scaling built in automatically. You can also make it part of an event-driven process: it can be integrated directly with Azure Functions, or with things like Event Grid and Event Hubs, which gives you the ability to create event-driven solutions at a larger scale. You also have a microservices architecture set up by default, so you can just go and run multiple containers; these containers can talk to each other, and you can control which container can talk to which container — that's all up to you if you want to use it. So again, Container Apps is a great place to get started as well. Let me show you what deployment looks like using the Azure CLI, for example. First things first: if you're using the Azure CLI, you're going to have to add the extension, so the containerapp extension needs to be installed if you want to work with the Container Apps CLI from your terminal. You also have to have the resource provider registered.
Once you have that, you basically create a new environment. Container Apps work with environments: you can have multiple environments, and each environment can have multiple containers. Once you have your environment up and running, you then go ahead and create the container app. You provide a bunch of parameters, like the image, the registry name, username and password, and any environment variables; if you're exposing a particular port, you can do that, and if you want a custom DNS label, you can do that too. Then you run this, it creates the app, and you can also pass in a query to output the fully qualified domain name. So if I come back to the Azure Portal, I've already created an instance of Container Apps here, and if you have a look, this is my container app. It comes with a sort of weird-looking URL, but it does the job; you can do custom domains, but that's what you get by default, depending on your region and a bunch of other options. So here is my application, running from the same image that I pushed into my container registry — again, those native integrations with our Azure services are what make this one interesting as well. What else do we have? Sometimes you want to run background processes, and the next service, Azure Batch, is like a specialized background-processing service for you. You used to run those kinds of workloads on WebJobs in the App Service space, but that wasn't really designed for longer-running jobs; sometimes people use Automation accounts or a bunch of other options. But this one in particular is designed to give you the option to run a container which handles your background work — let's say you want to generate a report on a weekly basis.
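The Container Apps steps just described — extension, provider, environment, then the app — can be sketched end to end like this; all resource names are hypothetical, and the region is a placeholder:

```shell
# One-time setup: the CLI extension and the resource provider.
az extension add --name containerapp
az provider register --namespace Microsoft.App

# An environment hosts one or more container apps.
az containerapp env create --resource-group NDC \
  --name ndc-env --location australiaeast

# Create the app from the ACR image with external ingress,
# and print its public fully qualified domain name.
az containerapp create --resource-group NDC --name ndc-app \
  --environment ndc-env \
  --image ndcacr.azurecr.io/ninja-cat-nodejs:v2 \
  --target-port 80 --ingress external \
  --query properties.configuration.ingress.fqdn
```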
It takes a lot of data from various sources, compiles it into a single report, and puts it somewhere — for those scenarios where you want to run that kind of job but still want to use containers, this is a great place to start. The good thing is that it comes built in with its own dashboard, so you get a really good overview of what's happening within your Batch jobs at any point in time: what's the CPU percentage of my service, what's going on, how many containers are running? You can see all the running containers, how much memory is used, how much CPU is used, et cetera. So, use cases: big-data workloads — definitely the way to go. You can gather a lot of data from various sources and compile it, do whatever you want. Obviously this is not to be mistaken for data pipelines — there are different services in the data category — but you can definitely use this for processing those data reports. It's also a really great option for long-running tasks, and at a fraction of the cost compared to running the same thing on a web app or a function or something like that. CPU-intensive processing can also use this, because it's got built-in auto-scaling: it gives you the option to scale out and scale in depending on the workload, which means it's optimized for CPU usage as well. The next service we have is called Service Fabric. Now, this is where it gets really advanced. This is where you get to deploy containers and applications at the same time — they can coexist. It gives you that full-fledged microservices and mesh architecture, where you can deploy it on your on-premises environment or on Azure — it's up to you.
You can have that hybrid connectivity. Service Fabric was originally designed to run .NET apps, but now it's expanded: it allows you to run containers alongside those apps. And basically it can run anywhere. You can run it on your dev machine, you can run it in Azure, it could run in another cloud provider; it just doesn't matter. And we give you options to manage all of that, and we'll talk about that in a second as well. It's got full lifecycle management, it's got automatic scaling. However, the configuration of this requires skilled people, so you need those highly skilled people to be able to configure and use it. Something funny: we also run most of Azure on Service Fabric, so that's a service we use under the hood for a lot of other services.

Here's an example of how a typical deployment would look. You might have different node types, for example, where one is used for the back end and one for the front end, or whatever it might be. It's got integration with other services, so you can use, for example, Azure SQL or Cosmos DB or whatever you want, plus integration with application gateways, and API Management can be used to expose these APIs to apps outside of your organization. So there are a lot of good integrations there.

So what are the use cases? Lift and shift: if you already have applications running on-premises and you want to bring them into Azure but keep them as is, with no change in code and no change in configuration, this is a great place for you to get started. If you want to mix containers and Service Fabric microservices, again, this is a great place to start; we don't have any other service in Azure which offers you this capability. And if you want to reduce the impact of noisy neighbors. Has anyone heard the term noisy neighbors before?
So these are services which emit a lot of events, and they're not necessarily useful, but you want to limit those: you want to make sure that you segment those particular services and let the other services live on their own, reducing that kind of unnecessary interaction between the services. This, again, offers you that natively.

What else? Well, we have Azure Arc. I talked to you about how we can run Kubernetes and also Service Fabric anywhere. If you are running a multi-cloud environment, or you're going into a situation where you have some services on your on-premises environment and some services in your Azure environment, but you want to manage them in one place, Azure Arc is your go-to. It's a really great service. It's not an operational service; it's sort of a management-layer service. It gives you that central control plane to control everything, see what's going on within your environment, and basically gives you a unified interface to find out what's happening, not just for container services: you can also monitor other services as well. So again, this is the go-to if you want to mix and match and monitor everything within your space. That could be an on-premises environment, Azure Stack Hub, edge services, or whatever it might be.

Other use cases: if you want to manage multiple sites running Kubernetes in a centralized manner, this is a great place to start. If you need consistent deployments and configuration of nodes across multiple different services, again, this can help you with that. And if you have, for example, some regulatory compliance requirements which force you to do something in a certain way, this is a great place for you to enforce that uniformly across multiple different environments, so you don't have to go and apply those settings in multiple places.
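Onboarding an existing cluster into that central control plane is a short CLI operation. A minimal sketch, assuming the `connectedk8s` CLI extension and a reachable kubeconfig context; the cluster and resource group names here are hypothetical:

```shell
# Sketch only: cluster name, resource group, and region are hypothetical.
# Assumes the current kubeconfig context points at the cluster to onboard.
az extension add --name connectedk8s

# Connect an existing Kubernetes cluster (on-premises or another cloud)
# to Azure Arc so it appears in the central Azure control plane.
az connectedk8s connect \
  --name my-onprem-cluster \
  --resource-group my-arc-rg \
  --location australiaeast
```

Once connected, the cluster shows up in the portal alongside AKS clusters, and policies and configurations can be applied to it from one place.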
Otherwise you'd have to manage all of that yourself and have some auditing in place; this does all of that for you.

So, what else? Remember I talked about how you can run Azure Kubernetes Service outside of Azure? It might be on a different cloud provider, it might be on your local environment, it might be on your dev machine. We have extracted the core of Azure Kubernetes Service; it's called AKS Engine. You can run that engine anywhere, literally. It's open-source software: you can get the engine running on your local environment, try it out, see how your containers are running, how they interact with each other, a whole bunch of stuff. And if you're happy, you just push it to Azure, or you keep it running on your on-premises environment for a sort of gradual migration to Azure as well. So let's say you don't want to bring every single container into Azure, but you still want the same experience, the same sort of setup: you can run AKS Engine on your on-premises environment and start migrating your containers into that. Once you're happy and all the tests are green, you can then push them into Azure, and at some point you can get rid of the on-premises cluster. Meanwhile, you can use Azure, for example, to monitor everything and enforce configurations and so on.

Here's how it looks, for example, and you can mix and match it with some other services. If you want some sort of high-availability setup, or a situation where you want to route part of the incoming traffic to your applications to your Azure environment and the rest to your on-premises environment, you can use Traffic Manager profiles for that as well. It's a great way for you to get to know the AKS engine and then work with it. This is mostly for developers and more technical teams per se. So, use cases again.
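For reference, standing up a cluster with AKS Engine can be sketched roughly like this. The subscription ID, DNS prefix, resource group, and API-model file name below are all hypothetical placeholders:

```shell
# Sketch only: subscription ID, names, and file path are hypothetical.
# aks-engine reads a JSON "API model" describing the desired cluster
# (Kubernetes version, node pools, networking) and deploys it as ARM templates.
aks-engine deploy \
  --subscription-id 00000000-0000-0000-0000-000000000000 \
  --dns-prefix my-cluster \
  --resource-group my-aks-engine-rg \
  --location australiaeast \
  --api-model kubernetes.json
```

The same API model can be reused to stand up an equivalent cluster on-premises or on Azure Stack Hub, which is what enables the gradual-migration story the speaker describes.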
If you need AKS features but you want to run them outside of Azure, this is a great place for you to start. And if you want to run a Kubernetes cluster on Azure Stack Hub, again, this helps you with that as well.

Okay, so that's that. Let's do a quick quiz to see how well everyone remembers what we just discussed. Question one: what is the fastest way of running a container in Azure which gives you the ability to connect to your on-premises environment through a hybrid connection? What option do you think is the correct answer? Just give me a letter. C, Service Fabric? We're not talking about those kinds of setups here, so let me narrow it down: from A and B, which one? B. Correct. Container Instances allow you to run a container quickly, but they don't allow you to do hybrid connectivity; Web Apps offer you that option.

Question two: which two services allow you to run your containers and auto-scale your workload? Actually, it could even be three. Not A. Correct: Container Instances don't even have auto-scaling. AKS, as we already know, has auto-scaling; Web Apps already have auto-scaling; and Service Fabric could, but you have to work for it, you have to build that infrastructure yourself. Among the managed services, auto-scaling comes built in on Web Apps and AKS.

So that's it. If you have any questions, I'm happy to answer them, but thank you so much for coming to my talk. I hope you enjoyed it. It wasn't a deep dive into any of the services; it was more of an introduction to what's available and when to use what. Hopefully you've benefited, and you can go and scan the QR code so you can play around with some of these services on your own. We also have a dev newsletter, which is designed for people in Australia, so it's a local newsletter.
You can use that link to subscribe. I think it's monthly at the moment, but it covers everything that's happening within the Azure space and the developer space. It's a really cool newsletter, definitely go and check it out. You can follow me on Twitter and LinkedIn, and if you want to check out some of the technical stuff that I've written, check out my website. And that's it. Thank you so much, folks. Thank you.