Building Enterprise Microservices using Ocelot and Azure - Patrick Zhao - NDC Sydney 2021


Microservices have been a blazing topic in the .NET world since .NET Core came out, and Microsoft has provided a detailed example of building microservices. In my talk, I will show how I have architected an enterprise solution with cloud-native services on Azure, using Ocelot. I will walk through Ocelot in depth and show the ins and outs of building an API gateway and protecting the services using VNet. A few buzzwords: API Gateway, Ocelot, API documentation, OAuth 2, Microservices, VNet


Hello everyone. Welcome to my session at NDC Sydney. Good afternoon to the audience from Australia, and good day if you are watching elsewhere around the world. Today I'm going to talk about how we can build enterprise microservices using Ocelot. Apparently the ocelot here has got a set of Azure eyes. My name is Patrick Zhao. I'm currently working as a Senior Software Architect at SSW's Melbourne office. I'm a full stack engineer and speaker. Here's my Twitter handle. If you want to be a little bit more formal, there's a link to my LinkedIn, and at SSW each consultant has his own profile page, so you can easily find me by Googling "Patrick at SSW". In my spare time, I like to follow some great open source projects on GitHub and also contribute to some of them. My recent interest is a conversational AI chatbot that can serve as a virtual receptionist. So if an organization wants to set up a virtual receptionist that can help its guests find its employees with certain skill sets and availability, you can feel free to download it and spin it up in your own environment. Come have a look. I also have a personal blog where, every now and then, I retrospectively summarize the lessons I've learned from my recent projects and my understanding of some new technology. Since I graduated from Melbourne Uni, I have been in the IT industry for over seven years, during which I have successfully led, architected and delivered a great number of solutions for my clients across different industries, with a special focus on the fintech industry, and my expertise and interests include big data, machine learning, AI and, most importantly, Azure. I've recently been helping a client migrate a giant application from a classic but a bit old-school architecture to a modern microservice architecture, and that's why I'm happy to share some of the knowledge, lessons and challenges I got from that project. So here's today's agenda.
First of all, when we talk about a monolith, it doesn't mean it's bad or it's a mess, and when we talk about microservices, although they're more modern and more optimized, it doesn't mean they're perfect. Let's do some comparison so that we have an overarching understanding of monolith versus microservices. Then I'll spend some more time on a deep dive into microservices, focusing on what kinds of patterns we should advocate when we are trying to implement microservices and what key elements a microservice solution is composed of. And then we'll narrow down and be more specific to Azure: what types of services we can leverage to build our microservice architecture. In the way that I usually deliver a presentation, I've prepared a demo, which is a quick PoC of a simplified microservice architecture that we can build on Azure. Okay, so monolith. What is a monolith? It doesn't mean that we have a bunch of Lego bricks which are completely unorganized, in a mess. No, it's not. A monolith is a unified architecture, and I see it as a very clean and clear architecture. What it means is that you have a user interface that the user interacts with, and all the way down it follows a layered pattern where each layer calls the one below. For instance, here the user interface interacts with the downstream layer, the business logic; the business logic interacts with your data access layer; and the data layer runs operations against the data store and returns the data all the way up. So the monolithic architecture is actually quite clean. It's not a bad thing to use a monolithic architecture, but it has its own problems. So next let's look at the pros and cons a monolithic architecture has. The biggest advantage when we use a monolithic architecture is that it's self-contained. It can have a lot of features and functionality within the box, but once it's done, it's self-contained.
Once we've done the DevOps, once we've done the pipeline to release it, once we've configured the hosting environment, it's done; it's all self-contained. So we only look after one suite of artifacts, and that way we can narrow our eyesight down to look after one particular thing, although it can be very complex inside. And since it's self-contained, it's easier to test and debug. And mind you, here, by testing and debugging I mean treating the monolithic application as a black box. So this is more from the integration test perspective: we don't need to break it down and dive very deep into the solution itself; we'd rather throw some input at it and assert on the output, so it's easier to test. And there's less overhead in cross-module communication. If you take .NET as the programming language, usually the cross-module communication is done by DLL references. If you build your application in Node.js, the cross-module communication is done by referencing npm packages. So there's no network traffic, no gRPC call, no REST, no GraphQL call, nothing like that. The overhead is minimal. With all that, although over time, when your business logic grows more complex, you might have a problem, to begin with, if you are not sure about the potential scope your application is going to grow to in three years, it's easier for you to just start with a monolithic architecture for your greenfield project. So we've looked at all the advantages. Now let's look at some disadvantages, or the potential problems. I've put an asterisk on this first one just to remind you that when we say a monolithic architecture can be tightly coupled, that's only if it's done wrong.
What I mean is that even with a monolithic architecture, if we follow clean architecture and incorporate some good patterns such as CQRS (command query responsibility segregation) or event sourcing (I'll expand on those patterns later, because they can also be used in a microservice architecture), then the monolith becomes a modular monolith. So this con can be flipped to a pro if we've done it properly. But it's very hard, because of the other issues that I'll show shortly. The second one is that it's hard to update individual features. Again, this downside can be flipped to an upside: if we've done it properly and we have a modular approach, then each individual feature can be updated individually. But the hard blocker with a monolithic application is that it has a massive codebase in a single repository. What this means is that since our monolith is self-contained, we have to have everything in one repository, including your database migration scripts, your application code, and most of your configuration. The result is that when a developer tries to pick up a product backlog item, they need to have at least a minimal understanding at a holistic level. So without an understanding of the overall application, you won't be able to even start working. The other hard roadblock is scaling. The scaling is also holistic: you have to scale the entire application, because it's self-contained and deployed in one single hosting environment, and that will incur unnecessary cost. I'll expand on that in a bit. So with all the downsides a monolithic application has, some organizations are thinking about moving from monolith to microservices, and this happens when there's a giant legacy system which could be smoothly running, or getting some patches and minor fixes, over the last two decades.
But when we are trying to migrate this type of application, for whatever reason, maybe some technology is end of support and we need to migrate, then migrating this big giant system all in one go is very difficult and risky. So what we tend to do is what we call the Strangler Fig pattern: we migrate the application bit by bit. This way there's less risk and it's easier to manage the overall project; we can tie it to smaller milestones, and the deliverables of each milestone are easier to manage and easier to scope. Secondly, we mentioned in the previous slide that scalability for a monolithic application could be a problem, because you have to scale at a holistic level. So if only a certain part of the system needs to be scaled up and down, in or out, microservices are the way to go. And lastly, microservices tend to increase your development team's productivity, because by embracing a microservice architecture, your engineers tend to require a smaller amount of knowledge from the business domain perspective, since they only look after a certain part of the system. They don't need an overarching understanding of the entire business, so they can spend more time delivering. Here's a good analogy that I found on Google. For instance, we have a Lego pyramid and something goes wrong; say we want to replace this green brick. If it's a monolithic pyramid, we have to take down at least the top three layers in order to replace this small brick, and then reconstruct it. Whereas once we've migrated to a microservice architecture, if we're trying to replace a small brick, because it's corrupted or for whatever reason, we only need to take down this small module and reassemble it once we've fixed the small brick. So as you can see, the cost of managing microservices, like patching or bug fixes, can be reduced if we've done microservices properly.
Now we've had an overview of monolith versus microservices. Let's move on to a more in-depth discussion of microservices, starting with the key components. First of all, usually we want to have an API gateway that serves as a single point of entry; the API gateway sits in front of your subsequent microservices and serves as either a proxy or handles more complex scenarios. We want to support different types of front end: it could be web, desktop, mobile. We also want to be able to deploy our subsequent services in a flexible way. And each microservice needs to have its own data store. Here the keyword is data store rather than database. Multiple microservices can potentially share the same database, but if they do, each needs to have its own schema. What this means is that we should prevent cross-service data access, and this is the key principle and pattern that we should follow when we are building microservices. From the coding pattern perspective, we want to advocate the event-driven pattern. And personally I like everything behind the gateway to be protected by a VNet, so that we have fewer worries about security and compliance practices. And we want to leverage a messaging system, because each microservice has its own data store, but sometimes we want to interact with another store or external data, and then we usually need some messaging system. I'll explain all these components in detail in a bit. So this is a great example that Microsoft has posted in their documentation repository. It's an e-commerce site, and they use it to illustrate the microservice architecture, and they've been actively updating it. For instance, a year or two ago when I saw and referenced this example, they advocated Ocelot, but now they use Envoy instead. So that means that as technology evolves, Microsoft has been actively looking after their documentation examples as well. It's a great resource.
As we can see, we have an API gateway, which has different variations: we can have an API gateway that serves as a facade to your APIs, or we can have backends for frontends if you want to support different types of front end applications. In this example they host the applications as Docker containers, but there are other deployment options, which I'll cover in a bit. The most important insight we can get from this example is that each microservice has its own data store. The data store can be a SQL database or can even be a Redis cache. If we want to interact between services, we do that via a messaging system. On Azure we have Azure Service Bus, or you can host RabbitMQ on it. Okay, let's move on. I just mentioned that there are deployment options other than Docker, so let's have a look. First of all, we can leverage PaaS services on public cloud providers; on Azure there are a lot of options. Or we can choose containerized applications; on Azure there's Azure Container Instances. And recently they just rolled out Azure Container Apps, which I personally don't have much knowledge about because it's new, but I'd like to explore it, because they advertise it as a serverless containerized environment and it has a smooth integration with Dapr, the distributed application runtime, and KEDA. So it looks bright; I'd like to explore that as well. Or we can always use Kubernetes if we want to manage the cluster ourselves and we have the expertise and experience in managing that; on Azure there's AKS, and it's quite a popular deployment option. So we've looked enough at the technical side. From the development team's perspective, if we start embracing the microservice architecture, it's not just about software or solution architecture, but also about methodology and development team management. We can onboard our new developers at a faster pace and more effectively.
And because we can onboard developers quickly, developers become pluggable: you can join a microservice team, do some work, finish off, and join another team if there's a need. So we can adapt to business requirement changes quickly as well. There's a nice chart here that I'm just replicating. With a monolith, it's so simple to start with that the development speed can be very quick at the beginning, but as time goes on, productivity is going to drop. Whereas in comparison, with microservices, to begin with you have a lot of cross-cutting concerns to worry about and a lot of things to set up, so at the beginning, when you're setting up the foundation, it can be slow, but over time productivity is going to grow, because you don't need an overarching understanding of the entire solution; you can rather just focus on a small piece. Okay, so from development to release, we have the DevOps exercise. Advocating microservices, we can adopt real agile: we can adapt to changes and have faster iterations. And the most important thing I'd like to highlight is that, compared to the traditional monolithic application, the concepts of deployment and release should be separated. What that means is that we want a progressive release strategy in which we can toggle features on and off. So, for instance, we have a feature that's handled by a single microservice; it's easy for us to turn it on and off, so we can release these features to the market without a formal notification to the users, collect some user telemetry, and get the feedback to improve our application. This can be done with a monolith if we've done the monolith properly, but with microservices it's much easier. So we've looked at all the goods and benefits and all the upsides of microservices; let's look at some potential challenges and traps.
The biggest challenge is that with a microservice architecture we can potentially get into a circumstance that we call a micro-monolith. What this means is that, sure, if we've done microservices properly, we don't have cross-service references, each service has its own data store, and we follow the event-driven pattern. But, for instance, there is a method sitting in a module that needs to be referenced by five different services, and if we make a change to that module, all five different services will be affected, which means we have to run a lot of tests to make sure our change to that module doesn't break those five services. If we have a lot of modules like that, we get into a situation where, although our services do not reference each other, they all reference the same thing. This is a micro-monolith, and we should avoid it. The way to avoid it is to package your modules and publish them as packages. In .NET we have NuGet; in the Node.js world there are npm packages. By properly packaging them and shipping them to NuGet, we get proper versioning, so services can choose to upgrade if they want to reference some updates in the package, or other services can stay with the older version of the package if they don't want to change. This way we can make sure the impact is minimal when we make a change in the shared library. The other thing is, from the data perspective, we don't want cross-service data referencing, but sometimes we have to. For instance, in a typical e-commerce application, while we are checking out, we want to reference a customer's information, such as the address, when the payment and the order are posted. So the payment service or the order service needs to reference the customer service. There are two ways we can approach it. The first one is to reference the external service.
What this means is that your checkout service can emit an event or can call your customer service directly, but regardless of which way, by following this approach we create a dependency between the primary service and the dependent service. And if your external service goes down, you need to work out how to mitigate the problem, and what we do is either retry or give up. As an alternative solution, we can duplicate some of the data in the microservice's own data store. As we know, storage is very cheap, so duplicating the data incurs an additional storage cost, which is not a big issue. However, because we're duplicating data, we need to be careful about the consistency level of the data. For instance, if our customer service provides the reference data for a user, and we need to synchronize that data across to your checkout service, then we need to guarantee that when the user is actually checking out, we get the current home address of the customer who is checking out. So the data synchronization process needs to be kept in mind: the synchronization needs to have a consistency level that's acceptable to the primary service. Of course, eventually the data is going to come through, but the data might not be there at that exact time. So there are considerations we need to take into account. At a higher level, this can be summarized as the reliability of individual services: because there can be dependencies between the different services, if we are not doing this correctly, there can be issues where a service going down affects other parts of the system. And the last one is the most difficult one, which I personally don't even have a good solution for: the domain boundary. What that means is that we are cutting different pieces of your business into different problem domains, and each problem domain by design should have a microservice to look after it.
If we are not setting the boundaries between the problem domains properly, we can potentially end up with some microservices that act as monoliths, which handle a lot, while other services handle a little, and we should avoid that. From my understanding, there's no quick way to tackle this; it all depends on your knowledge of and experience with the business domain. So next, let's look at the individual parts of a microservice architecture. First of all is the API gateway. It's a single entry point, and an API gateway is nothing but a bunch of middleware, so that the cross-cutting concerns of the HTTP pipeline can be abstracted out to their own layer that sits in front of your microservices. The microservices can then be doing their own thing rather than worrying about cookie affinity or JWT token validation, and so on and so forth. There are several tasks an API gateway should perform. The first one is the simplest scenario, where an API gateway is just a proxy: it proxies the upstream incoming traffic and routes it to the subsequent service. Sometimes it's more complex, where multiple services need to react to one request through the gateway. And there's an even more complex scenario where the front end asks for data that's spread across multiple services, and the API gateway needs to fetch the data and orchestrate it before it gives the response to the front end. And, as we saw in the Microsoft example, there are backend-for-frontend requirements for some of the API gateways. So let's look at Ocelot. Ocelot is a great library that can help us build an API gateway quickly. It's a bunch of middleware that's executed in a specific order, so you can see Ocelot as big middleware at the level of your entire solution rather than at the level of an individual API. There are other options; Ocelot is not the only choice. For instance, there's Envoy, and Microsoft's example is starting to use Envoy.
So it's a great alternative. And there are PaaS offerings on cloud providers; let's take Azure: we can use API Management or Application Gateway. They are different, but I'm not going to spend too much time comparing these options. There's just one thing I'd like to highlight: when your API gateway grows to a certain complexity, or you want to monetize it, at that moment you should start thinking about using a PaaS service rather than managing it yourself. Of course, we can build an API gateway in your Kubernetes cluster; for instance, we can leverage NGINX as an ingress controller. And I like to integrate my microservice architecture with a VNet, so that we can use it to protect the subsequent microservices while we only need to expose the API gateway to the public. This way we can ensure that the traffic between the microservices stays on the backbone network offered by the cloud provider instead of going over the public internet, so compliance and security can be guaranteed. Next, let's talk about cross-service communication. The first pattern I'd like to highlight is the Saga pattern. It has two variations. The first one is choreography. What this means is that, taking e-commerce as an example, you might have an order service; when a certain event happens, for instance your order has been created, the order service is going to emit some events, and some other service, in this case your customer service, may need to react to the event emitted by the other side. Once it's sorted out its own business logic and changed the state of the transaction, for instance from credit reserved to credit limit exceeded, or payment sorted, whatever the event, once it's done its own operation, it emits another event, so other sides can react to it. So this is how the events are chained up and flow through different microservices smoothly.
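A single choreography step like the one just described could be sketched with Azure Service Bus. This is my illustrative sketch, not the speaker's code; the namespace, topic name and payload are all hypothetical:

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Order service: emit an OrderCreated event after saving the order.
await using var client = new ServiceBusClient(
    "my-namespace.servicebus.windows.net",  // hypothetical namespace
    new DefaultAzureCredential());          // managed identity, no stored secrets

ServiceBusSender sender = client.CreateSender("orders"); // hypothetical topic

var message = new ServiceBusMessage(
    BinaryData.FromObjectAsJson(new { OrderId = 42, CustomerId = 7 }))
{
    Subject = "OrderCreated"
};
await sender.SendMessageAsync(message);

// The customer service subscribes to the "orders" topic, reserves credit,
// and in turn emits its own CreditReserved / CreditLimitExceeded event.
```

No service calls another service directly here; the broker is the only coupling point, which is what makes the choreography variation work.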
There's an alternative to this choreography pattern, called the orchestration pattern, where you have a single process that acts as an orchestrator: when events need to be emitted, it sends them to a message broker, and when the other side reacts to them, it gets the message back and then fires another event from the orchestrator. Either of these patterns works, but they are both event-driven patterns, which is good for a microservice architecture. As I mentioned, we want event sourcing when the state of something changes. Within the Saga pattern, when a particular transaction's state has been changed, we want to be able to trace these state changes, and sometimes we want to rewind or replay. So the event sourcing pattern is a great choice for that requirement. To implement the event sourcing pattern, we need some pub/sub messaging service. And at last I'd like to mention caching, but I also want to remind everyone that we need to use it with caution. For instance, if your external dependent service goes down, then you need some type of caching to make sure you can still get the data, but you need to be careful that you don't get stale data. So we need to introduce something called the circuit breaker pattern. It's not a real circuit breaker, but it works in a similar way: whenever there's some abnormal current going through your circuit, you need to open it. So whenever there are abnormal events, like the external service going down, you want to make sure the service does not constantly ping it and constantly get an error; it should stop pinging it. And when the external service comes back, we want to re-establish the connection, but only allow a certain number of retry attempts, and we want to make sure the states are remembered properly, so there needs to be a state machine to handle that. So we need to be careful when we want to use caching. So we've gone through the elements and the patterns that we can use to build microservices.
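In .NET, that circuit breaker state machine is usually handled by the Polly library rather than written by hand. A minimal sketch, where the downstream URL and the thresholds are my own illustrative choices:

```csharp
using Polly;
using Polly.CircuitBreaker;

// Open the circuit after 3 consecutive failures; stay open for 30 seconds,
// then allow one trial call (half-open) before fully closing again.
var breaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(30));

var http = new HttpClient();
try
{
    // While the circuit is open, this throws BrokenCircuitException
    // immediately instead of pinging the failing dependency again.
    var response = await breaker.ExecuteAsync(
        () => http.GetAsync("https://customer-service.internal/api/customers/42"));
}
catch (BrokenCircuitException)
{
    // Fall back to cached data here, being careful about staleness.
}
```

The library remembers the closed/open/half-open state for us, which is exactly the state machine requirement mentioned above.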
Let's see what services are available that can help us. First of all, Azure Key Vault: we can store our secrets and credentials in Key Vault, and multiple services can reference them using managed identities. That way we don't need to store the secrets across different microservices. What messaging services can we use to implement the pub/sub, sorry, the event sourcing pattern? We have Azure Service Bus, which supports both the queue and the topic/subscription model. There's a simplified version of a queue on Azure Storage accounts, and there are also Event Hubs and Event Grid, so there are a lot of options. If you want to implement the event sourcing pattern, you should have a look at these managed services rather than building the messaging system yourself. Here is a simplified architecture diagram; I think it's easy when we build a PoC. As we can see, if our API gateway supports WebSockets, then we don't need SignalR; that said, Azure has a great managed service called SignalR Service that can help us manage our broadcasting. I like to place a VNet to protect our microservices, so the communication within the VNet is guaranteed to be secure. And we have our Service Bus, which we can use for the pub/sub pattern. As we can see, each service has its own store; it can be Cosmos DB, NoSQL or SQL, or a cache, or a data lake, whatever store. And as cross-cutting concerns, we can use Key Vault to store our keys and secrets, we can use Application Insights for our applications, we can use managed identities, and we can support different types of front end, whether it's desktop, web or mobile, built with different technologies. We've done enough discussion; let's get to a demo. In my demo I'd like to show you how I use Ocelot to build an API gateway and how it offloads traffic to the subsequent services. We start with adding the Ocelot package and loading the configuration files.
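That gateway wiring can be sketched roughly as follows. This is my sketch based on the public Ocelot package (and the MMLib.SwaggerForOcelot package, which I'm assuming is what provides the gateway-level Swagger UI here), shown with the minimal hosting model:

```csharp
// Program.cs of the API gateway project
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// Load the Ocelot route file for the current environment,
// e.g. ocelot.Development.json, alongside appsettings.json.
builder.Configuration.AddJsonFile(
    $"ocelot.{builder.Environment.EnvironmentName}.json",
    optional: true, reloadOnChange: true);

builder.Services.AddOcelot(builder.Configuration);
builder.Services.AddSwaggerForOcelot(builder.Configuration);

var app = builder.Build();

// Aggregated Swagger UI for the downstream services.
app.UseSwaggerForOcelotUI();

// Ocelot is terminal middleware: everything it matches is routed downstream.
await app.UseOcelot();
app.Run();
```

The talk predates the minimal hosting model, so the demo most likely uses a Startup.cs class, but the AddOcelot/UseOcelot calls are the same.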
Similar to how we load app settings, we can load the configuration files for different environments, and then we need to do some configuration: in our Startup.cs class, we register the service and then we use the service. There's nothing special compared to other libraries. Now, sometimes we want to use Swagger, so it's easier for us to play with our APIs. The way we add Swagger for Ocelot is by bringing in some additional packages: we add Swagger for Ocelot, and then we also add the UI. Next, if you want to add some authentication scheme, for instance a JWT or cookie scheme, we can do so by adding our scheme in our API gateway, the same project where Ocelot is hosted. And here's an example of the JSON configuration. As we can see, there's an upstream pattern that we define. Whenever the gateway gets a request matching an upstream pattern, it will be mapped to a downstream pattern, to our downstream service. Apparently they are local; I'll show you the configuration file in a bit, but it's going to route the traffic to the downstream service. And if there's any authentication scheme, we need to register it here by referencing the provider key. Ocelot also supports a lot of features, and one of the greatest features I like is the header transform. What it means is that it can extract certain information from your authentication token. For instance, from a JWT token we can extract the claims and then transform them into headers for the subsequent service, so that your subsequent service can reference the header directly rather than parsing the JWT token again. And if you want to use Swagger to interact with the subsequent services, what we have to do is add the Swagger gen definition in the subsequent services, so that on the Swagger UI we're able to call the subsequent services. All right, so next, let's get to the code. First of all, let me quickly show you the coding structure. In the demo that I've prepared, we have an infrastructure folder as well as the source code.
I prepared the API gateway and two subsequent services. First of all, let me show you the infrastructure. I have a bunch of Bicep files. I'm pretty sure you know what Bicep is from William's talk, which happened on, I think, the other track just now, so I'm not going to expand too much on what Bicep is and how things are built up. But there's a nice feature I like about Bicep: we have a visualizer in VS Code, and as we can see, we have multiple services. We have the API gateway, web service one and web service two, which I host on an App Service plan (the server farms). And we can move things around to make it easier to read. What they do is that they all reference the same App Insights and all reference the same Key Vault, so that we don't need to store any credentials across the different microservices. That's nice and cool. And each service can have its own data store; you don't see that structure clearly here, because they reference the Key Vault to get access to the database. And I'll also show you how we can place a VNet. I don't have the VNet in this Bicep for the sake of simplicity, but I'll show you in the Azure portal in a bit. So let's go over to the code. First of all, let's have a quick look at how we scaffold the API gateway. It's nothing different from a normal web API project. As I showed before, we can load the ocelot.json files for different environments. If you look at the file, which we can run locally, it's a JSON file, and it has an object called Routes, another one for Swagger, which we use to configure Swagger, and also a global configuration. So we can have different upstream patterns mapping to different downstream patterns, over different protocols and certainly different ports. I won't demo the DevOps process, but in your DevOps process you need to somehow do some JSON file transformation, or you can generate the production ocelot.json yourself programmatically.
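For reference, a minimal ocelot.json along those lines might look like the following. The hosts, ports and claim name are hypothetical, and note that Ocelot versions before 16 call the top-level array `ReRoutes` instead of `Routes`:

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/service-one/{everything}",
      "UpstreamHttpMethod": [ "GET", "POST" ],
      "DownstreamPathTemplate": "/api/{everything}",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5001 } ],
      "AuthenticationOptions": {
        "AuthenticationProviderKey": "Bearer",
        "AllowedScopes": []
      },
      "AddHeadersToRequest": {
        "X-Special-Claim": "Claims[special_claim] > value"
      },
      "SwaggerKey": "service-one"
    }
  ],
  "SwaggerEndPoints": [
    {
      "Key": "service-one",
      "Config": [
        {
          "Name": "Service One API",
          "Version": "v1",
          "Url": "https://localhost:5001/swagger/v1/swagger.json"
        }
      ]
    }
  ],
  "GlobalConfiguration": { "BaseUrl": "https://localhost:5000" }
}
```

The `AddHeadersToRequest` entry is the claims-to-headers transform discussed earlier: the gateway validates the JWT once and forwards the claim value as a plain header to the downstream service.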
So, for instance, on Azure, the downstream services are hosted on Azure, so we need to have that updated here, and then the downstream services can be different URLs or your own DNS names, managed by your DNS provider or Azure DNS. Now, as we saw before, we have the Swagger key that maps to the Swagger definition for the subsequent service, and also the authentication scheme that we want this gateway to enforce. And we also need to set the base URL to be where your API gateway sits. So this is the API gateway, and I've also prepared two subsequent services. We have service one, which has nothing but an endpoint returning some data, showing that it actually returns data from a subsequent service. And I prepared a login endpoint; usually one would put authentication in a separate service, but I just added a controller for the sake of simplicity. We can generate a token, and we want that token to be validated by the API gateway rather than by the subsequent services. As you can see, I haven't added any JWT token validation here, apart from the Swagger definition, which helps us use Swagger. On the other service that I prepared, we can try the header transformation: we expect to see the claim that's been transformed by the API gateway rather than provided by us on the API call. So let me just go back a step. On service one, in the login process, we've added a claim called "special claim", and we want to see that this claim can be extracted by our API gateway and attached to the subsequent call to service B, so that service B can recognize it. Okay, enough talk about the code; let's look at the demo in action. What I have is an API gateway; let me just make sure that I'm on the API gateway. Since we configured Swagger, from the API gateway's perspective we can see the definitions for service A and also service B. Let's just do a quick test to make sure that it works. It says hello from the service two controller (I use A/B and one/two interchangeably). Now, service one.
Then we can see that it says hello. Now, if we try to access the endpoint that requires authentication, as we defined in our file, what we get is a 401 Unauthorized. The same happens with the endpoint on service two: remember, we want to see the header that's been transformed here, but for now we can't access it, we get a 401. Okay, let's get back to the service and log in. Let's put in some username like Patrick. I haven't validated that it's a proper email, so let me just put in something random. Now it generates the JWT token, and let's quickly observe what the JWT token looks like. As we can see, it contains the name, ID and the email (I didn't validate it, sorry), but what we should highlight is this special claim. We want to see this being extracted from the token that passed through the API gateway, and also being recognized by the subsequent service.

So let's go back and grab a copy of it. The cool thing about Swagger for Ocelot is that you can authorize here: you only need to authorize once, and all the subsequent services can leverage it. So first, let's test the endpoint that requires authentication. Remember we used to get a 401, but now we get a 200. Switch to service two, and here let's try to see if we can get the header. If we click Execute, we see we get the header out. Let's review our code a bit, just to make sure that we understand how it works: service two simply gets the header and returns it in the 200 response. So now let's test: if we try to pass it a different header, will it keep that header, or is the header transformed? If we click Execute, we see it's not taking the header that we passed to it; it's transformed by the API gateway. So this way your API gateway is the actual entry point, and it can intercept the traffic and attach the information you need to attach for the subsequent services. This is great. So I've deployed it on Azure.
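Wiring all of this up in the gateway's Program.cs looks roughly like the sketch below, assuming the Ocelot and MMLib.SwaggerForOcelot NuGet packages and a symmetric signing key shared with the demo login endpoint; the `Jwt:Key` configuration name is hypothetical, and in the talk's setup that secret would come from Key Vault rather than plain configuration.

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// Load the environment-specific Ocelot configuration (e.g. ocelot.Development.json).
builder.Configuration.AddJsonFile(
    $"ocelot.{builder.Environment.EnvironmentName}.json", optional: true);

// "Bearer" must match the AuthenticationProviderKey used in ocelot.json,
// and the signing key must match what the demo login endpoint signs with.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer("Bearer", options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = false,
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!))
        };
    });

builder.Services.AddOcelot(builder.Configuration);
builder.Services.AddSwaggerForOcelot(builder.Configuration);

var app = builder.Build();

// Serves the aggregated Swagger UI for all downstream services.
app.UseSwaggerForOcelotUI();

await app.UseOcelot();
app.Run();
```

Because the gateway owns token validation, the downstream services stay free of any JWT configuration, exactly as shown in the demo.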
If we jump onto Azure, what we can see is, let me take a step back, maybe we need to make this a bit smaller. We have a bunch of services provisioned on Azure, and there's nothing special apart from the VNet configuration. As I mentioned before, we want to have a VNet to protect our services. What that means is that we can access the API gateway endpoint, but not the services themselves. So let me quickly show you what I mean. If we click on one of the microservices and click on its URL, we can see it, because I haven't put the VNet access policy in place. But if we go to another service where, as I remember, I have already put the access policy in place, we'll get a 403. And the reason we get a 403 is that we have the networking configuration in place.

By that, what I mean is, if we go down and move on to the Networking section, bear with me. Networking. There we go, it's over here. We can see that we have access restrictions on. What access restrictions means is that it only allows the traffic that's been sent to us from within the VNet, and it denies all the other traffic. So how come, then, the API gateway can offload traffic to it? It's because we've configured it to do so by adding the API gateway to the VNet, and that's done in the Networking section over here. As you can see, we have VNet integration on for the API gateway. This means that the outbound traffic routed from the API gateway goes through the backbone network on Azure, under this VNet and subnet. So whenever you impose policies on the subsequent microservices, only the traffic that you configured to allow will be allowed; all the other traffic from public requests will be denied. Just as a quick proof: if we take the access policy off the service that we had the restriction on, what's going to happen? Let's see Networking again. So here, this one. If we take this off, remove it, now we don't have any deny policies.
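The talk's Bicep leaves the VNet out for simplicity, but the portal steps shown here — VNet integration for the gateway and an access restriction on the downstream service — could be expressed in Bicep roughly as below. This is a sketch: all resource names, address ranges and API versions are hypothetical, and the two App Services are assumed to already exist.

```bicep
// VNet with a subnet delegated to App Service, for regional VNet integration.
resource vnet 'Microsoft.Network/virtualNetworks@2021-02-01' = {
  name: 'demo-vnet'
  location: resourceGroup().location
  properties: {
    addressSpace: { addressPrefixes: [ '10.0.0.0/16' ] }
    subnets: [
      {
        name: 'gateway-subnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
          delegations: [
            {
              name: 'appServiceDelegation'
              properties: { serviceName: 'Microsoft.Web/serverFarms' }
            }
          ]
        }
      }
    ]
  }
}

resource gatewayApp 'Microsoft.Web/sites@2021-02-01' existing = {
  name: 'apigateway-demo'
}

resource service1App 'Microsoft.Web/sites@2021-02-01' existing = {
  name: 'service1-demo'
}

// Route the gateway's outbound traffic through the subnet (VNet integration).
resource gatewayVnetConfig 'Microsoft.Web/sites/networkConfig@2021-02-01' = {
  parent: gatewayApp
  name: 'virtualNetwork'
  properties: {
    subnetResourceId: vnet.properties.subnets[0].id
  }
}

// Access restriction: allow only traffic from the gateway's subnet;
// everything else is denied by the implicit "Deny all" rule.
resource service1AccessRestriction 'Microsoft.Web/sites/config@2021-02-01' = {
  parent: service1App
  name: 'web'
  properties: {
    ipSecurityRestrictions: [
      {
        name: 'allow-gateway-subnet'
        action: 'Allow'
        priority: 100
        vnetSubnetResourceId: vnet.properties.subnets[0].id
      }
    ]
  }
}
```

Codifying the access restrictions like this avoids the manual portal step the demo walks through, so a fresh environment comes up locked down from the start.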
So by default it allows all the traffic. After that, if we try to access service one again, we'll see that we're able to access service one from its own URL. By leveraging VNets, it's easier for us to protect our microservices and offload the security and compliance concerns onto the platform, which is great. We can have fewer of those concerns, so that we can have better productivity delivering business value in the main application. Okay, with that, I've demonstrated how we build the API gateway and deploy it as an App Service on Azure. Let's go back to my presentation and see a quick summary.

First of all, we compared monoliths and microservices at an overview level, and we understood that a monolith is not completely wrong, while microservices have their own challenges. Although, I believe that down the track, as business becomes more and more complex in the modern world, microservices tend to be the architecture of preference for most big organizations. We also had a deep dive into the different key elements of a microservice and the cloud patterns that we want to adopt when implementing microservices. We also spent a lot of time looking at the services Azure offers to help us implement that clean architecture and those good cloud patterns, and we also saw a demo of how I built a quick PoC using Ocelot as the API gateway with the other subsequent services in a microservice architecture.

So with all that, at the end of my presentation, the references: I have uploaded the code that I showed you to my personal GitHub account. I'll be updating it to add more stuff, so hopefully you can use it as a good template to start with a microservice architecture, and I also have a reference to the Microsoft docs as a great resource for the different patterns. All right, with all that, thank you very much. If you have any questions, you're welcome to drop them, or if you want to discuss offline, let me just move on to this slide: you're welcome to reach out on these social media links.