Video details

Fundamentals of Microservices

Microservices
04.10.2022
English

Get an introduction to microservices that will give you a working understanding of hybrid architectures, containers and Kubernetes, Ingress controllers & more.
PUBLICATION PERMISSIONS: Original video was published with the Creative Commons Attribution license (reuse allowed). Link: https://www.youtube.com/watch?v=HmmiSnt-DY4

Transcript

Thank you very much, Christina. Good morning, good afternoon, good evening. Thank you, everybody, for joining this webinar today. My name is Dial Kingston. I'm a solutions architect at F5. I spend a lot of time helping customers and our community adopt NGINX and F5 technology. From looking at many environments and topologies, we see many patterns in modernization, microservices and cloud-native technologies. So today I'm going to speak primarily about the fundamentals of microservices. Now, I know this is quite a broad topic, so I will do my very best to go through some of the most popular trends and technologies that we see when speaking to the community and to our customers. There are so many reasons why companies would shift from a more traditional monolithic architecture to a more cloud-native, microservices-based architecture. It could be to reduce costs, it could be to avoid single points of failure, it could be to shorten the application development cycle in a DevOps environment, perhaps. But there are so many reasons why. So I will explain at a very high level some of the technologies that we see in some of the most popular applications in the world today. I will briefly describe what a modern app is, why some companies should modernize, and why DevOps is important. Of course, we cannot talk about microservices without mentioning things like containers and Kubernetes. We will spend a little bit of time going through what an Ingress controller is. We will also talk about service mesh, which is a very hot topic right now. Do you need one? What is it? Are you ready for one, perhaps? And finally, it's important to understand why production-grade applications and solutions can save you time, simplify your architecture and reduce costs. So let's get started. This slide in particular here is a maturity model that represents the different stages where companies tend to sit, the first stage being a monolith or traditional application. This is usually an application that is built as a single unit. A good example of this would be a very simple application that has a database, a client-side user interface, HTML pages, for example, and a server-side application, let's say a PHP application. To make any alterations to the system, a developer may need to build and deploy an updated version of the server-side application, which can be very slow. It's very difficult to scale, and overall, making one change to the application might require a big bang release. The next stage is what we like to call hybrid, which is essentially a mix and match. You have some modern components and you have some traditional components. Most customers I've spoken with fall within this umbrella, because obviously we have some modern applications, but traditional applications are not going away anytime soon. So let's say, for example, you want to create some microservices-based functionality, but the core of the application is still the monolith. For example, maybe you have created an authentication service using modern technologies. A mobile application might be a good example of a hybrid application. In front of you, it looks like a very modern app, but behind the scenes it might be a traditional app with old technology doing the bits and pieces. And then we get to the next stage, what we call microservices. This is a modern application, usually built from the ground up as multiple isolated services that are stitched together, usually into a single application.
Perhaps it's born in the cloud, but they are most definitely developed, deployed and tested with a very sophisticated CI/CD pipeline. There's usually automated testing and release orchestration. Maybe they're in Kubernetes, but usually these are very specific digital services, maybe something specific to your industry or business unit. So as I mentioned, most of the companies that I've worked with have fallen into the hybrid and monolithic buckets. I suppose we're seeing more and more companies adopt a hybrid model where they're trying to transition to more modern environments, but microservices is the ideal state that many companies shoot for. So here is an example of a monolith. This is a taxi application, and as you can see, we have components for payments, trip management, billing, and so on and so forth. Most likely a very large chunk of code with many components, but here it's a single unit. It might have a single shared database. Releases are very slow, perhaps following a waterfall methodology, and because all of the components of the application are linked together, if you wanted to make an update to one component, you might have to bring down the entire application. Maybe you're releasing a new update every six months as a big bang release. The services are very tightly coupled, they're very dependent on each other, and communication between each of these services is done using synchronous method calls. So when you flip this monolith on its head a little bit, we separate these components. We now have smaller pieces of code per service, and maybe they're connected via APIs. You can see the REST API logo here. So now we have a microservices environment where each microservice runs in its own process. These may be deployed in containers or pods in Kubernetes, and they all communicate with each other using a mechanism such as a REST API. But the idea, to simplify, is one microservice for each function. This is not going to happen in a single step. It could be very expensive, very risky, and you don't want to re-architect your entire application in one go. This is going to take time. There is a pattern known as the strangler pattern that you may be familiar with. The idea is that you add small pieces of functionality as microservices and repeat the process. For example, the authentication service I mentioned earlier. But it's very important you adopt a DevOps mentality here: having proper source control, having automation, having the teams organized around service ownership. Services in a microservices environment should all work together as loosely coupled rather than tightly coupled services, and each service should have one job and should do that job very well. They are isolated, so each microservice might have its own data, so it can evolve and scale by itself. And if you needed to update the application, you can just update that specific microservice. So microservices: the idea is to take an application, take specific components, and compose them into loosely coupled and independently deployable services. Usually microservices are very maintainable and testable. They're usually smaller, self-contained, loosely coupled. We are using APIs, of course; modern applications oftentimes need an API, and this could also mean message brokers or event streamers. Also, it's possible that each microservice might have its own language.
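To make the "one microservice for each function" idea concrete, here is a minimal, hypothetical sketch (not from the webinar) of what one of those small, single-purpose services could look like in Python with Flask: a billing service from the taxi example that does one job and talks to a separate trip-management service over a REST call. The service names, URLs and pricing logic are all invented for illustration.

```python
# Hypothetical "billing" microservice for the taxi example (illustration only).
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)

# In Kubernetes this would typically resolve via the cluster's internal DNS
# (a Service name); here it is just an environment variable with a default.
TRIP_SERVICE_URL = os.getenv("TRIP_SERVICE_URL", "http://trip-management/api/trips")


@app.route("/api/billing/<trip_id>", methods=["GET"])
def bill_for_trip(trip_id):
    # Instead of an in-process function call (as in the monolith), the billing
    # service asks the trip-management service for trip details over HTTP.
    resp = requests.get(f"{TRIP_SERVICE_URL}/{trip_id}", timeout=2)
    resp.raise_for_status()
    trip = resp.json()

    # The billing logic itself stays small: one service, one job.
    amount = trip.get("distance_km", 0) * 1.25 + 2.50
    return jsonify({"trip_id": trip_id, "amount": round(amount, 2), "currency": "EUR"})


if __name__ == "__main__":
    # Each microservice runs as its own process, typically inside a container.
    app.run(host="0.0.0.0", port=8080)
```

Again, this is only an illustration; the point is that the service is small, owns one job, and talks to its peers over the network rather than through in-process function calls.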
Maybe you have one microservice deployed in Java and another microservice deployed in PHP, but usually they're organized around business capabilities, so you're separating services to have specific capabilities. Obviously, having teams organized so that specific teams manage specific microservices can definitely help. Coming from F5 and NGINX, proxying solutions for these environments are changing, in that the traditional load balancer might now be known as an API gateway or an Ingress controller in Kubernetes, or perhaps you're using a service mesh. So this is why we hear terms like API management, Kubernetes, and service mesh solutions like Istio. There are many different areas of change in the migration from monolithic to microservices. We're moving to APIs, we're moving to the cloud, perhaps we're moving to containers, moving to more lightweight protocols like a RESTful API. Release cycles are changing. If you're within a DevOps environment, you might be releasing multiple times per day. Obviously, with a monolith, the bigger the application gets, the longer and less frequent the releases get; as you move to microservices, we're releasing multiple times per day. You can have development of different microservices happen in parallel, which is a huge advantage, which brings me to collaboration across teams. Teams are managed differently. We're moving to a DevOps culture where DevOps teams are more involved with the entire release process, we have automation, and you can use many different programming languages as you choose. Just to take a little step back here and focus a little bit on some of the key trends we're seeing within the microservices landscape. Nothing should be too surprising here, but I just wanted to share our perspective on what we are seeing in the enterprise. Organizations are modernizing at a rapid pace. Three quarters of respondents to the State of Application Services survey reported that they are modernizing internal and customer-facing applications, with APIs and containers as the primary methods, given their ability to combine modern and traditional components. DevOps is on the rise. Obviously, DevOps is very critical to agility, and things like automation can speed things up. You might be familiar with automation tools for infrastructure as well as applications, such as Terraform; you can set up and manage your infrastructure via APIs with tools like Terraform. So just to mention Kubernetes very briefly: Kubernetes adoption continues to increase, and at NGINX we closely track the Kubernetes and cloud-native journeys. We often ask our community a number of questions around Kubernetes adoption, and we did so last year. 35% of our community said that they were using Kubernetes in production, another 35% said they were actively exploring Kubernetes, and 30% said that they haven't adopted Kubernetes yet. When asked when they plan to implement Kubernetes, 72% reported plans to put Kubernetes into production within the next twelve months. So yes, Kubernetes adoption continues to accelerate. It's a common strategy in modern app initiatives, and it's definitely a very important part of that microservices journey. Now let's take another step back and focus a little bit on microservices technologies. So, containers. One of the most important technologies that allows microservices at scale is the container. Why are containers so popular? Well, compared to virtual machines, containers are quick to build.
They're small, which means that they can be stored and transported over a network from one environment to another. They're very well defined, they can run anywhere, and they're stateless as well most of the time. So containers are a key component of that microservices journey. They are a solution to the problem of how to get software to run reliably when moved from one infrastructure to another. Put simply, a container consists of an entire runtime environment: an application, its dependencies, libraries and other binaries, and all the configuration files that are needed, bundled into one package. By containerizing the application and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away. If you look at a virtual machine, traditionally, when using a virtual machine for applications, you're taking an entire operating system as well as the application. You might have a physical server that runs three virtual machines: you have a hypervisor and a separate operating system running for each virtual machine. So that's heavy. By contrast, on a server running containerized applications with Docker, for example, you have a single operating system, and each container shares that operating system kernel with the machine. So that means the containers are much more lightweight, and they use far fewer resources than virtual machines. This is the foundation for bringing portability to microservices applications, as well as some legacy applications. Of course, it is possible to put legacy applications into a container; the difference obviously comes down to the size of the application, its dependencies, and so on and so forth. There is an analogy known as cattle versus pets that you might be familiar with. The idea is that in the old way of doing things, we treated our servers like pets. For example, a mail server: you might have a mail server downstairs in the server room. If that server goes down, it's all hands on deck. The CEO can't get their email and it's a big problem. In the new way of doing things with microservices, it's more like cattle in a herd. If a service goes down, you just replace it there and then. You just spin up another container and it's not really a big deal. And in Kubernetes, for example, you scale your applications horizontally. When a container completes its job, you can either destroy it or scale it down. So it's a very dynamic environment; it's much different. Now, this isn't a Kubernetes course, so I'll do my very best to explain this at a very high level. Kubernetes is the orchestrator for containers. It's the magic that makes it all happen. When it comes to Kubernetes, there are multiple components. First of all, there are multiple nodes. You have a Kubernetes master node, you will have Kubernetes worker nodes, and you have an internal network. You might have an Ingress controller, you might have a load balancer. The master node will contain the key Kubernetes components. Your worker nodes will contain the containers of your applications. Each of these interacts via an internal Kubernetes pod network. And of course, you might have an Ingress controller bringing traffic in; this is your Layer 7 entry point. But the idea is that Kubernetes is all about container orchestration. It's about managing and coordinating the lifecycle of containers, especially in large, dynamic environments. Software teams use tools like Kubernetes to control and automate many tasks.
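As a hedged illustration of what automating those tasks against the Kubernetes API can look like, here is a small sketch using the official Kubernetes Python client; the deployment name "billing" and the "default" namespace are assumptions made up for the example.

```python
# Sketch: inspecting and scaling workloads through the Kubernetes API
# using the official Python client ("kubernetes" package).
from kubernetes import client, config

# Load credentials from the local kubeconfig (use config.load_incluster_config()
# when running inside a pod).
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# List the pods currently running in a namespace -- the kind of visibility the
# orchestrator gives you across all worker nodes.
for pod in core.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)

# Scale a deployment horizontally by patching its replica count; Kubernetes
# then creates or removes pods to match the desired state.
apps.patch_namespaced_deployment_scale(
    name="billing",            # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```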
Some examples of those tasks could be provisioning containers, deploying containers, scaling your containers up or down, moving containers from one node to another, exposing containers to the outside world, load balancing, monitoring containers; the list goes on and on. But the idea is that the Kubernetes cluster is the management and control plane for your containers running inside of the cluster. More on the Ingress controller later. Now, there are absolutely some drawbacks when implementing a modern microservices architecture. There are so many changes happening in this environment. For example, you might have a traditional application with function calls that are very easy to configure and everything makes sense, whereas now, with a microservices application running in Kubernetes, for example, communication between your services is much more complicated. There are now network calls rather than function calls. Debugging is very difficult because it's not just one application on a single machine anymore; it could be one application spread across multiple machines. That application might have multiple microservices, and each microservice has its own set of logs. Tracing and finding the source of a problem can be very difficult. When it comes to testing, it might be easier to unit test with microservices to test the functionality of a component, whereas integration testing, meaning testing the entire application, is more difficult because the components are now distributed, so developers cannot test an entire system from their individual machines anymore. We are updating the application a lot more. Yes, that's a good thing, but you need to spend time automating and learning how to roll back if there are issues. If you're adopting a DevOps mentality and you're writing an Ansible playbook or a Chef script for an automation task, you have to write another for rolling back if there's an issue. That takes time. If each microservice has its own API, they need to be consistent: if you're updating the API of one microservice, the other microservices need to be in sync and understand the new API version. One thing that I haven't mentioned here, but should, is that what we're seeing more and more is companies adopting a multi-cloud strategy when they're deploying their microservices. So they might be deploying application services across multiple clouds like AWS and GCP, and some of you might have something on-prem. When different microservices are spread across multiple different clouds, it can be very difficult to keep track. So all of these points here relate to that. Here's an example of an environment that we often see. Some of the most popular applications we see have something similar to this. It's important to note that every environment is different, but the flow is oftentimes very similar. You have multiple components, with many teams looking after each of those individual components. You have the application teams looking after the application, DevOps teams ensuring that updates are happening and the CI servers are working, and you have all your automation scripts. You might have a security team looking after the web application firewall, and you might have an infrastructure or networking team looking after everything else on the networking side. Maybe you're using open source solutions or enterprise solutions.
Maybe you're using an NGINX Ingress controller or an F5 BIG-IP for security; perhaps you're using monitoring tools like Grafana and Prometheus, authentication tools like Okta or Keycloak, different security products, automation tools like Ansible, DevOps tools like GitLab. The list goes on and on, but this is the flow that we often see: you have your code repository, you have your CI/CD pipeline, you're deploying the application within containers to your Kubernetes environment, and you have external tools for monitoring, for security, for logging, and so on and so forth. What I'm trying to say here is that this is the data plane. The data plane is there to control and monitor how traffic is sent to and routed within the microservices application. This is probably the most important point: we depend on the data plane. Kubernetes and microservices are often used hand in hand, almost synonymously these days. I guess Kubernetes has emerged as the favorite container orchestration platform, so it is the gold standard for modern container-based microservices. But the data plane, which is your traffic flow to your applications, handles all traffic from the client to the application container, and that includes load balancing, proxying, security, analytics, open tracing. All those things are very important. And usually there are specific teams looking after specific areas within that flow: developers, infrastructure engineers, security teams, operations teams, and so on and so forth. So, very briefly, some of the challenges and concerns that we've seen. We asked customers recently, what are your biggest concerns around Kubernetes? We got a multitude of answers, ranging from very small details to very broad concerns about configuration. The learning curve was obviously very popular. How you handle persistent data in a Kubernetes cluster was another very popular point that was made, but the four big ones were knowledge, complexity, security, and scalability. Knowledge: the biggest concern was not being able to understand the technology and how it works. That makes sense, because Kubernetes networking is hard, Kubernetes security is hard. There's a pretty steep learning curve for someone who's new to Kubernetes. You have to understand container networking. It's a completely different language. That leads me into complexity. Even when Kubernetes is deployed in its out-of-the-box form without management tools like OpenShift, Kubernetes is pretty well documented, but the networking is completely different from what came before, containers are still a relatively new technology, and there are other things like certificate management complications. Security is then the next point. To be honest, Kubernetes, when it's deployed out of the box, basically has no security turned on, which is quite a risk. Enterprises having to learn Kubernetes as they deploy applications is probably one of the main reasons why things take time and why adoption is slow. When I say security doesn't come out of the box, I mean that if you're deploying applications in Kubernetes with no web application firewall, you might start exposing applications directly with no proxy, and that can be a security risk. And finally, scalability. Now, this is kind of ironic, because the idea is that you have a Kubernetes cluster and you can scale your applications or your pods at will. But what we mean by scalability here is that because Kubernetes is so complex, a lot of the concerns are around scaling the platform itself.
It's very challenging to operate a Kubernetes cluster at scale. If you have multiple nodes, multiple pods within each node, and you're continuously scaling, you need to have resources for that. You need to have a team looking after the resource configuration of Kubernetes, and it can be quite complex, and it does become a concern. Okay, so the Ingress controller now. As with microservices, containers have really become very popular because they provide a massive benefit to the application development process. They scale very well, they provide a nice isolation layer, and many microservices applications rely on this technology to operate in Kubernetes. So traffic management into and within Kubernetes is often handled by a load balancer, what we call an Ingress controller. An Ingress controller is responsible for bringing traffic into your Kubernetes cluster. Think of it as an NGINX proxy for now. It's configured differently to a standalone NGINX proxy, but it's essentially an NGINX load balancer sitting in front of Kubernetes. So it's Layer 7, HTTP primarily, and it brings traffic in and deals with north-south traffic. Everything you would use NGINX for, like TLS termination, load balancing, and much more, that is where you would use it. The NGINX Ingress controller uses something called an Ingress resource to configure itself, and it does a lot more than just load balancing. Obviously it can scale like other containers do. It can monitor the status of your pods, do health checks, TLS termination, and so on and so forth. But it's essentially your Layer 7 load balancer that brings external traffic into the cluster. It's important to note that there may also be another load balancer in front of the Ingress controller; this could be a DNS load balancer, it could be a cloud TCP load balancer, perhaps. Every environment is different. Of course, it could be a BIG-IP if it's on-prem. So usually you have your DNS service that routes traffic to the Ingress controller, and then the Ingress controller handles all of the application traffic. One struggle that we often see when companies adopt microservices within Kubernetes in dynamic environments is scaling the infrastructure. Another problem is for the actual application teams: it's getting very complicated for them to operate, because they're designing the entire environment around application complexity, so web application firewall policies, routing rules, rate limiting, API management, and so on and so forth. The teams do get complicated, organizing around these complexities. I myself have more of an application DevOps background. I used to be a developer, but I'm very familiar with deploying applications, writing applications and getting them ready for production. What I'm seeing a lot more is the complexity around networking and security. When you start deploying these applications onto the network, you have iptables rules and security policies and all these things. Application teams are starting to feel more like infrastructure teams, and security teams are starting to feel more like application teams, getting stuck into Kubernetes. So, service mesh. Very briefly, we return to our distributed microservices application here. The Ingress controller is responsible for controlling traffic coming into the application, and this is known as north-south traffic. It has no visibility or control over traffic flowing within the application. We call this east-west traffic.
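Before going further into east-west traffic, here is a rough sketch of the Ingress resource mentioned above, expressed through the Kubernetes Python client rather than the more common YAML manifest; the hostname, path, Service name and namespace are hypothetical, chosen only for illustration.

```python
# Sketch: defining an Ingress resource that routes one hostname and path to a
# Service inside the cluster. An Ingress controller (NGINX or otherwise)
# watches for resources like this and configures its own routing from them.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="taxi-app"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="taxi.example.com",  # requests for this hostname...
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/billing",
                            path_type="Prefix",
                            # ...are routed to the billing Service in the cluster.
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="billing",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

networking.create_namespaced_ingress(namespace="default", body=ingress)
```

The equivalent resource is more often written as a YAML manifest and applied with kubectl; either way, the Ingress controller translates it into proxy configuration for north-south traffic.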
So if you have a microservices-based application here, and each of those microservices is deployed in containers and they're all communicating with each other via REST APIs, this is usually east-west traffic. NGINX brings traffic in, but once that traffic is within the Kubernetes cluster, it's within the application rather than the load balancer. This is where a service mesh might come into play, but it does depend on the application. Oftentimes the Ingress controller can do everything you need, but let's say you wanted to have mutual TLS security on all of the application traffic within Kubernetes, or let's say you wanted to limit traffic from microservice one to microservice two. These are very granular requirements we're talking about here, but that is why you would start looking at something like a service mesh, a sidecar proxy, for those scenarios. Let's simplify it. You have your Layer 7 Ingress controller. Traffic enters the cluster via the Ingress controller, and all of that traffic comes in at Layer 7. When traffic passes from the Ingress controller to the service and from the service to the pods, it's Layer 3 or Layer 4. So this could be okay if your application is simple and you don't have requirements for encryption, perhaps. But having a service mesh here would give you the ability to manage east-west traffic. So this could be mutual TLS between your pods. It could be for better granularity, like open tracing, where you want to see traffic communicating between each microservice. It could be for DevOps methodologies or things like A/B testing, canary and blue-green upgrades. Perhaps you might decide to do some rate limiting between your pods. There are many reasons why you would do this, but the biggest reason is oftentimes encrypting traffic within the cluster. So, some of the use cases of a service mesh, very briefly. It's important to understand that a service mesh only solves a very particular set of problems. Like I mentioned, if these types of features are not required, then you may be moving too fast. If you need mutual TLS or client-side authentication between services within Kubernetes, from the ingress to the egress, or for east-west traffic, maybe a service mesh might help you. If you need advanced load balancing, traffic splitting, A/B testing and access control within your cluster, that could be a use case. Also open source tooling: Prometheus, Grafana, all those things. Perhaps you want to view metrics and analytics on the traffic patterns you see in your cluster; that is something you could do with a service mesh. Oftentimes you don't need a service mesh: once traffic hits your Ingress controller, the Ingress distributes it across your individual application pods, and that is all you need. You should ask yourself if you could say yes to these items here. If you could say yes to these, then maybe you could benefit from a service mesh. If not, maybe not yet. It's a very complex technology. We're not trying to scare you away from this, but we are seeing a lot of companies trying to adopt a service mesh before they need one, and it causes a lot of complexity and headaches. If you've only started using Kubernetes, then maybe a service mesh isn't needed yet. If you have deployed an Ingress controller and it works very well to deliver your applications, then maybe you don't need one yet. But if you have a fully automated CI/CD pipeline, you're using Kubernetes, you have an Ingress controller deployed, you want to add mutual TLS for a zero trust environment,
and you want really granular traffic control within your cluster, then yes, the service mesh could definitely help you there. The idea is that you inject a sidecar proxy within your Kubernetes environment to have more granular control over your environment. So let's have a quick review of what we learned today. Microservices is not a destination. There will be different clouds; you might adopt microservices on-prem; you might use different tooling. The number of tools in the industry is extremely high: automation tools like Chef, Ansible, Puppet; cloud platforms like Azure, GCP, AWS; load balancing tools and Ingress controllers like NGINX, F5, HAProxy. The list goes on and on. We have multiple DevOps tools, CI tools like Jenkins, a DevOps platform like GitHub, and so on. There's actually a periodic table of DevOps tools out there that's worth looking at. But you will see DevOps methodologies, you will look at API management solutions, and you will certainly come across Kubernetes. It will take time, and it's too risky and expensive to move too fast. We spoke about service mesh and Ingress controllers primarily because they are key to production-ready microservices: the ability to control how traffic reaches your applications or microservices applications is very important, and that's where the data plane lies. So tools like NGINX and other Ingress controllers have a very important role to play there also. Okay, that is the end of the presentation. Let's have a quick look at the Q&A. Okay, so a question here: both monoliths and microservices have their pros and cons. Do you think that monoliths will completely disappear in the future, or will the number of them just shrink? That's a great question. The answer is I'm not sure, because when it comes to all the applications I've worked with, the monolith isn't going away anytime soon, and oftentimes some of the most modern applications we see out there today are built around microservices, but some of the core components are still monoliths. I do think they will shrink, personally, but not anytime soon. I think some applications are actually perfectly fine to be a monolith. It's important to understand that, too. My previous slide went through some of the cons of moving to microservices; all of those are relevant, and things like the complexity of having multiple containers for each of your application components may not be needed if you have a very simple application that does one job. For example, let's say you have an online blog that essentially contains articles, blog posts and images, it's pretty static, and you don't add updates too often; then a three-tier application model will be absolutely perfect for that: a database, a server-side application, and a client-side front end. Yes, there's a comment here: on top of Kubernetes concerns, security is the most scary. Yeah, and that's a good point, because when it comes to security, you hear things like zero trust, you hear things like service mesh for mutual TLS between your application containers. What we're seeing a lot more is the ability to deploy a web application firewall inside Kubernetes. I know at NGINX we do have the ability to deploy a web application firewall on the Ingress controller itself. There are multiple ways you could add security to the Kubernetes platform. You could put security outside on the external load balancer bringing traffic into Kubernetes. You could bring security to the Ingress controller.
As I mentioned, you can start encrypting all the traffic within Kubernetes so that no traffic can be accessed without encryption. There are a lot of ways to do it, but yeah, we're seeing a lot of companies adopt a zero trust model. Are microservices a viable option for small teams or individual application developers? Yeah, so I think my answer to the previous question covered that. Some applications are perfectly fine to be a monolith. You can deploy an application using microservices methodologies too, but if the application is small and it doesn't need to be scaled horizontally, then you might not necessarily need a microservices environment. It depends on how you want to deploy it, also, because if you want to deploy things in Kubernetes, or if you want to deploy things in containers, then obviously you want to use a more lightweight language. Modern languages like NodeJS and Python, for example, are much more container friendly. Okay. What essentially is the difference between an Ingress controller and a service mesh like Istio? It's a good question. There is confusion around the two, and oftentimes we see people mixing them up, actually. With an Ingress controller, the idea is that it brings traffic in at Layer 7. It's an HTTP load balancer bringing traffic into your Kubernetes cluster. It works the same way as an NGINX proxy sitting outside or inside, and it does your TLS termination, load balancing, and so on and so forth. But it focuses more on north-south traffic, bringing traffic in, whereas a service mesh focuses more on east-west traffic, which is traffic that's distributed inside the cluster. So you might have an Ingress controller bringing traffic into a microservice. That microservice might send requests to another microservice. The Ingress controller has no visibility of that microservice-one-to-microservice-two request. With a service mesh, you could add another proxy within that layer to keep track of those traffic patterns, to add encryption to them, and a lot more than that. So: Ingress controller, north-south; service mesh, east-west. A few more questions here. Yes, there's a question here: in the networking world, BIG-IP is used for load balancing. Can NGINX be a substitute for BIG-IP? Yeah, good question. I think they're both slightly different in terms of how you want to deploy them. Yes, both solutions can do load balancing. NGINX, I often think, is more associated with the application, so it's closer to the application, whereas BIG-IP is usually more of the network entry point into your application portfolio. In the previous example, where we had a DNS service bringing traffic into an Ingress controller, we actually used both of them side by side. BIG-IP could be used as your external load balancer, you can add a security layer to that also, and then that's managed by the NetOps teams. And you could also have an NGINX Ingress controller or NGINX proxy that's managed by the application teams, doing different things like JSON Web Token authentication or more Layer 7 features, I suppose. But depending on the environment, you could use BIG-IP by itself, or you could use NGINX by itself, depending on what you need. Okay, there's a really good question here about the different use cases for using an Ingress controller versus an ingress gateway. That's a really good point, because Kubernetes is adopting a new type of resource called Gateway.
When you use an Ingress controller, you configure an Ingress resource in Kubernetes, which is a manifest file that is sent to the Kubernetes API and that configures the load balancer. With the new Gateway resource for Kubernetes, it's a different way of configuring the load balancer: rather than using an Ingress resource, you use the Gateway object, and there are plans to add support for that to NGINX and other Ingress controllers out there. What are the benefits? What are the different use cases? I would say, for the Ingress controller, if you write an Ingress resource in Kubernetes, oftentimes it's the same regardless of which Ingress controller you're using. Some Ingress controllers have more features than others, and in order to use those extra features, let's say a WAF policy or mutual TLS, for example, you might need to add an annotation in Kubernetes to extend the functionality, whereas I think an ingress gateway would allow for more custom or more advanced use cases like that. Is it mandatory to use an Ingress controller in Kubernetes? Technically, no, it's not mandatory, but it's recommended. Within Kubernetes, if you deploy an application container, that container is running within the internal Kubernetes network and there is no way to access the container externally unless you configure a way to get access to it. Within Kubernetes, how you do that is via a Service. You create a Service within Kubernetes, and that allows you to expose your application container that's running within that pod network. If you expose your application container within the pod network, then you are directly exposing your app with no proxy. That can be okay depending on the application. But let's say you have multiple applications in Kubernetes, which is the whole point of having a Kubernetes cluster, to allow microservices and apps to be spread across multiple nodes. You don't want to be exposing different pod IP addresses from each application or each microservice. You need to have some form of entry point like an Ingress controller to bring traffic into the application. That's one reason why an Ingress controller is recommended: it's to have one load balancer to distribute traffic to all of your pods. Second of all, you most likely have a DNS service for the application. So when a request comes in to your website's DNS name, the Ingress controller is responsible for resolving that and sending the request to the relevant application that matches the FQDN. That is something that would be a lot more difficult without an Ingress controller. So I would say I wouldn't recommend going without one, but it can be done. If you have a single application that runs as a single container and has built-in TLS termination, then maybe you can just expose it directly without an Ingress controller. But of course, that does depend on what you're trying to achieve. What makes a language container friendly? That's a good question. I don't think I have a definitive answer to that. I think it depends entirely on the dependencies of an application. If you're looking at a traditional application, it most likely has an external database. It might have external dependencies for the server side; there might be an app server. It can be quite difficult to containerize something like that. When it comes to having a database within a Kubernetes environment, it's difficult to have a database within a container and scale it. Containers are meant to be stateless, whereas a database isn't really stateless.
So I think, for something to be container friendly, it needs to be able to be destroyed and spun up again without losing data, and it needs to be lightweight. Some traditional applications aren't lightweight; they might require lots of libraries and dependencies, and they just don't fit well in a container because they're too heavy. If the container is over a gigabyte in size, then that might not be very container friendly, depending on the app. But it's a very good question. It's a broad discussion, but the first two criteria I thought of, I suppose, are that they're supposed to be stateless and that the dependencies and libraries need to be lightweight. Okay, Christina, I think that's all we have time for today. Thank you very much for your time. I hope it was useful, and please, if there are any questions, reach out via the contact page after the call. Great. Well, thank you so much to our speaker for his time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you're able to join us for future webinars. Have a wonderful day. Bye.