For more info on the next Devoxx UK 👉 https://www.devoxx.co.uk
The advent of Kubernetes, and in particular the Knative Serverless and Functions technologies, has opened up an amazing path to writing efficient, non-resident microservices. This session will explain and demonstrate how easy it is to take advantage of this new architectural paradigm and these technologies to write microservice applications that consume far fewer resources, allowing smaller Kubernetes/OpenShift clusters to handle many more workloads. The session will explain the ease of using the new abstracted CloudEvents technology as a bus for driving events to microservices that are deployed only on demand. It's a mind shift, but one that will open up huge opportunities for writing highly efficient and agile workloads.
Excellent. Hi everybody. My name is Ian Wilson. I'm a principal domain solution architect working for Red Hat. I've been at Red Hat for nine years now. Before I worked at Red Hat, I used to actually work for a living: I was a software engineer for the government for about 25 years, writing some interesting software that's probably on your phone at this point. Today I want to talk about next generation microservice development using Knative functions and cloud events, which is a bit of a wordy subject. The reason I get really excited about this is that this is the kind of technology I wanted ten years ago. Unfortunately, now I'm in a situation where I can't use it and you guys can, but I'm the guy that unfortunately has to sell it. But I'm really excited by this new stuff that's come out, especially around the functions side. I'm going to give you some slides first of all, and then hopefully the demo gods won't kill me and I can show you a demo of this stuff working. I've given this about three times so far and the demo has never failed, so that's an obvious indication it's going to fail this time. So, to start with, it's a bit of buzzword bingo time. There's not too much buzzword bingo in this one; I'm only going to be talking about Knative Serverless. I assume everyone here knows what Kubernetes is, what OpenShift is and what all those kinds of things are. Let's set the stage: where we are, where we were. The concept of legacy applications is more or less averse to agility. Basically, this is describing where we were ten years ago, building huge applications with all the functionality within them. When you wanted to change something, you had to rebuild the whole thing and redeploy it. And that was the kind of thing that made it averse to agility. You couldn't change individual aspects of it; everything had to change at once. And these were huge monoliths. And to be honest, we all loved these monoliths.
They were our babies; they were the things we made an effort to actually build, and we lived and died with them. Where we are, ish. And I say where we are ish because this is where we'd like to be right now. The concept of functional decomposition: we've moved to microservices, where you can break those legacy applications down into multiple little components. You can change those components without affecting the rest of the system, which means you can build them separately. Moving to sausage machines: that's this concept of cloud native, where you build an application in which everything is injected using dependency injection, and these sausage machines can survive being destroyed and recreated because all of the information and context is actually held external to them. And it's a deluge of new technology and approaches. I don't know about you guys, but every day I get up and look at the web or look at the emails I get, and it's full of names of things I've never heard of. When I was bored giving speeches to customers, I used to drop random things in that didn't exist, just to see if they could detect it. I got caught out with one customer because I was talking about a new way of building APIs called Apache Fourskin. So I don't do that anymore. Where we want to be: now, this is the key thing about today, this on-demand instantiation of microservices. What I mean by that is when you break something down into microservices, when you build a system using microservices in the current way of doing it, the current architecture, for example with containers, they have to be active all the time. The good example I give of this is: let's say you've got a system that's got ten microservices, and one of your microservices is called every second. The other nine are called once every 24 hours.
If you're standing that up on, let's say, Kubernetes today, you need ten active pods, because those pods need to be there when that traffic comes in, even if it comes in once a day. So you've got your system, which is ten microservices. In actuality, it's ten pods that are running all the time. And those nine pods that are called once a day are consuming resource for the entire day. Now, the whole concept of Knative Serverless is that you've got this ability to actually instantiate an application via a pod and then instantly scale it down to zero. And that sounds like, so what, it doesn't make sense. What Knative Serverless actually does is retain the ingress point. It's got a special ingress controller within the OpenShift system, within the Kubernetes system. And when traffic actually arrives at that ingress point, it spins up the container, and it's an almost instantaneous respin. So what that allows you to do, effectively, in the example I was talking about, is have ten microservices, one up all the time and nine in the Knative Serverless state. So for the majority of the day, when you've got the system up and running, you only have one pod running, and the other pods only pop up when they're needed. So, as there are no Red Hat salespeople in the room, because I always get told off when I say this: it means you need a smaller cluster. You don't need these huge clusters. You can have a smaller cluster because you're only using the resources when you need them. And that helps with efficiency, and we're all about efficiency at the moment. We've also got this concept called Knative functions, and this is the new stuff we've been working on. If anyone actually saw the Summit demo we did earlier in the year, the Red Hat Summit demo, it was all battleships on your phone; I wrote the entire back end for that using Knative functions. And Knative functions are basically a way that a developer can build these functions without having to know Kubernetes. It does all the wiring for you.
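For reference, a scale-to-zero Knative Service of the kind described here looks roughly like the sketch below. The name and image are placeholders, and a minimum scale of zero is the default, so the annotation is shown only to make the behaviour explicit.

```yaml
# Hypothetical Knative Service; name and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: rarely-called-service
spec:
  template:
    metadata:
      annotations:
        # "0" is the default: the service scales to zero pods when idle,
        # and the retained ingress point spins it back up on traffic.
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: quay.io/example/rarely-called-service:latest
```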
It sets up all the Knative stuff for you. You just have to write the code, and then you run a command called func, which is part of the kn functions tooling, which actually builds the image, deploys the image, and does all the wiring to make it a Knative serverless one. And that's the point of Knative functions. That's the big area I'll talk about today, which makes it much easier to actually use this new style of technology. But enough blurb, because I know we're all developers and I've given that first bit of speech a thousand times. Let's talk tech. I love the way that this works, and I love it because it's simple. I've been a Java programmer since 1996; before I was a Java programmer, it was called Oak. If anyone remembers Oak, I am older than I look. And I've always had problems with understanding the complexities of things like Kafka and AMQ and streams; it all seems like a massive overhead. I just want things to happen. With CloudEvents, which underpin the technology I will talk about today, it is incredibly simple. You have the concept of a producer. That producer creates what we call cloud events. A cloud event, and I'll describe it on the next slide, but I'll give you a quick overview now, is a very, very simple payload and a set of headers. And that's all it is, in actuality. And I'm going to show you the cheat behind the scenes: the broker itself you can talk to by sending a simple POST request to it. So you don't have to understand APIs. You don't have to build all those kinds of complex interactions with Kafka and things like that. I'm not dissing Kafka; as of this morning, I probably understand Kafka for the first time in three years. And one of the wonderful things about these brokers, which are actually the central point the cloud events are handled in, is that you can put backers on them. And I'll show you a basic example that's ephemeral: the broker just sits within OpenShift itself, receives the events, and then sends them on.
But you can back them with things like Kafka. And the difference is, if you've got an ephemeral broker, any cloud event coming in is forwarded on, and I'll explain how we do that in a second, and then it's lost. With a Kafka backing, you can actually resend previous events. So for the cases where you want a system with persisted state that is dependent upon temporal activity, where you don't want to lose events, you can have a backer on the broker that's based on Kafka. Basically, when the event is pushed to the broker, you have what are called triggers. And triggers are triggered by the type of the event. It's one of the header fields, and that type basically allows you to push the event to a consumer. And it is as simple as that. And what's nice about it, because it's based on the Kubernetes side and because everything is abstracted into specific Kubernetes objects, is that the triggers are independent of the consumers. So you build the consumer, the consumer receives the cloud event, it works upon it, it does whatever it wants to do. But the actual triggers themselves are Kubernetes objects. So they're very easy to create and very easy to change. So you're not bound by configuration in the same way you are if you're building an event-based system based on messaging and on things like Kafka. It is very simple, and I like simple. And the reason I like simple is I like writing code to do things. I don't know about anyone else in the room, but I'm getting tired of writing 90% boilerplate code. I want to write code that does fun stuff, and I don't want to worry about all the wiring. And this takes that worry away. So, the anatomy of a cloud event; I'd say we get a bit technical at this point. That URL at the top defines it in detail. It's a very simple specification, and that specification hasn't really changed since we started developing this. There have been a couple of changes based on some feedback, in fact some feedback I gave, in terms of the information that's needed by the broker.
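A trigger of the kind described here is a small Kubernetes object along these lines; the broker, type and service names are illustrative.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: techtalk-trigger        # illustrative name
spec:
  broker: default
  filter:
    attributes:
      type: TechTalkEvent       # matched against the ce-type of incoming events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-reader        # the consumer Knative service
```

Because the trigger is its own object, it can be created or changed without touching the consumer it points at, which is the decoupling being described above.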
But it is a pretty stable specification. It's a very simple specification. And to put it simply, it's just a blob with encoded attributes and a payload. And that payload can be anything. And this is one of the things I asked for when I was talking to the engineering team about this. What they used to have was payloads in JSON or other defined formats. And I'll tell you a little story about this. What they built was some libraries you could actually use as part of the API on the consumer side to handle the events. And one of the actual libraries was a JSON decoder. And what would happen was, if you pushed JSON as the payload, and that JSON was slightly wrong, because who writes JSON right the first time? It's always wrong somewhere. It would fail the conversion and it wouldn't run the consumer. So the actual conversion of the JSON into a bean is done first, and if that fails, it stops the consumer. And I don't like that, because in the real world you're going to get blips, you're going to get byte problems, you're going to get damage to those events, and you want your consumer to get that event even if it's slightly broken, and then process it. So I bounced it back to the people who were writing this and said, can we have it as a blob as well? So there's another option within the actual APIs, and I'll show you some source code for this, that allows you to just read it as a byte array, because I'm a bit old school. I like just getting things as bytes and then reformatting it myself. It's that old thing about genericisation and specialisation; I think I made up that first word. But in the old object-oriented days, you'd find a specialist would write a bean that had getA, getB, getC, and a genericist like me, who didn't like writing too much code, would write a method that said get(String name), and then in that I'd say: if A, do this; if B, do this; if C, do this.
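Since a matched event ultimately reaches the consumer as a plain HTTP POST, reading the payload as raw bytes, as described here, needs nothing framework-specific. A stdlib-only Java sketch of such a consumer, with invented names and an arbitrary port:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class EventConsumer {

    // Pure part: what we do with the payload once we have it as raw bytes.
    // A broken payload still arrives; no decoder gets a chance to reject it.
    static String describe(String type, byte[] body) {
        return "received " + type + ": " + new String(body, StandardCharsets.UTF_8);
    }

    // The broker delivers a matched event to the service as an HTTP POST,
    // with the attributes as ce- headers and the payload as the body.
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            String type = exchange.getRequestHeaders().getFirst("ce-type");
            byte[] body;
            try (InputStream in = exchange.getRequestBody()) {
                body = in.readAllBytes();
            }
            System.out.println(describe(type, body));
            // 200 with no body acknowledges the event; a reply event could go here.
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

This is only the shape of what arrives; in the talk the equivalent work is done by the CloudEvents API libraries with the byte-array option described above.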
That genericist approach is what led us to actually have the blobs in there, and it's really, really useful. So, as I said, the nice thing is the payload can be in any format. Then there are the attributes that are required. There are a number of additional attributes that aren't required, but these are the ones that are. You've got id, which is the event identifier. That's contextual, so it's generated by the producer and consumed by the consumer; it's not preformatted. You've got source, which is the context, which is a URI written by the producer. This is the important thing; I'll jump over the last two first and tell you the important thing about this. The combination of the id and the source must be unique for each distinct event, so that's the identifier by which the broker decides that it's a unique event. So if you're writing a producer and you send the same id and source combination, the broker won't forward it on to the triggers, because it's deemed to be an event that's already been processed. So when I'm normally writing this and playing with it, I normally have a counter. So I have a URI which defines an identification contextually from the producer that produced it, and then a standard counter, so you never get the same id. In terms of the attributes, you also get the specification version. Again, the specification hasn't changed very much, but if you get a cloud event that doesn't match the specification version of the broker, it won't be handled. And you get type, which is a producer-defined event type. That's the important one. The id and the source actually identify the event; the type is what's used by the broker to forward it on with those triggers I was telling you about. So, a good old-fashioned example. I'm not sure if you can read that, and I'm happy if you can't. I don't like APIs, because I'm old school. So what I did in this example was simply build an HttpURLConnection in Java and dump a POST down the line.
And you can see what I did was just set request properties for the actual attributes. The ce-type is the type, then ce-id, ce-specversion and ce-source; the partition key was something to do with Kafka. I can tell a humorous story afterwards, if anyone's interested, about why Kafka failed badly with that. And then all I've done is take my object, which is called output, encode it in JSON, because I wanted to push it as JSON, and dump it down the line as a POST request. So in terms of using it, it's incredibly simple. If you can write a piece of code that produces a POST request, you can generate a cloud event. I've actually got an application I'll show you, which I call the Cloud Event Emitter, which allows you to emit these events to anywhere within the system. The broker is incredibly simple, and you can see the YAML that actually defines it; that generates a broker. Now, the broker is a namespace-bound channel for siphoning and distributing cloud events. So for those who know Kubernetes and OpenShift: this is within a project, it lives within a namespace, and it's namespace-bound. The broker has filtered subscribers to which it emits the events. So when you throw the event into the broker, it compares the ce-type to the triggers it's got, and for any trigger type that matches that ce-type, the event is emitted to it. Under the covers it's actually another POST, but we'll pretend it isn't; we'll pretend it's something new and funky, but it is just an emitter. The subscribers are set up using triggers, which cater for one ce-type each, and that's one of the downsides of the system at the moment: you can't have a trigger that looks for multiple event types, because event types are deemed to be unique. At some point in the future it'd be nice to have multiple triggers for the same consumer, but we don't have that at the moment. And the brokers can be backed with technologies for persistence of events.
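The broker YAML mentioned here really is minimal; an ephemeral broker in a given namespace is roughly this (the namespace name is taken from the demo later in the talk).

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default            # the conventional broker name
  namespace: knative-demo  # brokers are namespace-bound
```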
As I said before, you can put Kafka on the back of the broker, such that when the event arrives it can be persisted and then emitted, and if you lose it, you can actually replay it through the Kafka side and regenerate it. More on brokers: under the covers, the brokers can be reached, if you don't use the CloudEvents API libraries directly, as an internalised URL within OpenShift. This is a nice thing about it. If you look at the actual format of the name, it's using the internal network, svc.cluster.local. All of the brokers are served from a central ingress point. Now, that can be deemed to be a bottleneck, but you can scale it and all those kinds of things. But if you look at the format of the URL, it's that ingress point, which happens to be the ingress point for the knative-eventing project, then the target namespace, then the broker name, and it's that simple. So when I'm emitting things via the application itself, I just build that URL based on the namespace and the broker name. And the broker name is normally default, so I don't bother changing it. And if you're a Luddite like me, you can simply send POST requests to the broker directly. There are a number of APIs you can use for this. There's an API for Node, there's an API for Go, there's an API for Java, and that wraps up all the nice annotations and stuff. But because I'm so old and out of touch, I still think annotations are a new thing, which again means you shouldn't take my word for it. How a broker looks in OpenShift: I'll show you this running in the demo itself, but OpenShift actually renders it on the topology page as that broker icon on the left-hand side. What you're seeing here is actually the demo I'm going to show you. The top right one is actually a Camel K integration that I've written, and that is driven by an event type; I think it's TechTalkEvent. The bottom one, if you can see the icon right at the bottom, is a Knative function. That's the stuff I'm going to talk about.
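Building that internal URL and emitting an event needs nothing beyond stdlib Java. Here is a minimal, illustrative sketch; the class and value names are invented, and the header names follow the CloudEvents HTTP binding described earlier.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CloudEventEmitter {

    // Central Knative eventing ingress; every broker is served from here.
    static final String INGRESS =
            "http://broker-ingress.knative-eventing.svc.cluster.local";

    // The broker URL is just ingress + namespace + broker name.
    static String brokerUrl(String namespace, String broker) {
        return INGRESS + "/" + namespace + "/" + broker;
    }

    // POST a payload with the required CloudEvents attributes as ce- headers.
    static void emit(String namespace, String broker, String type,
                     String source, String id, String jsonPayload) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(brokerUrl(namespace, broker)).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("ce-specversion", "1.0");
        conn.setRequestProperty("ce-type", type);     // used by triggers for routing
        conn.setRequestProperty("ce-source", source); // id + source must be unique
        conn.setRequestProperty("ce-id", id);         // e.g. a simple counter
        conn.setRequestProperty("content-type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsonPayload.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // drain the response
    }
}
```

From inside the cluster, something like emit("knative-demo", "default", "TechTalkEvent", "/my-producer", Long.toString(counter), "{\"text\":\"hello\"}") would then drive whichever triggers match that type.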
How to build that very easily. That's a Quarkus function that uses a couple of the Quarkus libraries, and I won't spoil it, because I use it as a pun on the next slide, to actually do the emitting of the cloud event and the processing of the cloud event itself. You'll notice as well that both those applications are scaled down to zero. And that's the point behind this: because they're Knative services themselves, they are not scaled up until an event arrives, and you get that wonderful efficiency. When you're running this, you're not consuming any resources on the cluster itself until those events are actually pushed to the consumers. Getting funky with it; that pun actually hurts to type. Quarkus has a library called Funqy, which gives you annotations within the Quarkus code. So you can do things like specify what cloud event is going to be emitted, you can do things like automatic translation of the payload, and things like that. Knative functions are a quick and easy way of writing and deploying resident-on-request, cloud-event-driven microservices, and they wrap all the Knative wiring up for you. So you don't have to know Knative, you don't have to know the internals of the ingress points and all that kind of traffic analysis stuff; it does it all for you. What I've got here underneath is the YAML that I use in the actual Knative function. So the Knative func command takes that piece of YAML and converts it. Very, very quickly going through it: I've got name, my function. Namespace isn't listed, so I can just push it into whatever namespace is contextually active when I'm actually running it from the command line. The runtime is Quarkus; we've got a number of different runtimes for it. I love Quarkus because it's finally made Java relevant again, and I love it to pieces. The image is where the kn func build delivers it to.
So, how it actually works is it uses a base framework image, which has all the framework you need to run the Knative service in it, the Quarkus libraries and all those kinds of things. And it does a build of the actual Quarkus source code you give it and produces a composite image that it delivers to that image location. That image is then used to serve the application into the Knative service that's running within OpenShift or Kubernetes. And you can see underneath I've got builder: default, and what you have is a builder map that allows you to pull different framework images for building that function. Now, you don't have to know that; you can just go with default, which does a standard Maven build of the Quarkus stuff. But you can see from the third option, I think it is, that there's a native builder, and that's the Quarkus native builder, which actually does a native build and makes it much faster; it compiles all of the stuff to binaries. You've got the ability to put environment variables in there as well. When you actually do the kn func deploy, and I'll get onto it, there is a minor problem with that, but I'll tell you about it in a second. So, wiring the services themselves to the broker, and this is the trigger stuff I was telling you about. The triggers are incredibly simple, incredibly simple to write, and you can see all it does: you give it a specification for the broker you're going to use, a filter, which is the attribute that is the type, and the subscriber, which is the endpoint service. And the interesting thing is, if you look at the API version, it's serving.knative.dev. Even though this is Knative eventing, effectively under the covers it's using that service endpoint via the serving side, and that's the way it kind of gets round it. So when it gets the actual request itself, it compiles it into a standard service call, and that service call drives the creation of the actual pod. Writing Knative functions.
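Pulling the fields just described together, a func.yaml for this kind of function looks roughly like the sketch below. The exact schema varies between versions of the func tooling, the image path is a placeholder, and the builder image references are elided. Deploying is then a matter of running kn func deploy in the project directory, against whatever namespace is currently active.

```yaml
name: quarkus-fn                 # function name; namespace comes from the active context
runtime: quarkus                 # one of several supported runtimes
image: quay.io/example/quarkus-fn:latest   # where the built composite image is pushed
builder: default                 # picked from the builder map below
builders:
  default: <standard Maven builder image>
  native: <Quarkus native builder image>
envs:
  - name: EXAMPLE_FLAG           # illustrative environment variable
    value: "true"
```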
You probably can't read that, which is good, but this is an example that's just got a couple of the Quarkus libraries in it. The interesting thing with this, and remember I can walk around because I'm not actually wired to anything, is that I've got an annotation called Funq, F-U-N-Q, and a cloud event mapping. Now, the reason I've got a cloud event mapping: if you remember, I said the trigger which actually maps the event into the Knative service is abstracted, so the subscriber, the consumer, doesn't need to know about this. But what this function is actually doing is receiving an event and then emitting an event. And this is the key thing that makes it brilliant; this is why I get excited about this. Because you could strip down an existing legacy system and build it as a number of microservices that call each other, because these microservices can actually emit cloud events as well as processing them, and this is part of the demo I've got for you. So I'm using the Funqy library to actually make this emit an event. This is emitting an event of TechTalkEvent, and all it's doing is adding a bit of text down the line. So, before the demo, gods willing, why this is huge for devs. I love the idea of functional decomposition, of being able to break a system down. It's what we're meant to be doing with microservices in the first place, and we've done it to a certain extent; lots of people do it and all this kind of stuff. But we're limited in the ability to use microservices in Kubernetes because of their having to be resident all the time. So you get to a situation where, if your system is made up of 100 microservices, you've got 100 separate pods, or if you've got multiple replicas of those pods, you've got a huge number of pods, and you get this kind of overhead of having to keep them up, having to keep them maintained, and all those kinds of things.
So there has been some kind of fettering or control on the way that we can actually do functional decomposition. This gives us an excuse to do it right, because we can break things down into these very small microservices, and they're non-resident, so they're not taking resources all the time. So you could have 1,000, 10,000, 100,000 of these services on a much smaller cluster, because they're not resident. So for me, that gets me excited, because I'm thinking, well, I could write hugely complicated systems and not have a massive footprint on the Kubernetes and OpenShift side. But again, I haven't been a developer in anger for nine years. Why Knative as a choice of hosting? Again, that's what I was saying: it's an efficiency thing. You can get a lot more bang for your buck, a lot more of your services running in a much smaller area, and it just feels cleaner, it feels way more efficient. And what might future systems look like? Well, you're going to get to a situation where you can functionally decompose these things down to very small microservices, or just nanoservices, and you've got the ability to change them whenever you like. The one thing I didn't tell you about, which is really cool, is that not only do Knative services scale down, they scale up automatically. And I'll give you a good example of this. The demo we wrote for Summit involved taking a huge number of events and processing them across three different clusters. So we were testing it by pushing a million events through at once. And the beautiful thing about the Knative services is that not only do they scale down to zero, if they get a flood of traffic, these things auto-scale upwards. So the minute we were pushing a million events through the system, it was ten pods, 20 pods, 25 pods, 30 pods, 35 pods, based on the traffic that was going through it. And that's done completely by default. You don't have to set anything; the Knative service will automatically auto-scale.
And again, it takes me back to my excuse for actually using this in the first place. You don't need to know the technology; it simplifies it, but you're still getting all the power of this technology without having to know it, which I really like. I did take a couple of slides out, because I was feeling a little bit guilty about them. And the slides I took out were the problems with Knative services. And as we're developers and we're all friends here, I'm going to tell you what the problems with Knative services currently are. The first problem, which we're currently working on, is that you can't use persistent volumes. Does everyone know what persistent volumes are? So persistent volumes are basically resident pieces of storage that you can attach to the back of pods. They are the things where you put the contextual information, so that when your application is lost, it can restart instantly and still have the same information and context it had before. We decided not to use PVs, or not to allow PVs, with Knative services because of the rapid way in which these things come up and come down. I think that was a mistake. And the reason I think that was a mistake is because you're losing a lot of functionality. And I'll give you a good example of this. I wrote a resident application that sat alongside the Knative services, resident at all times, with persistent back-end storage on it. And I would actually call that directly from the Knative services. So when the Knative service was called, it would do a call into my resident service to store a piece of data, and it would do a call into the resident service to pull a piece of data. So it was abstracting the storage. But I then got the overhead of having a resident application for my storage engine. And the way to think about it nowadays is that there are ways to handle persistent storage, but you're not going to be able to do it directly within the Knative services themselves.
It probably will come: when we get to a situation where the PVs can be fired up much quicker, then, yeah, they'll bring them into action. The other one is a slightly more interesting one, and that is that, if anyone has played with Knative services, we have this concept of what we call a revision. And a revision is just another copy of the Knative service with a change to the environment variables or a change to the image it is built from. These revisions are immutable. And what that means is that when you create the revision itself, with the environment variables, with the settings, you can't change them. So if you go and use kubectl or oc edit or something like that, it won't allow you to change those things. It makes it physically immutable. The problem we've got is that the kn func stuff which builds these functions doesn't cover all of the attributes that you can give to a Knative service. That doesn't sound too bad, but one of the ones it doesn't cover is the default timeout. Now, what we mean by the default timeout is that when you pass traffic into the Knative service, it will spin up and then wait for that timeout, and if it doesn't receive any traffic within that timeout, it scales down; and that timeout is currently something like five minutes by default. It's very easy, once you've created a Knative service, to create a new revision with a different setting for it, but you can't do it by default when you use a kn function. What that means in English is that with the first revision you actually deploy, if you want to add anything to it, you have to add another revision. That doesn't sound too bad, but when I was developing for the Summit demo, I created 400 revisions of my app, and even though 399 weren't active, there were still Kubernetes objects for them. So when I did an oc get services or kubectl get services, I had 400 services in my project. They don't take a huge amount of space, but you're still filling up etcd.
So it would be nice if we could actually have defaults in the kn func tooling to add those additional attributes that are not currently covered. Right, so let's see if the demo gods like me today. Can everyone see that? Is it too small? So this is going back to what I was telling you about. I've got a single namespace called knative-demo in which I'm running these applications. I've got a resident application called the Cloud Event Emitter. It looks like this: a very sad 1996 stylesheet that I still use 25 years later. And this allows me to actually put in the broker address, the ce-type and the payload, and just emit it. There's a vaguely humorous story about this. When I started using Knative functions for the Summit demo, they were still pretty basic and pretty early in the development cycle, and there was no way to manually emit an event into the system. You had to write your own application to do a POST or do those kinds of things. So I wrote this application. I went back to the Red Hat engineers and said, isn't this cool? Don't you love it? Sweet, I can emit events. And they went, yeah. And then they went and wrote an extension to the kn func command to do it for you. And it was like, well, couldn't you have done that before I wrote this horrible web page and had to show this horrible web page to people? But anyway. So what this allows me to do is actually emit the events themselves. If we go back to the topology, remember when I told you about the triggers, for example. So this is the broker. It's currently called default. There's nothing behind it, so it's a very simple piece of YAML; it's just a channel throughput for the cloud events. If I click on the trigger, you can see this is the trigger. The subscriber is the Quarkus function, so it points to a Knative service ingress endpoint, and it's filtered by type, quarkus-event, again. And a slight bug, which I've raised, is if you've got multiple subscribers.
So if you've got multiple endpoints for that, it only ever shows you the last one that was registered. So if I've got another one which is reading the same kind of event, I'll only see the one that was registered last. What I've also got is a Camel K integration. So it's a very short piece of Java that I've deployed using Camel K. Camel K is a lovely wiring technology, and I just wanted to show two different technologies. That again has a trigger point, and that trigger point, which goes to the Knative service event-reader, is of type TechTalkEvent. So what I can do, and I have to do a lot of quick clicking here to show you it: I go to the cloud event emitter, I've got a type of quarkus-event, this is the payload, here is the content, and I can emit it. If I go back here very quickly, you'll see that this service is instantly starting. So that event has arrived at the broker, the broker has decided via the triggers which service to send it to, and it has started it. If I click on it, I can go and look at the logs, and you can see I've received the payload. So it's a very simple example, but you can see that the way that Knative function is driven is by the event flowing through the broker. If I go back, you'll see that the Camel K one has started as well. And the reason for that is that the Quarkus function, as I showed you in the example, emits an event. So when I sent that event directly to the Quarkus function, what it did was output the payload it got, and it emitted a TechTalkEvent back to the broker; emitting that event back to the broker spun up the Camel K one. And if I'm quick, because I normally talk too much and by the time I go back they've unloaded themselves, you should see at the bottom the message, hello, that hit the Quarkus function. So it's a pithy example and a very simple one, and I wish I had time to actually write a more complicated example.
But if you can imagine the kind of potential you've got with these things: you can write an entire event-based system without having to learn the intricacies of Kafka and all that kind of stuff, because you can focus on building the cloud events and on building the functions that serve them. And in the background, your infrastructure people can back the broker with Kafka, and you'll get all of the enhancements of Kafka: the replayability, the ability to use... I keep forgetting the word. I call them fragments, but there's another word for them. It'll come to me in a moment. What was that? Partitions. Sounds about right. I've got one of those holes in my brain where I hear the word and it's like, that's the right word, and then three seconds later, what was the word? Shutting down. So what you're seeing now, if we pop back to the topology, is that these functions are winding themselves down. What has happened is that the timeout set for the Knative service has expired without additional traffic coming into the ingress point, and they've now gone away. So going back to what I was saying about next-generation systems: this makes it incredibly easy to build microservices that don't take a huge footprint. Now I can show you, and I haven't prepared this so you'll have to bear with me, but if I go to my terminal, can anyone read that? And I'm hoping people say no. And the reason, and I'm going to get shot by the people at Red Hat, is that I don't want Docker on my Mac. I don't like Docker on my Mac. My Mac is nice and clean, I like doing development on my Mac, and Docker seems to be a little bit naughty. So what I've got is effectively a Fedora virtual machine that I'm running on my Red Hat MacBook using VMware. Again, I hope there are no Red Hat people in the room, or I'm going to get killed for that one. The reason I do this is that the kn func tooling natively uses Podman or Docker.
So you have to have Podman or Docker on your machine to do it. What I've got here... if I cd into... wrong one, I know, people. You can't see that, can you? So if I do an alias ls=ls to get rid of the colour schemes, that's better. So I've got a Quarkus function directory that's got all my Quarkus source code in it and the Maven file that builds that Quarkus thing. I've also got a Camel K directory which has my Camel K stuff in it. Because I'm nice and open, I'm going to show you what the Camel K looks like, and it's a simple app, if people can see it. And what's lovely about this, because it's Camel, and Camel is nice for doing this kind of wiring stuff: from("knative:event/techtalk-event"). And that's all I need to do on the Camel side for it to be able to consume an event driven by the broker. As long as there's a trigger connected to this, it will send it across. But what's nice about it is, when I use the Camel K command line, which is K-A-M-E-L, kamel, I can't even spell it; when I use kamel to run it, it automatically creates the trigger as well, based on that. So that from("knative:event/techtalk-event") physically creates the trigger for you. So all you need to do to play with the event you're getting is write code after the from part. I've redacted some things; I just log the received body, which dumps it out to the log. But you could do anything in there: you could emit another event, you could call some standard storage, all that kind of stuff. What I love about it, and I keep going on about this, is that it's very easy to start with. You're not spending your time worrying or having to think about how these things work; you can just go ahead and write the cool stuff you want to do. I want to show you the Quarkus thing as well. So I go to the Quarkus one.
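The "two lines of Camel" being described might look roughly like this. This is a sketch, not the speaker's exact file: the class name is assumed, and it relies on Camel K's Knative component being wired in by the `kamel` CLI at deploy time.

```java
// Sketch of the Camel K integration described above (names assumed).
// "kamel run EventReader.java" builds this in-cluster and also creates
// the Knative Trigger implied by the from(...) endpoint.
import org.apache.camel.builder.RouteBuilder;

public class EventReader extends RouteBuilder {
    @Override
    public void configure() {
        // Consume CloudEvents of type "techtalk-event" delivered by the broker,
        // then just dump the payload to the log.
        from("knative:event/techtalk-event")
            .log("received body ${body}");
    }
}
```

Anything Camel can do — another `to(...)` step, a JDBC call, a transform — can be chained after that `from`, which is the point the speaker makes about how little wiring you have to own yourself.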
The interesting thing in here is I've got something called a trigger on the function, and that might look similar: that's the trigger I've used. The difference from the Camel stuff is that when I do a kn func build, it compiles the Quarkus project using the builder I've asked it to use. In fact, I'll show you: if I cat the func file, you'll see that I'm using the default builder, which is the Boson Quarkus JVM builder image on quay.io. It's doing a JVM build. Boson is the upstream project for Knative functions. The reason I mention that is that you don't need to know what the Maven file does, because that's all handled by the framework. In fact, if you've used Quarkus, which I assume most people in the room have: go to the Quarkus website, choose Knative Functions, click download, bang, it generates all of this for you, and you can just focus on writing the source code. If I go to the source code, it's beautifully simple. What you've got here, effectively, is a @Funq annotation and a @CloudEventMapping with responseType = "techtalk-event". And I'm using Uni, which is part of Mutiny, to asynchronously push that cloud event. So what it does is return Uni.createFrom() with an emitter, building the response from the input cloud event; buildResponse just builds the output packet and pushes it to the emitter. So the boilerplate you've got in there is very simple, and you get it all written for you by the Quarkus website. And that's what I love about it, because I don't know about you, but I spend too much of my time writing bad boilerplate that doesn't work, and I've had to debug boilerplate instead of debugging my source code. So the main message, which I've probably said a number of times, is that it's just so easy to use and so powerful at the end of the day.
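The Quarkus function being read out is a Funqy Knative Events function. A sketch of its shape, with the event-type names and payload assumed from the demo rather than copied from the real source:

```java
// Sketch of the demo's Quarkus function (assumed names): consumes a
// "quarkus-event" CloudEvent and asynchronously emits a "techtalk-event"
// back to the broker, which is what wakes the Camel K integration.
import io.quarkus.funqy.Funq;
import io.quarkus.funqy.knative.events.CloudEvent;
import io.quarkus.funqy.knative.events.CloudEventBuilder;
import io.quarkus.funqy.knative.events.CloudEventMapping;
import io.smallrye.mutiny.Uni;

public class Function {

    @Funq
    @CloudEventMapping(trigger = "quarkus-event", responseType = "techtalk-event")
    public Uni<CloudEvent<String>> function(CloudEvent<String> input) {
        System.out.println("received payload: " + input.data());
        // Uni (Mutiny) lets the response event be pushed asynchronously.
        return Uni.createFrom().item(
                CloudEventBuilder.create().build("hello Devoxx from the Quarkus function"));
    }
}
```

The `responseType` on the mapping is what stamps the outgoing CloudEvent's type, so the broker's triggers can route the reply without the function knowing who consumes it.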
The problem I've got with it is that people aren't using it at the moment, and I think that's because they're unaware of it. I only became aware of it because I was involved in the Summit demo. I didn't even know Boson existed as a project at that point. But this to me is incredibly powerful, and if I had more spare time I'd be writing all my systems like this. That was basically it in terms of the demo and the presentation. As my Red Hat colleagues normally say, feel free to ask me the hardest questions you possibly can. Unusually, I'm not hungover, so I might not be able to answer them, because I can normally answer them when I'm terribly hungover. Not being hungover means I've got a lot more stress, so I probably won't be able to answer them. But anyway, has anyone got any questions? I don't know how this works. So: can Camel K be used to write messages to a database? Yes, there are a number of different extensions within the routes of Camel K which are already there. The thing about Camel K is that it comes with a number of packed integrations. I had another example which I used beforehand, where I was using the Camel K route for Telegram. So I was pulling Telegram messages off a Telegram channel, processing them as a Knative service, playing around with them, pushing them out and so on. I used it on a Telegram channel at work and then got told off, because I come from a background of snooping, so they saw it as me monitoring what people were doing. I shouldn't have called it Watchman, which was the big giveaway. I also wanted to call it Nannybot, but they didn't like that either. But yes, you can use any of the integrations that are currently offered on the Camel side. What I've done with the database side in the past is I've used Quarkus with things like JDBC and Hibernate directly from within those functions to talk to the database itself, but you can do it within Camel.
The lovely side of the Camel stuff, as I showed you, is that it's like two lines of source code and I just use kamel run. There's no compilation step on my side; it builds the integration within the OpenShift platform itself, and it's much, much quicker to develop and push out. Does that make sense? Any other questions? Over there? First: you can use Docker, you can use Podman, you can use whatever you like. The only thing to be aware of is, if you're using Docker or Podman and not Kubernetes or OpenShift, you're going to have to simulate the cloud event stack, the way in which the cloud events are emitted into the system. Because what it's doing under the covers is building the cloud event, doing a POST, and all those kinds of things. You'll have to build a harness around it and simulate the broker. But that's the only thing you have to do. To be honest, it's much easier to go down the OpenShift and Kubernetes route because it's all done for you; you don't have to build your own test harnesses. But no, you don't have to use Kubernetes or OpenShift to test these things in anger, if that makes sense. I'm not allowed to comment. No, I am allowed to comment: I'm not a fan of CRC. I'm not a fan of CRC because it's a wonderful technological experiment to try and squeeze the whole of OpenShift into a single virtual machine, but what you're doing is giving people the impression that OpenShift is slow, because with CRC's performance you really need a grunty laptop to run it. I've talked to too many people who said, we tried CRC, we tried to run a couple of things, and it was just so slow that we think OpenShift is slow. And it's like, no, stop, stop, please. OpenShift is an enterprise system where you run multiple nodes with huge amounts of RAM and all that kind of cool stuff. But you could use CRC, yeah. Does that answer your question? Any more questions? I can't see anyone at the back. Yeah, it's a very good point.
The point was: what happens when the message is corrupt? This is where I've got a problem with it. The bit I've got a problem with is that these things are transient; the functions themselves are transient. So if you throw a message into it, and I fell foul of this, when you're using JSON, the first thing the underlying API does is put it through the JSON deserializer. I can't remember what it is; Jackson or JAXB, I think it's Jackson. If it fails, it throws a stack trace, and that stack trace stops the processing of the message. The problem is this: that service goes away after the timeout, and when it goes away after the timeout, you don't have the logs. One of the things I did to test this was to write another application, again called Watchman. What Watchman did was allow you to persist the logs. So when the message failed and it short-circuited and dumped the stack trace in the catch, I wrote out the information that was in the exception to Watchman, which stored it in a persistent volume. That's a lot of overhead. And that's why I went back to them and said, can we have it so that the payload is not automatically processed? Because what it does at the moment is try to identify what's in the payload. If it detects JSON in the payload, it automatically pushes it into Jackson. If Jackson fails, it short-circuits and dumps the thing. That's why I asked for it to be in bytes. So what I've got is all my functions that consume cloud events take byte[] as input. Then I decide whether it's JSON and do the JSON processing myself, and if the JSON processing fails, I can still partially respond to it. That's why I raised it with them in the first place, and that's why we've now got the ability to have a flat byte payload. But I'm old school.
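The defensive pattern being described — take the payload as raw bytes, decide for yourself whether to parse it, and still respond when parsing would fail — can be sketched in plain Java. The class and method names here are hypothetical, and a real function would hand the JSON branch to a proper mapper like Jackson inside a try/catch:

```java
import java.nio.charset.StandardCharsets;

public class PayloadGuard {
    // Handle a CloudEvent payload delivered as byte[] instead of letting an
    // automatic JSON deserializer short-circuit the function with a stack trace.
    public static String handle(byte[] payload) {
        String text = new String(payload, StandardCharsets.UTF_8).trim();
        if (text.startsWith("{") && text.endsWith("}")) {
            // Looks like JSON: a real function would parse it here (e.g. with
            // Jackson) inside try/catch and fall through on failure.
            return "json:" + text;
        }
        // Not JSON (or corrupt): still produce a partial response instead of
        // dying silently before the pod scales back to zero and loses its logs.
        return "raw:" + text;
    }

    public static void main(String[] args) {
        System.out.println(handle("{\"msg\":\"hi\"}".getBytes(StandardCharsets.UTF_8)));
        System.out.println(handle("not json".getBytes(StandardCharsets.UTF_8)));
    }
}
```

The design point is that with scale-to-zero, any unhandled exception path is effectively invisible once the pod is reaped, so the function itself has to own the failure handling.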
I don't trust things. No, it's because it's behaving as expected. It's throwing an exception in the JSON processing, and when it throws a JSON exception it stops, because the JSON isn't formatted correctly. And it's that kind of automatic assumption that every message you're going to send is valid JSON. It's great in a production system where everything's defined and everything's working. But when you're trying to develop the damn thing, it's failing left, right and centre, and all you've got is an empty circle and no logs. I don't like JSON. Sorry, did I say that out loud? Anyway, any questions? Sorry. We have tested this to destruction on the Summit demo, because we had a Kafka backing on the brokers we were using and we wanted to take advantage, as you said, of the scalability of Kafka itself. What's nice about it is that the broker is basically a transparent layer over the top; underneath, you can do auto-scaling of the Kafka stuff. The broker is a single point, but what's nice is that the broker doesn't really exist: it's just internal wiring from service to service to service. With the scaling of Kafka, you can have multiple Kafka backends with multiple... what's that word again... partitions. See, partitions. Wow, it stuck. I've learned something today. We had multiple pods based on partitions, and we had them on different nodes, so we could scale them up and down and all these kinds of things. No impact on the broker whatsoever. The broker was still effectively a single endpoint, so you can take advantage of that. So it's massively scalable at the broker point and at the application point. I tend to use ephemeral ones because I don't understand Kafka. If you talk to any sane person who's like 20 years younger than me, they'd love Kafka and all those kinds of things. Does that make sense? Cool. There was a question there. That's a good point.
So it was basically: what do we do in terms of auto-scaling of the application itself, and how do we make it load fast? Is that fair enough? Yeah. So one of the nice things about this, one of the things that made me laugh about the Summit demo we did, is that even though you can scale down to zero, you can also set the minimum number of replicas you want to have. What that means is, when we first ran this up and were putting a million events through it, we found that the time to spin up the individual copies was having a bit of a hit on the processing, which was backlogging the Kafka stuff. So when we defined the functions, we said: when this function is deployed, automatically have at least 100 replicas. So you don't have to take advantage of the Knative scaling down to zero if you don't want to. When I was running these as part of the Summit demo, these functions were firing up with 100 replicas, and then as load came in on top of that, it was loading more and more. Now, in terms of the speed of creating them, there are some tricks, because the lag you see when you first deploy a service and first call it, and you do get a lag, comes down to where the pod is. Say you're running this Knative service and it fires up one replica, and you've got four worker nodes. Kubernetes is Kubernetes: it decides, I'm going to put it on this node over here, and it dumps it onto there. If it auto-scales, it might put it onto another node where the image is not resident, so that image has to be brought in from the integrated registry to the node. What you're seeing with the lag is actually the image being distributed to the additional nodes. So one thing we did to make this perform in a production way when we were doing the demo was to front-load it as part of the test.
So we'd install the Knative service and then thrash some event messages through it, and we made sure that at least one copy of every replica landed on each of the nodes, so every single node had the image preloaded. When the image is preloaded on the node, the startup is instantaneous. That's the nice thing about it: it's cold, offline, but it's still an active, resident image on the nodes, so it doesn't have to go back to the central image registry to pull it. So if you want it performant, with no lag on the Knative services themselves, when you do the first installation set the initial replicas to something like the number of nodes. If you've got, say, six nodes, set it to six. When it first installs, it will drop one on every single node, shut them down, and then any call on any of those nodes will get an instantaneous restart. Does that make sense? Any other questions? I can't see because of the light; there's a question right at the back, I think. No. The question was: is there any support for sidecar containers? The only reason I can answer that so quickly is that this does not play well with Istio. And of course Istio has the face-huggers, those containers that sit on the side of all of your containers and abstract things away. It's one of the things we're currently looking at: adding sidecars and adding init containers to it. The reason we haven't gone down that route at the moment is that the way network traffic gets into these applications is already slightly verbose, because all the traffic has to come through the Knative eventing ingress point before it goes anywhere else. If you've got sidecars on there, you've got the additional time of spinning up the sidecar container.
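Setting a replica floor like this is done with Knative's autoscaling annotations on the Service revision template. A sketch, with the service name assumed:

```yaml
# Sketch: keep a minimum number of warm replicas instead of scaling to zero.
# "quarkus-fn" is a placeholder name; "6" matches the six-node example above
# (the Summit demo used 100).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quarkus-fn
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "6"    # never fewer than 6 replicas
        autoscaling.knative.dev/max-scale: "100"  # optional upper bound
```

With min-scale above zero you trade some idle resource cost for zero cold-start lag, which is exactly the trade-off the speaker describes for the high-throughput demo.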
I do know that they're currently looking, for the next release, at allowing Istio. In fact, they might already allow Istio or Red Hat Service Mesh with it. If they've done that, then sidecars will be available. Does that make sense? Yes. Cool. Any other questions? I think I've got two minutes left. You can put in any ingress controller, as long as that ingress controller pushes the traffic into the Knative eventing and serving layer. Basically, you can treat the Knative eventing entry point like any application within OpenShift, and for any application within OpenShift you can add your own custom ingress controller; it's one of the features. That custom ingress controller just pushes the traffic into the SDN at the correct point, if that makes sense. So you can do that. You're going to get the overhead of having to maintain an external custom ingress controller, but it's not something you can't do, which is a very bad way of saying yes. I got to the end of that sentence and thought, why the hell did I say it that way? Does that make sense? No? This is very similar to the question that was asked earlier. Once you've instantiated... oh, it's flashing "time's up" at me, what does that mean?... once you've instantiated that service on each of the worker nodes, the restart speed is very, very fast. Very fast indeed. What we normally find, being brutally honest about it, is that it's more to do with the way the application works within the pod than with the spinning up of the image. And this takes me back to it very quickly, because I'm getting flashed at: if you take a Spring Boot app and run it up in Knative services natively, it'll take 30 to 40 seconds to restart, because the framework itself takes 30 to 40 seconds to start.
If you rebuild that with Quarkus it takes milliseconds, and it's all to do with the class loading. Because, if you noticed, when I was building a Knative service with Quarkus, one of the options was to build it native. If I build it native, it pre-compiles all the classes into a binary, which means you don't get the class-loading overhead, which is why your 40 seconds of startup time comes down to milliseconds. So the two points are: look at the speed of startup of your application itself and write it in a more cloud-native way, or use Quarkus to do it. So that's where I'm at. I assume I'm out of time because it's flashing and telling me off. Does that mean I'm out of time? Yes. Thank you.