Video details

Decoupling your Spring Boot microservices using workflow automation by Niall Deehan


When you first get started with using Spring Boot microservices in a distributed architecture, things are remarkably easy. But it won't take long for a production system to start showing some signs of strain. Responses get slower, availability decreases, and you'll suddenly start to realize that services aren't always as decoupled as you thought! This is often because of too much peer-to-peer communication and missing possibilities to store state, which leads to a resistance to using asynchronous or reactive concepts.
I will demonstrate how a workflow engine or state machine can easily improve your architecture and decrease coupling. I will have three Spring Boot microservices ready, refactor them live on stage and add Camunda's open source workflow engine. You will not only gain a better understanding of the architecture, but you can also follow along at the code level.


So hello everybody. Welcome to this fun little chat. I'm going to be talking about decoupling Spring Boot microservices, and the technology I'll be using is going to be workflow automation. So first of all, I'll introduce myself. I am Niall Deehan, developer advocate with Camunda. I am this one right here. And this is the greatest picture ever taken of me, and I never don't show it if I have an opportunity. And that, of course, is Hercules Mulligan. So we will be talking through this lovely stuff. And first let's ground ourselves a little bit. Let's discuss the first ideas that we're going to be solving problems about, which is: let's talk about a basic distributed system. We have, let's say, a Spring Boot application called Service One. It's lovely and self-contained and very happy in its little universe. Of course, in another universe somewhere close by, Service Two, also a Spring Boot application, also exists. And of course, distributed systems come about when Service One needs something from Service Two. In most cases they will do something like make some kind of request, usually via a REST API or whatever is available, and then they'll get a response back, and then they'll continue doing their thing. This is usually synchronous, because it really solves a huge amount of networking problems if it's synchronous. So that call runs, we've got a wait state in Service One, then when it returns, we're good to go. So rollbacks are possible. This is something we kind of know about, and this is the easiest way to start building microservices and distributed systems: by making these kinds of calls. And then, of course, things get slightly complicated, because we throw in a new friend. Once you start building microservice applications and distributed systems, you quite often end up having a scenario where Service One needs something from Service Two.
So it sends its request and says, hey, can I have that thing? And meanwhile, perhaps unbeknownst to Service One, Service Two is also making a call to Service Three, because it needs something else to complete that. And usually this is a single transaction that runs through all of that, which means that if anything fails, have no fear, it will all roll back. But if everything is successful, we'll get a response back from Service Three, subsequently from Service Two, and then Service One has what it needs. This is kind of what I hope we already know about communication. There are loads of ways of changing the communication; I'm going to talk about one specifically regarding state and transactions. So let's first talk about coffee, because that's mainly the point of the talk. I live in Berlin, and there there is a wonderful little coffee shop where I wander in, and I, of course, will always ask for coffee, perhaps some cake. The cashier then turns to the order sorter and says, hey, can you get cake and coffee? Now she can get some cake, but she then needs to turn and look at the barista, who then needs to actually make the coffee. So right now I'm looking at the person on the till you're making an order from, they're looking at someone who's sorting cake, who's subsequently looking at a barista who is looking at coffee. That's how the system works. Then I will get my coffee. But first the barista needs to say, right, I'm ready. The barista then gives the coffee to the order sorter, who subsequently gives it to the cashier, and eventually: hey, I've got a coffee. Fantastic. So this is usually how a lot of distributed systems tend to start off, microservices as well, in terms of communication among each other. And that whole time, of course, we are blocking. We'll talk about what that means in a practical sense just now, but let's talk about it in a more coding sense.
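To make the blocking nature of that chain concrete, here is a minimal plain-Java sketch. All names are hypothetical and real services would of course talk over HTTP rather than method calls, but the shape is the same: Service One blocks until Service Two returns, Service Two blocks on Service Three, and a failure anywhere unwinds the whole chain so the caller can roll back.

```java
// Sketch (hypothetical names) of the synchronous call chain described above.
public class SyncChainSketch {

    static String serviceThree() {
        return "beans ground";
    }

    static String serviceTwo(boolean serviceThreeUp) {
        // Service Two cannot answer without Service Three: it blocks here.
        if (!serviceThreeUp) {
            throw new IllegalStateException("Service Three is down");
        }
        return serviceThree() + ", coffee brewed";
    }

    static String serviceOne(boolean serviceThreeUp) {
        try {
            // Service One is blocked for the full duration of the nested calls.
            return "order complete: " + serviceTwo(serviceThreeUp);
        } catch (IllegalStateException e) {
            // The whole request fails; nothing partial is kept.
            return "rolled back: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(serviceOne(true));
        System.out.println(serviceOne(false));
    }
}
```

The point of the sketch is the failure case: because everything happens inside one blocking call stack, a dead Service Three takes the whole request down with it.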
I've created three applications that I'm going to show you just now, because most of this is just me trying to not break a demo, and the rest is slides. So I've built the cashier, a Spring Boot application, which sends an order to the order sorter. That service can get cake if cake was ordered, but it can also tell the barista to go make coffee. Subsequently, when the coffee is made, we come back and the cashier can tell me what's going on. Okay, so let's see that working. So the applications are all here. I'm going to start them, and they should all work perfectly, because that's how demos work. And there we go. They're all Spring Boot applications, and there's nothing too special about them. They have little REST endpoints. The only thing that's special is that one of them does have a front end, which is a little bit basic because I don't know how to build front ends, but I do know how to do back-end stuff. So this is our front end. This is our cashier, and this is how I can ask the cashier: please, can you give me some stuff? So first of all, I would like some coffee. Great stuff. And my name is Niall. There we go. So as you would expect, this is what's happening: cashier looking at order sorter, order sorter looking at barista. And if we head back to our service, you can see there that the little thing at the bottom shows, okay, we're making the coffee. Great. So if I return, we've got our order. This is very common. This is usually how these kinds of systems work. When you see that loading thing, that's usually what's happening: there's a synchronous request. These are super common. If you think about things like booking a flight, maybe making a payment, you'll always see that little thing saying, hey, we're in the middle of making a request. Calm down. You'll be fine. And eventually it will come back, and there are some problems with that. So let's imagine I can also get some cake actually, if I want, maybe I'll send that. So this is all great.
I now want some coffee and cake. So we now have a scenario where coffee is being made, the order sorter, as we can see here, has got the cake, and eventually I get my coffee. The barista is very slow. There we go. Coffee and cake are ready. Marvelous. So now let's imagine that, of course, we have a narcoleptic barista. They've gone to sleep. Very sad. I just killed the service. So now if I make this request, I want coffee and cake. Great stuff. Things are happening. We're waiting for stuff to happen, and that's going to happen: we are going to have an error, because we went to the order sorter, who has the cake; they subsequently went for the coffee, and the barista said, I'm not here, and it threw an error all the way back to the front end. And we're kind of stuck, because that's how synchronous communication works. Okay, so that's where we're at right now. And I'm just going to wake up the barista. Marvelous. Okay. So I'll return to a couple of my slides now. There are all sorts of questions about this kind of architecture, like we just showed. What happens if the barista takes a break? What if the order is taking too long? What if an order is made and by the time we get to that final barista they say, we actually don't do mochaccinos, stuff that maybe the cashier isn't aware of? There are loads of little problems that might exist. And for those of you who may not have read it, there's a really great Gregor Hohpe blog post from a while ago: Starbucks does not do two-phase commit. We don't need these distributed transactions when we can hold state at regular intervals. Now what do I mean by that? In Starbucks, unlike my little Berlin coffee place, I will talk to the cashier, and the cashier will immediately say to the order sorter, this person wants coffee and cake. Now in the previous example, the cashier would turn around and wait, but instead the order sorter simply says, no problem, I got that. The cashier then takes my money, and then I stand and wait with neither coffee nor money for a little while.
And I'm pretty happy with that. Meanwhile, I trust that this person will eventually tell the barista to go make the coffee, and then we'll get that back. And they even use a correlation key, because they take your name down. They keep that correlation key, and then by the time the asynchronous coffee is ready, they can correlate it back to you. So let's make some changes to our current system, some really small changes, to try and step towards implementing this. Oh, yes, one more slide. There you go. So the change I'm going to make is really simple. I'm going to make a change to just one service. The cashier and the barista are going to stay the same, but I want to add a workflow engine to this particular service, because I want to see what's going on. I want to be able to have a little more visualization, some history and versioning. That's the first thing. What I'm not doing yet is changing how the transactions work. I'm just adding the engine, and let me show you how that actually looks using the awesome power of code. So if you take a look at the order sorter, the original one, it is very straightforward. We have an order REST call. We then do some really nasty parsing to find out what got ordered. And if the person wants cake, then we sort the cake, and if they want coffee, then we do that. Very simple. It's not too complicated. So the change we're making in the new version, there we go, is that instead of doing the business logic of deciding what to do in here, the order sorter is just going to do one thing. There it is: run this method, which says startProcessInstanceByKey. It's saying: start this order process, here are the variables you need, and so on. So what's all that about? This is actually the workflow engine. I've added it just as a dependency in the pom file. It's right here. And this is just a way in which we can actually visualize what's supposed to happen.
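For reference, adding the engine "just as a dependency in the pom file" looks roughly like this. The coordinates below match Camunda's Spring Boot starter, but the version shown is only a placeholder for whatever release you are actually on:

```xml
<!-- Camunda workflow engine as a Spring Boot starter dependency;
     the version number here is a placeholder, not a recommendation -->
<dependency>
  <groupId>org.camunda.bpm.springboot</groupId>
  <artifactId>camunda-bpm-spring-boot-starter</artifactId>
  <version>7.15.0</version>
</dependency>
```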
The other thing I added was an actual process. So let's take a look at that. This is what I added. This is BPMN, Business Process Model and Notation. It is an open standard for describing state progression, and it's also executable. So this diagram here is just XML that's describing how the state transitions. And you can see here that basically what we're saying is: when we start this instance, if cake was ordered, we'll get the cake; if coffee was ordered, we'll get coffee; and we can now do this in parallel. Actually, that's the one small change from the previous version. So let's see exactly what happens if we do that. Let's start it up. The first little change is we'll see Camunda coming up there, which is the name of the platform we're using. And let's go back to the front end. It's not there. What was it? Oh, it's Opera, isn't it? There we go. So there you go, there's the front end. Again, this hasn't changed. I have not changed the front end or the barista. I've just changed the middle bit. So if I order coffee and cake, basically the same thing happens, because it's still completely synchronous. We've added the workflow engine, but we're not yet adding any state change. So instead, what we actually get out of it, if it ever finishes... what is that barista doing? Okay, it's just taking its time. There we go. We got a coffee and cake. So what does that actually give us? The first thing it lets us do is actually look into the microservice. I showed you the process already, and you can see it right here. This is the process that's now run by the engine, and I can actually take a look at the history of it, and I'm just looking into that Spring Boot service. So I can see what's happened so far. I can see the instance that was there. So what I have here is: I haven't changed the fundamentals of how this works.
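The diagram described above is, underneath, XML along these lines. This is an abbreviated, hand-written sketch rather than the actual file from the demo: namespace boilerplate and diagram coordinates are omitted, and all ids, condition expressions and bean names are illustrative:

```xml
<process id="order-process" isExecutable="true">
  <startEvent id="orderReceived"/>
  <!-- inclusive gateway: take the cake branch, the coffee branch, or both, in parallel -->
  <inclusiveGateway id="whatWasOrdered"/>
  <serviceTask id="sortCake" name="Sort the cake"
               camunda:delegateExpression="${sortCakeService}"/>
  <serviceTask id="getCoffee" name="Get coffee"
               camunda:delegateExpression="${getCoffeeService}"/>
  <sequenceFlow id="toCake" sourceRef="whatWasOrdered" targetRef="sortCake">
    <conditionExpression>${cakeOrdered}</conditionExpression>
  </sequenceFlow>
  <sequenceFlow id="toCoffee" sourceRef="whatWasOrdered" targetRef="getCoffee">
    <conditionExpression>${coffeeOrdered}</conditionExpression>
  </sequenceFlow>
  <endEvent id="orderDone"/>
</process>
```

The point is that the picture and the executable definition are the same artifact: the XML is what the engine runs, and the diagram is just a rendering of it.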
I've just given myself a nice visual to see what's going on. I'm able to do stuff in parallel in a very readable way. And I can also then check some variables and things to see, for instance, what was ordered and that sort of thing. So this is quite nice. This is a nice little addition. It doesn't solve any of the problems from before. As I said, if the barista falls asleep, we still have the same issue. So let's talk about that. Super. So let's talk about these again. So, the barista taking a break: in a synchronous system there's no real way to deal with that, because the system is down and all you can basically do is retry. But the problem is that the end user, whoever is sending in the request, is the one who has to do the retry. I enter that form and the data that I want and I send it, and if everything goes correctly, fantastic. But if it doesn't, it's up to the user, the person or the system making the initial request, to do the retry, which isn't quite right. Because we already have the request, it would be so much nicer to be able to just hold that request and say, we can deal with this when things are working again. If the order is taking a while, we'll discuss that as well, because of course synchronous calls mean you need to be a little bit worried about how long the call is taking, because the transaction could time out, and then even if everything is going correctly, it could still ruin things. I really don't like mochaccinos, but we'll talk about that as well. So I want to change this same thing one more time, and this time I want it to hold state, which means when the cashier sends an order, I want it to return back immediately, just like in Starbucks, and say, hey, I've got that order. Fantastic. You can just go and sit and wait there. I'll get back to you. And then meanwhile, it can handle the rest of the thing. So that's what I'm going to do next.
And what that's going to give us is, immediately, a huge amount of problems that we'll talk about. But it also gives us some benefits, which are things like retries, timeouts and versioning and a whole bunch of other stuff that we didn't have before. So let's add that now. So what's going to change? The first thing that's going to change is from this process to this process. All I'm changing is the process, actually. I'm adding a transaction boundary right here. So I'm saying: at this point, commit the transaction and return, before I try to do this. And then I have these two things here, which will then send back a message saying it's done. Okay. So let's see how that looks. The barista can stay awake, and let's spin up the new one. Okay. So again, the Camunda platform is up and running. Marvelous. And I can still log into the front end to see what's happening once it loads. Marvelous. And I can go in here and just take a look at my processes. I only have the one, so I can go in there and take a look at what's happening. This is the current process that we have. Okay, super. So now the front end, as I said, doesn't really change. So Niall wants some coffee and cake. It's sent, and then immediately we get a response saying, okay, now you can make more calls. So then maybe Nayla would like some coffee. Good stuff. And Pond, definitely coffee and cake. So I'm sending these requests and we're no longer blocking. We can now just leave our requests with their state right here, and we can actually see what's happened so far. And our front end should then have returned: we were working on the order, and we should get a response pretty soon, hopefully. Marvelous. So this has now run through, and it's able to actually hold that state, make the call, and then come back. Interesting turn of events: it is not responding. So let's try that one more time. It should usually tell me that my order is filled. Why would it not? So, working on the coffee. Great stuff.
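In BPMN terms, a transaction boundary like the one just described can be expressed by marking a task asynchronous, which tells the engine to commit and persist state at that point and continue in a background job, so the caller gets its response immediately. A sketch, with illustrative ids and bean names:

```xml
<!-- camunda:asyncBefore commits the transaction and returns to the caller
     before this task runs; the engine picks the work up again in its own job -->
<serviceTask id="getCoffee" name="Get coffee"
             camunda:asyncBefore="true"
             camunda:delegateExpression="${getCoffeeService}"/>
```

Everything before this attribute happens in the caller's request thread; everything after it happens on the engine's job executor, with the state safely stored in between.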
We can see, and I want to show you, that it is waiting there to make the coffee. Good. There, coffee is ready. Fantastic. So coffee is now ready. Great stuff. And we can now get that response. So now we have the same problem that we talked about: the barista is asleep. So let's put him to sleep. I used to say kill the barista, but I think that went down really badly, so I moved away from that. No one wants a dead barista, plus the whole raising him from the dead thing was also not good. So sleep is perfectly reasonable. So I've now told the barista to go to sleep, by killing the service, and now I'm just going to send some requests in. And the one quite interesting thing is, I get cake back, which is kind of cool, right? Because in both of these cases I ordered coffee and cake. One of the first benefits of this is that I can actually run two different things in parallel and respond back when they're ready, even though the full request is not working out. Because, you see, there we have two little errors. We can actually send back what we can. We're able to do as much as possible, and in this case, we can still get the cake even though the coffee is broken. So let's take a look. When these kinds of things happen, the process engine can also hold the error message, so it can say, okay, here's the actual problem. There's obviously a 404 somewhere. It's all very bad. And now we are in a position where we know there's a problem, and so we can wake up our barista gently. Great. And if the barista is awake, it means any new requests for coffee and cake will now run through, and we should have a scenario where Nayla gets her cake. Marvelous. And then in a couple of seconds, she should also get her coffee, hopefully. Yeah, there we go. Coffee.
Now, of course, this is causing problems, because there are two people in the queue who have already ordered some coffee, and they are being horribly ignored, because Nayla has shown up and decided to just skip the queue and get the coffee. That's, of course, because this error state needs to be resolved explicitly, meaning that I need to actually go and say, okay, these incidents, I would quite like it if we could just order the retry, which is a bit of a hassle. So we'll have to fix that. So I'm going to add one more retry. Great. And then execute it. So now I'm telling them to try that request again. We have all the information we need to make the request, so it's just a matter of catching it and making it. And then pretty soon we'll see the barista is now getting to work on that coffee, and then we should see it's done. Great. So then, great stuff, Pond and Niall get their coffee. Okay. So far, we're in pretty good shape. We have this nice ability to deal with errors in a centralized way. We've kind of broken up the synchronous communication. But we can also do another really cool thing, which is: if our barista is actually asleep frequently, we could tell our barista, just maybe take a break. So what we can do, there's my get coffee task, is actually have scheduled downtime. I can just suspend this one process, because, as you saw, one of the problems was I had all these errors that I had to then retry. That was kind of annoying. So one of the ways we can fix that, if this lovely chap is asleep, is that we can send in our requests, and at this point we know that, let's say, Pond can just get coffee. I think she's had enough cake. We know that the barista is asleep, and what we've done is we've decided to just hold all the state here, like scheduled downtime. We have four requests that are pending. It's not that big a deal if we just hold them there; it doesn't need to throw an error.
And instead we can say, okay, great, let's actually wake up our little barista here. And once the barista is awake, I can then say, great, let's activate these suspended jobs. You'll also notice that I still got cake back again. We can do things in parallel; if one service is down, the whole process doesn't have to suffer. So if I continue that job... I can also schedule it for a time in the future and activate it then. So when I activate those again, our barista should eventually get to making that coffee. So it's a very nice way of again dealing with these kinds of errors. It's not quite finished yet, though. It's okay, but I still think there's a lot of potential for improvement. Let's take a look at the next model. So this thing here is very, very basic, and actually it can be coded in a pretty easy way without the requirement of adding a workflow diagram. Sure, you get some nice diagrams and stuff out of this, but you could actually code this. But there's stuff you can add here incredibly easily that's also quite hard to actually code. And the next version is going to add this very simple little thing at the bottom. This little thing here is a timer event, a global timer event. And it's basically saying: if this whole process takes too long, we want to trigger an event to tell the cashier, hey, don't worry, we're working on your order. Calm down, stop shouting, whatever, depending on how the people are acting. And this is a really nice way of being able to scope an entire process across distributed systems and still be able to hold this sort of global timer. One of the things that you might be interested in is how these tasks are working. If you look at this, the way this works is: this is a service task, and the service task simply implements an expression that resolves to a bean of some kind. So I've written these services as Spring beans, and I've just used the bean name here. That's all I need to do.
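A non-interrupting timer like the one just described can be sketched in BPMN XML roughly as follows. The ids are illustrative, and cancelActivity="false" is what makes it a warning rather than a cancellation:

```xml
<!-- fires once the order has been running for 30 seconds, without
     cancelling anything, so we can message the cashier "still working" -->
<boundaryEvent id="takingTooLong" attachedToRef="orderSubProcess"
               cancelActivity="false">
  <timerEventDefinition>
    <timeDuration>PT30S</timeDuration>
  </timerEventDefinition>
</boundaryEvent>
```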
There's not too much to that. So you can just build your services. If I take a look at project number three, you can see that I just have all the required services here. I have the sort-cake one, I have the get-coffee one, and they're sort of independent. They're sitting there, and the nice thing is this one only deals with exactly sorting the cake. It doesn't need to worry about making further calls. It doesn't care what happened before it or what happens after it. All of that code is not necessary, because instead we just need to say: there's my code; in what order shall we do this? How will it be orchestrated? Well, we'll just build a model and attach these, and then the system will step through it and understand the symbols. It's actually a really nice way of being able to call code, orchestrate it, and also separate responsibilities. It would take a lot of time to build this yourself. Anyway, now I want to show you this timer thing here: 30 seconds. Okay, so let's head back, kill this and run number three. Okay, so again, it's starting up. And again, the difference here is pretty straightforward. It's almost identical, except that I've added this timer. And again, this can be really powerful if you think of the front-end implications. So let's head back here and take a look at our front end. Okay, so here's our process. Great. And so now let's kick off some. Well, first actually, let's straight up murder the barista. Okay, dead now. Great. So now I want this to take a little while. So again, Niall wants more coffee and cake. Norman hasn't had anything yet, so let's give him something. Pond again, Pond is ravenous. Here we go. And so all of these are going through. And again, the barista is down. There we go. Cake is ready. Cake is ready. That's great. And let's take a look at what's happening in the diagram.
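As an aside on those single-responsibility beans: stripped of the Spring and Camunda specifics, the shape of a worker like the sort-cake service is roughly this. It is a plain-Java sketch with hypothetical names so it runs standalone; in the real project it would presumably be a Spring @Component whose bean name matches the expression in the model.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a single-responsibility worker like the sort-cake bean.
public class SortCakeService {

    // The worker only knows its own job: it reads what it needs from the
    // process variables and writes its result back. No calls to other
    // services, no knowledge of what ran before or what runs next.
    public Map<String, Object> execute(Map<String, Object> variables) {
        Map<String, Object> result = new HashMap<>(variables);
        result.put("cakeStatus", "sorted for " + variables.get("customerName"));
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> vars = new HashMap<>();
        vars.put("customerName", "Pond");
        System.out.println(new SortCakeService().execute(vars).get("cakeStatus"));
    }
}
```

The orchestration, i.e. "in what order shall we do this", lives entirely in the model, which is why this class needs no routing code at all.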
So in the history, we can see that we started this three times, we then got cake three times and sent it back, and that's all ended from a runtime perspective. We have three instances waiting here to do their work, and eventually, after about 30 seconds, we're going to trigger this timer. Because as soon as this process takes longer than 30 seconds, we will send the cashier back a warning saying, hey... there it is. It says here: your order is taking a long time. And again, this is quite nice because it's running in parallel to everything else. Everything here is both running in parallel and also really in control of what's going on. You can really see what's happening, and it's really easy to do, whereas having to code this yourself would actually be quite complicated. So let's take a look at our history now, and we can see this has happened three times, this little thing. There's one more really fun thing I did, which you might not have noticed: this isn't causing an error. Now why did I do that? Because if you have been using microservices and distributed systems, there is one key thing that is always, always true: services are not reliable. They can go down, maybe for a very short time. And if they go down for a very short time, you shouldn't have to contact your administrator to fix the problem. You should be able to say: I expect this service is a bit wonky, and it might die for maybe a minute or two at a time, but that doesn't mean you just throw a bunch of errors whenever you get your first problem. What we can do, as part of a model like this, is what I did here: I added a retry cycle. So this is ISO 8601, everyone's favorite. Yeah, for those watching at home, loads of smiles from the audience right now. So this says R4: repeat four times; PT1M: period of time, one minute. Very simple. So it says: we're going to try; if we fail, we'll wait a minute and then try again. Okay.
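That R4/PT1M string is just an ISO 8601 repeating interval, and you can pick it apart with nothing but the standard library. A small sketch of how the two halves decompose:

```java
import java.time.Duration;

// Sketch: decomposing an ISO 8601 repeating interval like "R4/PT1M",
// the retry-cycle format used in the model above.
public class RetryCycle {

    static int repetitions(String cycle) {
        // Everything between 'R' and '/' is the repeat count.
        return Integer.parseInt(cycle.substring(1, cycle.indexOf('/')));
    }

    static Duration interval(String cycle) {
        // Everything after '/' is a standard ISO 8601 duration.
        return Duration.parse(cycle.substring(cycle.indexOf('/') + 1));
    }

    public static void main(String[] args) {
        String cycle = "R4/PT1M";
        System.out.println(repetitions(cycle) + " retries, "
                + interval(cycle).getSeconds() + "s apart");
    }
}
```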
So it means that when this errors, it doesn't require you to go in, investigate the problem, and then do that manual restart you saw me do. Instead, you can rely on the engine itself to deal with predictable, recurring problems like this. This is also hard-coded, but you can of course tinker with this to make it dynamic. Doing things like exponential backoff is also possible and probably advisable. I think the coolest thing is randomized backoff, or something like that, where your services don't all come back at the same time. All of that is possible and pretty easy to implement. So if we take a look at our history now, even though our service was asleep for a while, it didn't create the same sort of problems as the previous iteration did, which is really nice. So we are at the point now where we actually have a pretty nice change. And again, I want to reiterate, so I'll go back to the slides, what I've actually done so far. There we go. We only really changed that middle service. We made it hold state, which meant we had to re-implement a way of telling the cashier stuff. But what we got out of holding state was a retry mechanism, which I showed; we have an ability to time out the whole instance, and you can also time it out to cancel it, but I showed a more dynamic way of timing out and sending stuff back. We also have versioning, which I haven't really shown, but you can have multiple versions of the same process running in parallel. And I want to show you next: error handling in a more predictable way. So far we've shown error handling going from a stack trace being thrown to a predictable error. I want to show you next business errors. These are errors that we expect may happen, and in this case it will be: somebody has ordered soy milk in their coffee, and we don't have any left. This is a predictable problem.
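Exponential backoff with randomized jitter, as mentioned above, is easy to sketch. This standalone example uses hypothetical numbers and a fixed seed so it is repeatable: the wait doubles per attempt, with up to a second of random offset so restarted services do not all retry in lockstep.

```java
import java.util.Random;

// Sketch of exponential backoff with randomized jitter.
public class Backoff {

    static long delayMillis(int attempt, long baseMillis, Random jitter) {
        long exponential = baseMillis * (1L << attempt); // base * 2^attempt
        return exponential + jitter.nextInt(1000);       // up to 1s of jitter
    }

    public static void main(String[] args) {
        Random jitter = new Random(42); // fixed seed so the sketch is repeatable
        for (int attempt = 0; attempt < 4; attempt++) {
            System.out.println("attempt " + attempt + ": wait ~"
                    + delayMillis(attempt, 1000, jitter) + " ms");
        }
    }
}
```

The jitter is the part that prevents the thundering-herd effect the talk warns about: without it, every suspended job would hammer the freshly woken barista at the same instant.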
We shouldn't be throwing an exception about that; we have to deal with it from a business context. So let's take a look at the next model. I've just moved the timer to the side, and I've added this lovely thing here. This is an error boundary event. Now, when these microservices are running, or you're calling other services, again, the goal is that they shouldn't need to know what happens after or before them. But it's also important for the orchestrator to know that there are multiple end states for a microservice. It could end successfully, fantastic. It could end with the service being down, which is terrible. But a more interesting way the microservice can react is: I'm alive and well, I got that order you sent me, but actually I've got a problem with it, which means you have to deal with it from now on. And those kinds of things can be dealt with and continued in the process. Business errors like this happen, like, for instance, let's say there's incomplete data or wrong data in the order you're sending to the service. You don't want to have to throw everything out because of that. Perhaps you could have human interaction to fix it, or perhaps you could retry it after a certain system has run over it or something. But you can always automate these kinds of errors, especially when they're predictable. So in this case, I'm going to, once again and probably for the final time, kill my order sorter and have version four running. And once again, I'm going to open up Opera, everyone's favorite web browser; for those at home, everyone in the audience agreed. And we're going to refresh this. So here is our lovely thing. So let's kick things off. Let's get a bunch of coffee. So coffee and cake again; Pond's waited long enough. There we go. Maybe Niall also wants some. Norman maybe only wants cake. And then we'll have a quick flurry of... I'm running out of names. I know, let's go with John. There we go. Let's go with Mary.
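The error boundary event described here looks roughly like this in the XML (ids and the error reference are illustrative): if the coffee task throws a matching BPMN error, say out-of-soy-milk, the token leaves the task along this event instead of ending in an incident, and the process can take a business-level path such as telling the customer.

```xml
<boundaryEvent id="noSoyMilk" attachedToRef="getCoffee">
  <!-- catches a business error raised by the coffee service and routes the
       process down an alternative path instead of raising an incident -->
  <errorEventDefinition errorRef="soyMilkUnavailable"/>
</boundaryEvent>
```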
I'll go through the entire Bible. Okay, there we go. It's always good. Oh, my goodness. We have this awful thing. We have this little thing saying it reached our microservice, it ran, and we were able to pass useful information back saying, hey, we no longer have soy milk. Mary doesn't get her soy milk. That's a real shame. So this is actually a really nice kind of error, because it's one that you can predict can happen and that doesn't need too much intervention. And again, we can take a look in Cockpit and see exactly what happened in there, and you can see all sorts of fun stuff. We can see, for instance, the number of instances that were started, so four; three of them were unsuccessful, and three of them went down here. Okay. And that's a nice sort of overview for the part of this that, let's say, a DevOps person would deal with, or something like that. You can also do other kinds of weird and fun stuff, which I'll quickly show you, because someone who comes in here to maintain it can do all sorts of odd things. Like, for instance, let's kill the barista for a second and let's just send in another order from Mary and Joe. I don't know, my knowledge of the names of the wise men is not good. So we now have one instance waiting here, and weird things could happen. I could be phoned up and told, hey, actually this person has just run out of the shop, so we don't want to make this right now. This is an edge case; almost everyone will be fine. We have not modeled the situation where someone runs out of the shop. And the nice thing is you can deal with that in a really nice way. You can say, you know what, okay, we can just drag this token somewhere else. We can pop it up here to add stock. So we can actually manipulate the state from here in real time. So if I just do that... why are you doing that? One second. Oh, it already canceled. Did I kill the barista or not? Apparently it still worked. So what I'll do instead is just say start one here, let's go. And I can restart them.
Actually, that's pretty interesting. So let's take a finished instance that needs to run again. I would like this to run again. So, where is my runtime? Yes, I know it's finished. Why are you being so mean? Okay, so click on restart, and then I can say: okay, let's just restart a bunch of stuff here. Start before, kick it off, pick the one we want. Let's say Mary's back, she wants more coffee, and then restart that instance. Because we have all the data stored, we know what happened before and what happened after, so we can do this kind of thing really, really easily. These are the kinds of manipulations you don't expect to do very frequently, but you still can. Here we go, Mary got some cake. You can still do this in those edge cases where you need to actually manipulate your process. Now, the final thing I want to talk about relates to that data. As we run through this process, we have a nice visualization of what's happening within our microservices, which means we have a huge amount of data to help us improve them. And for that, we have this nice little thing here, which I will... let's see if this works, because I haven't tested this part. So this is Optimize. This is a front end that is not intended for the developer and not intended for the DevOps person. It's intended for the person who wants to make this process as efficient as possible. So we can do things like create a report. We select our process. There it is. Select all, and let's create a basic report. Okay, so I would like to see, for instance, I don't know, this sort of thing: where have most of the process instances gone? Okay, we see that we have 11 here and 18 there. That's pretty good. And we only have eight here. That's kind of interesting; we didn't do this very often. We can also then ask: what about duration? Where do we spend most of our time? This is really important information.
We could actually take all this information and say we need to get a new barista. This is fundamental information here: it says that this takes on average 10 seconds, while this one takes three. And then you don't fire the barista immediately. You can instead say: you know what, I'll give you some target values. So I'll say, right now you're at about 10 seconds, but I'd like you to try for an average of maybe 8 seconds, and apply that. So for now it's still over the target. But the really nice thing is that this is a live report, which means that as all this data comes in, the report keeps itself up to date. And there's a whole bunch of different reports we can create based on the variables that are passed through. You can also just create a dashboard. Let's say this is a process performance overview; this is a template dashboard. So you basically say: here's my process, build me a whole bunch of reports that might be interesting. Okay, so let's take a look at what got built. We have the total number of instances, the average duration, and here something... oh, min, max and median duration. You can see I took a break here between demos. And various other potentially useful information. This is really handy for designing new processes that are genuinely better for your architecture, in a reasonable way. As well as that, you can design new versions and run them in parallel, because you can run multiple versions at a time. You'd then be able to decide empirically: I've now proven, through the awesome power of maths, that the new process is better than the old one, for whatever reason. And then you can continue with that one. And the key thing you get out of this engine is, of course, visualization. Across all of this demo, we started with just being able to see things better. The BPMN standard, which this is based on, is an open standard for designing this.
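As an aside, a duration report like the one just shown is essentially an aggregation over per-instance durations. A tiny plain-Java sketch of the min/max/average computation behind it (illustrative only, not the Optimize implementation):

```java
import java.util.List;
import java.util.LongSummaryStatistics;

class DurationReport {
    // Computes the min, max and average that a duration report is built from,
    // given per-instance durations in seconds. Illustrative only.
    static LongSummaryStatistics summarize(List<Long> durationsSeconds) {
        return durationsSeconds.stream()
                .mapToLong(Long::longValue)
                .summaryStatistics();
    }
}
```

For example, `summarize(List.of(10L, 3L, 8L))` reports a minimum of 3, a maximum of 10 and an average of 7.0 seconds; a live report just keeps re-running this aggregation as new instances finish.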
And it has a really complete and really cool set of symbols, including being able to really easily implement the Saga pattern and a bunch of other really common microservice requirements. Being able to undo things non-transactionally is really important, and that's part of what changes when you move from a single transaction to something asynchronous: you then need to implement a whole bunch of things, because that move does cause some problems. But the visualization can be brought all the way from development to DevOps to redesign, because that image is the executable thing. It's not a picture sitting in a document somewhere; what you saw drawn there is a real, executable part of the code. If you're interested in building something like what you saw, you can go and generate a really quick little Spring Boot application in a couple of seconds and start it up. It basically just adds the dependencies that are required for Camunda: that's a few settings in the application YAML, and the web apps and the engine as part of the pom file. It's open source, hooray, so that's the modeler and everything else. The only thing that isn't open source is the fancy pictures and things in Optimize. Everything else you can play around with and enjoy. So with the last few minutes, I will say thank you very, very much for the fun and laughter, and I'm happy to take any questions. Thank you. Yes. Oh yeah, sure. So in this case I used an in-memory H2 database, because I was bringing this up and down. But realistically, the engine does need a data source. I actually used two there for this application: I was using Elasticsearch, so I put all the history into Elasticsearch, which was really easy to query; but for the runtime, some relational database is useful. We also support CockroachDB and a few of the more distributed ones as well, but basically all the usual suspects for relational databases: Postgres and Oracle and MySQL and the rest of them.
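For reference, the starter setup mentioned a moment ago boils down to roughly this. A hedged sketch: the artifact is the Camunda Spring Boot webapp starter, the version property is a placeholder, and the YAML values are illustrative.

```xml
<!-- Sketch: Camunda Spring Boot starter (engine + webapps); version is a placeholder -->
<dependency>
  <groupId>org.camunda.bpm.springboot</groupId>
  <artifactId>camunda-bpm-spring-boot-starter-webapp</artifactId>
  <version>${camunda.version}</version>
</dependency>
```

```yaml
# Example application.yaml settings: an admin user for the webapps (illustrative values)
camunda.bpm:
  admin-user:
    id: demo
    password: demo
```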
As I said, the engine does need a relational database like that, but one of the nicer things is that you can connect multiple engines to the same database. So if you wanted to add an engine to a bunch of different services, you can still use the same database and have an overview of your microservices, and they don't need to interfere with each other. Cool. Another question. Spring Boot? Oh no, I just happened to be using Spring Boot because it's quick to start up. In terms of what we support: obviously Spring Boot is the most fun one, Quarkus, and we have a Micronaut thing; if you want to get bigger than that, Tomcat, WildFly. At the core, what we're talking about here is just a Java library. You can just add it and embed it in anything. It's a really small, four-megabyte library that you can deploy anywhere you like. So it's a pretty small, lightweight thing to add; I just quite like using it with Spring Boot. Another thing is that if you wanted to use it independently, without embedding all your business logic beside it, you can also do that. You can use it as a standalone orchestrator for, say, a polyglot system. Super. Any other questions? Yes, you can. We do have an API to build the processes in Java if you want. It's a really nice idea, but I find you lose quite a lot. It's really quick and really easy to build a process in the modeler, because you just need to click a bunch of symbols together and you can just sort of build an orchestrator. You could do exactly the same thing using the Java API, but this way is pretty straightforward: you can build it really easily and really quickly. Another thing you can end up doing is, if you want somebody to build the process and someone else to build the actual microservices, what you have here is a template. So you can say: hey, I want to call this service. What are the current microservices running?
And you can find a list here of templates, select one, and then it's auto-implemented as part of the model. That's quite handy. There are loads of ways of making it really easy, and I do think it starts with visualization. Any more questions? Yes. I would say there's not really a specific role; it's for someone who cares about the efficiency of the process. That can be someone technical who is in charge of making sure the process is as efficient as possible. It could also be a business user, because these processes can span multiple departments. Yeah, exactly. We very much focus on developers for our tooling; everything we've tried to implement here is for developers. But this thing here in particular is a different front end from this one. It's a completely different application. This one is intended for someone who is in charge of keeping things running, and this one here is intended for business people. It's read-only, so they can't break anything. You can give it to any business user and hopefully they won't destroy your system. So they are aimed at very different use cases. Yes. Yeah. So the way Saga would work is, if I had... how quickly can I build a Saga thing? Let's build a Saga thing real quick. So I can open up this and say: book my hotel. Great. And then I can trigger a compensation event. What this says is: it looks at the history and finds out if I booked a hotel, and then I want to... how do I do this with a trackpad? This is complicated. There we go. I will then trigger this thing, and it can even be asynchronous. So this is cancel hotel booking, I guess. There we go. So, yeah, this works on events. If this event is triggered at any point, it looks at the history of what's happened, looks for any successfully completed services, and for whichever of those actually completed successfully, it runs the corresponding undo.
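That compensation behaviour can be sketched in plain Java. This is not the engine's implementation, and all names are invented; it just shows the core idea of the Saga pattern: remember each successfully completed step together with its undo action, and on compensation run the undos in reverse order.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Plain-Java sketch of compensation in the Saga pattern. Invented names.
class Saga {
    private final List<String> history = new ArrayList<>();
    private final Deque<Runnable> undoStack = new ArrayDeque<>();

    // Record a successfully completed step and how to reverse it.
    void completed(String step, Runnable undo) {
        history.add(step);
        undoStack.push(undo);
    }

    List<String> history() {
        return List.copyOf(history);
    }

    // Run the undo actions for all completed steps, newest first.
    void compensate() {
        while (!undoStack.isEmpty()) {
            undoStack.pop().run();
        }
    }
}
```

So if "book hotel" and then "book flight" completed and compensation is triggered, the flight is cancelled before the hotel. In the engine, none of this needs hand-written bookkeeping: the history is already stored, which is also why the undo can safely run later if the compensating service is down.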
So that's the way you can do that quite easily, and that's it done. It doesn't actually require any code to implement; the engine just understands those symbols, has all the data it needs, and these steps can all be asynchronous. So even if this undo service is down, we still hold the state, and when it comes back up, we can still reverse it. It doesn't even need to be transactional, so it's quite safe. And you wouldn't even restart anything, really. Yeah, so actually, I don't even need to restart the application to redeploy something; I just did that because it's easy to demo. So if I wanted to deploy this right now, let's just do that. I'll try to make this as simple as possible. I'll call it doom, because that's how sure I am this will work, and I'll just save it here. So, deploying: the way I did it before was that I had the model already in the Spring Boot application, but you don't need to do that. You can deploy the model directly to a service via the REST API. So if I deploy that... deployment is successful. So now I can just go to Cockpit, and I should be able to see doom there, alive and well and deployed, and I can then make changes to it and redeploy it at runtime. So if I say, oh, actually we also need to add another undo here, this is a user task, and then save that, I can then deploy it. Great. And now I can refresh and we have two versions, one and two. There's the one, and you can then migrate anything from the old version to the new version in a couple of seconds. So it's not complicated to do this cycle of improvement of a process. Yeah, no. There's a REST API available that this front end uses. This front end sits beside the engine, so it uses a local REST API. When I'm deploying, I'm using a public API that is outward facing. Oh, time's up; it's gone red. If anyone has questions, I would love to hear them. I have a booth over there. You can't miss it.
It is a god-awful orange color, so you can come and check that out, and I'm happy to talk to anybody who has any questions. Thank you very much, and I will see you downstairs. Thank you.