Video details

GOTO 2020 • Thinking Asynchronously • Eric Johnson



This presentation was recorded at GOTO Oslo 2020. #GOTOcon #GOTOoslo
Eric Johnson - Highly Entertaining Serverless Fanatic with Brilliant Insights; Senior Developer Advocate at AWS Serverless
ABSTRACT Speed matters, and developers are challenged to reduce latency in their applications at every turn. In traditional synchronous programming patterns, users are asked to monitor the spinning wheel as the application moves from one task to the next until a response can be returned. However, developers can reclaim these precious milliseconds by learning to think asynchronously. Asynchronous patterns challenge developers to evaluate what tasks require the client to wait versus what can be done after the fact. When developing serverless applications on AWS this process is made easier by the asynchronous and polling patterns that are native to AWS Lambda. In this session I will demonstrate taking an existing translation application that is synchronous and modifying it to use asynchronous patterns. This will be accomplished using Amazon DynamoDB Streams [...]


My name is Eric Johnson and I'm a senior developer advocate at AWS. I'm glad you're here; I'm sorry I'm not, and I appreciate your patience. I'd like to give a shout-out to the GOTO Oslo team for how they approached switching this to virtual. For those who made it, I'm excited for you. For those who couldn't make it or are doing this virtually, I hope it's worth your time. Instead of coming to you live from Norway, I'm coming to you from northern Colorado, and I'm looking forward to sharing this deck on thinking asynchronously. So let's get started. First, a little bit about me. Again, my name is Eric Johnson. You can follow me on Twitter at @edjgeek. I'm a senior developer advocate for serverless at AWS, and I'm a serverless tooling and automation geek. I've been doing serverless since it was announced in 2014, and I love it. When I first saw it I thought, wow, why would anybody do anything else? I believe it's the future; I believe it's going to help people be more secure, more scalable, more reliable. All kinds of advantages here. I've been a software architect and solutions architect since 1995. I'm old. I'm an old fart; there's just no two ways around it. I'm a husband and a dad of five. That is not a typo; I do have five kids. I'm a music lover. In fact, my original degree is in music; I thought I was going to be a professional drummer. Turns out I'm a fairly average drummer, and nobody's going to pay me to do it. And I'm a pizza and Dr Pepper fanatic. If you're daring enough to follow me on Twitter, I talk a lot about serverless, but I also talk about pizza, my family, and why pineapple shouldn't be on pizza. And if I were to ask you to raise your hand, I know everybody would raise it in support of the fact that pineapple absolutely shouldn't be on pizza. But that's not what we're here to talk about. So let's get to the point.
Let's get to what we're here to talk about: serverless patterns, how to make them more reliable, and how to take advantage of managed services so they work for you. So let's talk about some common serverless patterns. There are a lot of things you can use serverless for; we're going to talk primarily about websites today, but this pattern can be applied in a lot of different places. When we're building websites, and specifically backends for websites, we use this common pattern: an API tier, a compute tier, and a storage tier (or layer, however you want to talk about it). At the API tier we handle our security and our routing. The storage tier handles our storage; it's pretty self-explanatory, right? But in the middle we have our compute, and what do we do in compute? Pretty much everything else. When I look at applications, I see people handling their authentication, handling throttling, handling caching... the compute is kind of the kitchen sink; everything gets put in there. Now, the issue is the order we do this in: API, compute, storage. If something is going to go wrong, compute is the most probable place. Let me explain what I mean by that. You say, well, why? Because it's my code. Now, you may be saying, yeah, but my application is in your code. Fair enough: it's our code. The reality of building applications today, especially with these managed services, is that when we chain managed services together, the weakest link is generally the code we put in between. I'm not knocking your development skills; I'm really not.
In a room full of 100 people, I'm probably in the top 50 percent, but I will not be the best developer in the room, in any room, and I'm a pretty good developer. We do this all day long, and we introduce bugs; we introduce scale problems. Whereas these managed services, yes, they have development underneath, but they've gone through so much testing that when we chain them together, it really limits that risk. And that's true with any other cloud provider you choose to use. So what do we do? We can't get away from code, you're right. But there are steps we can take, and this is where thinking asynchronously comes in. The idea is that we move our storage to the middle and our compute to the end, so that our API talks to our storage, and then we have compute. Again, this is called thinking asynchronously. (You'll notice that slide had a live animation. If you've ever watched any of my decks, you know I never do animations; I was very proud of that one. All my stuff builds out of slides.) So what does thinking asynchronously mean? Let's break it down. It really comes down to two things: store first, process after. So what's the advantage? As I mentioned before: one, it provides greater reliability. The data is stored before my code gets hold of it. What I'm doing is foolproofing my application: I'll store the data before I start monkeying around with it, before I start playing with it. Now, as I said before, code can kind of be the linchpin, the problem with our applications. So what I encourage, and what I try to do myself, is write less code. And this is the second advantage: when I reduce code by directly integrating my services, I don't have as much code.
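The store-first reordering can be sketched in a few lines. This is a minimal stand-in, not the actual AWS wiring: the dict and the deque are hypothetical placeholders for the storage tier (DynamoDB in the talk) and the deferred work that a stream-triggered Lambda would pick up.

```python
from collections import deque

# Hypothetical in-memory stand-ins for the storage tier and deferred work.
store = {}
work_queue = deque()

def process(data):
    return data.upper()

def handle_sync(request_id, data):
    """Synchronous pattern: compute first, store last.
    If process() raises, nothing was ever persisted."""
    result = process(data)
    store[request_id] = {"data": data, "result": result}
    return result

def handle_store_first(request_id, data):
    """Store-first pattern: persist the raw request immediately,
    defer the compute, and answer the client right away."""
    store[request_id] = {"data": data, "status": "PENDING"}
    work_queue.append(request_id)  # processed later, off the request path
    return {"status": "accepted", "id": request_id}

def drain_queue():
    """Later, a worker (a stream-triggered Lambda in the talk)
    picks up the deferred jobs and fills in the results."""
    while work_queue:
        request_id = work_queue.popleft()
        item = store[request_id]
        item["result"] = process(item["data"])
        item["status"] = "DONE"
```

The key property: in `handle_store_first`, the request is durable before any of our code touches it, so a crash in `process` can't lose the client's data.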
I don't have as many places where there might be issues. OK, so the next thing is faster response time. Let's talk about that for a minute. As developers — web developers, any kind of developers — one of the things we deal with on a daily basis is: how can we make it faster? As I said earlier, I have five kids, from seven on up, and I've watched them as they're playing their games, their Snapchat, their Twitter, their whatever. The thing about technology is: if I don't get instant gratification from your application, then your application is broken. It's done. It's broken, right? That's the truth. I can't wait 15 seconds and watch the wheel spin. We've all used services like that: "this takes forever, it must be broken." And what do we do? We repeatedly click the button — you can't see me, but I'm acting out clicking that button over and over and over — or we just give up. You know what? I'll find another app, see if it does it better. So response time is critical in our industry. Now you might say: you made me say yes, but you haven't done the compute yet. Well, you're right. But the reality of clients is that they're more willing to accept a message that says "hey, we got your data, nothing's broken, we're doing the processing and we'll get right back to you" than a spinning wheel with no communication. Is it working? Did my credit card payment go through? Am I going to get double charged? Am I going to get triple charged? And then there's panic. So communication and instant response are critical in our industry. So let's look at process after. The primary advantage of processing after is simply that we can do more. Asynchronous processing makes it easier to do more in less time — or let me rephrase that: in less apparent time to the client.
So we've already talked about getting an answer back to our client: hey, we're working on it. A lot of us build applications that have to do many things. The application we're going to look at in a minute does some translation and some audio work — transcribing text to audio — and it can do a lot of different things. If I make my client wait while each one of those happens, then I can't do as much, because it takes too long. But if I'm processing after I store my data, then I can do multiple things: thing one, thing two, thing three, thing four, thing five, and so on and so forth. So I have a lot more flexibility, and I can feed small bits of data back to the client — whether over WebSockets or by polling from the client — to keep them updated, because communication is key, while I continue to process and work on the data. All right, so maybe you're thinking: that's all well and good, but how does this work? If I don't talk to my compute, then how am I going to trigger it to do things? So let's talk about making this work. First, let's go back to our original architecture and take a step out of it. Let's talk about serverless for a minute. I'm not there to ask you to raise your hands, but I'm assuming you're either working with serverless or looking at serverless. When we talk about serverless and what serverless is, the basic idea is: something happens, we react and do something. For us, that's usually a Lambda function, where something happens — and that something can be an IoT button getting pushed, data getting dropped into DynamoDB, an API call being made, an object dropped into an S3 bucket.
There's any number of things that can trigger a Lambda; it's really limited only by your imagination. That Lambda gets triggered and it does something. So what you're telling me is: serverless is something happens, we recognize something happened, and then we do something? Yeah, that's what I'm saying. It's called an event-driven pattern. It's all about events and how events are passed around. So let's look at some of these architectures. Just to be clear, again, I am a senior developer advocate for serverless at AWS, so you're going to see AWS examples here. But I will encourage you: these patterns can be taken to other clouds, to other architectures that you might be working with. Any time you work with an event-driven architecture, this is something to look at. I'm going to show you how to do it on AWS; I think it's really slick and I hope you enjoy it. So let's look at what this looks like on AWS. The services we're going to talk about for a minute are Amazon API Gateway, Amazon DynamoDB, and AWS Lambda functions. How this is going to work: first of all, we have this idea of what we call API Gateway service integrations. What does that mean? Service integrations allow us to connect services directly together. With Amazon API Gateway, I don't have to connect to a Lambda that then talks to DynamoDB; I can talk directly to DynamoDB, and I'm not writing any code. Now, this can get debatable, because when we do this — and there are 100-plus services we can communicate with from Amazon API Gateway — I'm going to use a templating language called VTL. And you might be going: whoa, whoa, hey, that's code. And you would be right. This is something we can debate back and forth. It is code; it's a templating language.
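To make the service-integration idea concrete, here is what such a VTL request mapping template might look like for an API Gateway → DynamoDB `PutItem` integration. This is a hypothetical sketch: the table name and attribute names are illustrative, not the ones from the demo. Note that `$input.json` emits an already-quoted JSON string, so the `data` value takes no surrounding quotes.

```vtl
## Hypothetical request template: API Gateway writes straight to
## DynamoDB with no Lambda in between. Names are illustrative.
{
  "TableName": "translation-requests",
  "Item": {
    "id":   { "S": "$context.requestId" },
    "data": { "S": $input.json('$.data') }
  }
}
```

As the talk says, a template like this either works or it doesn't, which makes it easy to verify: the request lands in the table, or the integration fails loudly.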
And the kind of cool thing — it's both frustrating and cool — is that it either works or it doesn't. It's very easy to go, "boom, I blew that up, that doesn't work." So it's pretty straightforward. I'll give you an example: I'd encourage you to go to the AWS blog and search for "building a serverless URL shortener." I show a full example of actually building a URL shortener with just VTL — the Apache Velocity Template Language — on API Gateway with DynamoDB, cached with Amazon CloudFront. There are no Lambdas anywhere, and it's wicked fast. OK, so API service integrations can be very quick, and they're safe because you're storing the data first. So now I need to trigger a Lambda function. You're telling me you've stored the data in DynamoDB; how do I tell the Lambda function? Well, with DynamoDB we have a thing called DynamoDB Streams. DynamoDB Streams are a change log: as you write data to DynamoDB, it creates a stream that can trigger a Lambda. It'll say: hey, this new data was put here; hey, this was updated; hey, this was deleted; so on and so forth. And it can pass the actual data to the Lambda. So as data gets written, it's going to trigger the Lambda function, and then I have control. Now, I will tell you that DynamoDB is not the only store-first service you can use. There are others, and I'm certainly not listing them all, but Amazon Kinesis could be used; Amazon Simple Storage Service (S3), where objects dropped into a bucket can trigger a Lambda; DynamoDB, which we talked about; and also SQS or SNS. So there are a lot of different options when you look at the store-first pattern.
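A stream-reader Lambda is mostly event parsing. Here is a minimal sketch of one, using the documented DynamoDB Streams record shape (`eventName`, `dynamodb.NewImage` with DynamoDB-typed attributes); the attribute names `data` and `cultures` match the demo app and are assumptions on my part.

```python
def handle_stream(event, context=None):
    """Sketch of a Lambda fed by a DynamoDB stream: pick out newly
    inserted items and return the downstream work they imply."""
    jobs = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # ignore MODIFY / REMOVE for this sketch
        image = record["dynamodb"]["NewImage"]  # DynamoDB-typed attributes
        text = image["data"]["S"]
        cultures = [c["S"] for c in image["cultures"]["L"]]
        # one downstream request per target culture, as in the talk
        jobs.extend({"text": text, "culture": c} for c in cultures)
    return jobs
```

In the real application those jobs would be pushed on to the next stage rather than returned, but the parsing is the part that trips people up.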
All right, let's look at our application. In the detailed description of this talk I said I'm going to take a synchronous application and convert it to asynchronous, so let's see what's going to happen and what the application does. We'll start, first of all, with our data. From our client — and you'll see this beautiful client that I built in a few minutes — we pass a string in a parameter called data. That string is just some text, and it can be in any language; that's the beauty of the translation, it'll translate from one to the other. So you say: this is the text. And then you pass an array of cultures, which says: translate it to these different cultures. In this example that would be French (France), Turkish (Turkey), German (Germany), and Canadian French. So I have four cultures to translate this into: "Hello, welcome to my translation demo." Let's look at this in a synchronous pattern — a very acceptable pattern for how you would build this out. First of all, this is kind of an example of what a standard client-server application would look like in the serverless realm. The scope of this application doesn't show all of it, but just to give you an idea: we would generally host our client on AWS Amplify, which gives us CI/CD, caching, rewrites, things like that. We would do our user authentication through Amazon Cognito. Then we would use, as I mentioned, API Gateway to front our application, Lambda as our compute, and eventually DynamoDB as our store. But we have to do some work on the data first. When we push this data in, the first thing that happens is each culture is sent to Amazon Translate as a separate request.
So it's going to take the data we showed — "Hello, welcome to my translation demo" — and make a request to translate it into French, then another request to translate it into Turkish, and so on. There are four requests happening here to Amazon Translate. When Amazon Translate is done, it feeds the data back to the Lambda function, and the Lambda function takes each of those strings. So I now have five strings: the original English, plus the four translated ones — French, Turkish, German, and Canadian French. And it passes those to Amazon Polly. Amazon Polly then takes the text for each one and converts it to an MP3 file, so at the end of the day I'll have five MP3 files. Once those are passed back — they come back as a stream to the Lambda function, which buffers them — it takes those and writes them as MP3 files to an audio bucket, an S3 bucket I have titled "my audio bucket." And finally, once S3 says "yep, I've saved all four or five of them," then we write the data to DynamoDB. And when it's done, this is what our data looks like. I've got my original — and I'm sorry, the English won't be translated; I don't actually translate the original, just the translations — so I've got my four audio translation links, and then I've got my four translation texts. And then down here on the bottom I have my S3 bucket, which shows the MP3 files stored as well. Now let's evaluate this for a minute. Looking at this, like I said, this does work: on the happy path, where nothing breaks, this works fine. But what's the reality of applications? The reality is that something's going to break at some point. At Amazon we build our services to be resilient; there's no doubt about it.
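The synchronous flow just described can be sketched as one function that does everything while the client waits. The four callables below are stand-ins for the AWS clients (Translate, Polly, S3, DynamoDB) and are assumptions, not real SDK signatures — the point is the shape of the pipeline, where the response can only be returned after every step succeeds.

```python
def synchronous_pipeline(text, cultures, translate, polly, s3, table):
    """Sketch of the talk's synchronous flow: translate each culture,
    synthesize audio, write the MP3s, then write the record. Every
    step happens inside one invocation while the client waits."""
    translations = {c: translate(text, c) for c in cultures}  # Amazon Translate
    audio_keys = {}
    for culture, translated in translations.items():
        audio = polly(translated)              # Amazon Polly -> MP3 bytes
        key = f"audio/{culture}.mp3"
        s3[key] = audio                        # write to the audio bucket
        audio_keys[culture] = key
    # only after everything above succeeds does the record get written
    table["item"] = {"text": text,
                     "translations": translations,
                     "audio": audio_keys}
    return table["item"]                       # ...and only now can we respond
```

Notice the failure mode the talk calls out: if the S3 write raises midway, the translations and audio already produced are lost, because nothing was persisted first.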
But we're wise enough to understand that sometimes things will happen. So not only do we want to build in resiliency, we want to build in redundancy and the ability to degrade gracefully, and we pass that ability on to our clients as well. So take, for example: what if my translations go through — I say, OK, translate all four of these, and I get them back fine. Then I say, OK, convert them all to audio files, and I get those back. So now in my Lambda I've got all my audio files and all my text translations. Then I go to save it all to S3, and something in my code tanks. It doesn't work, and my retry fails with the same error. Here's what's happened: I've got the translations, I've got the audio files, I couldn't save them, everything tanked, and I don't have the work done for my client. Not only do I not have the translations available to write to DynamoDB, I have to go back to my client and say: hey, you know that job you requested? I didn't get it done, and I lost the original request, so you'll have to give that to me again. It makes me look silly; we're losing credibility with our clients. So we want to build our application so that doesn't happen. Now that we've talked through that, let's look at a demo and see what it looks like. All right, let's go over to my application. Here's the truth: I built this design myself. I'm not going to lie, I started as a designer — and you're probably asking yourself, oh my goodness, how could you possibly be a designer? You're right. It's a very simple design, but it gets the idea across. I built some of these out earlier, so you can see some different things here: "Welcome to GOTO Oslo," and so on. I'm on the synchronous version, not the asynchronous version, and you can see the different translations of what I had here.
And you can play these. Again, if you saw that link — and I'll show it again — you can actually play these yourself. When I'm in the room with you, I can play them for you, but not in this case. So let's go ahead and make a new one. We had some talks announced; I've done one already, so let's do one for this one here, in Lagos, Nigeria. I'm going to copy its description, and we're going to translate this whole thing into every available language — select all — and hit translate. OK, I've got these three little dots. Something's working, that's great. It's working, right? Or... maybe it's not working. Maybe it's broken? OK, there it is. That wasn't too bad — a little slow, and maybe I was a little nervous. I can click in here, and here is everything: I've got the translations, everything's going good, I've got the files, and I can play those. So that wasn't bad, but I had to wait about five seconds or so. And you may be thinking: well, OK, five seconds, that's not a big deal. But that was a pretty short piece of text. Let's go back to the text for a second; look how short it is. What if I was doing chapters of a book? I would get pretty upset. Translate a chapter of a book into Arabic, Mandarin, and Danish — it's going to time out. I could set my timeout up to 15 minutes on the Lambda function, but that's pretty costly if every time my Lambda fires it runs for 15 minutes. There's got to be a better way to do that. And there is — or I wouldn't be up here talking. So let's go back to our demo. Now, give me just a minute — this is, in fact, built on that URL shortener that I told you about. If you go to the short link on the slide, /translate, you can play around with this application.
I do delete translations every once in a while, or I'd be paying for a bunch of storage, but feel free to play around with it and see how it works. All right, let's take a look at that same application, but in an asynchronous manner. What will that look like? Let's go back and start again. Same thing: we're hosting our client on AWS Amplify, and we're doing authentication through Cognito. But this time, instead of going through a Lambda, we're going to use that service integration: we're going to write directly to Amazon DynamoDB. So the raw request is written to DynamoDB. Then what happens? Remember, I talked about the streams. DynamoDB is going to output a stream, and I have one Lambda function whose entire job is just to read the stream. He takes those stream records and says: OK, you've got a translation request, you've got an original, it's got four cultures — I'll break those out into requests and push them into Amazon EventBridge. Now, Amazon EventBridge is a service we haven't discussed yet. It's an event bus, and it can handle massive amounts of events flowing through it — trillions of events cruising through it — and then you write rules to trigger Lambdas. In this case, I'm pushing four requests: I push the text and say I want these translated into these four different cultures. So what's going to happen is I'm going to write a rule that says: hey, any time you see a translation request, trigger this Lambda function. That Lambda function will then make the request against Translate and write the data back to DynamoDB. OK, so it's just those rules. Now remember, if something goes to DynamoDB up there, it's kind of a circular pattern: it goes into the stream, the stream reader reads it and says, OK, these have been translated.
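The "break it out into requests and push to EventBridge" step amounts to building one event per culture in the shape that `put_events` expects. A minimal sketch, where the bus name, `Source`, and `DetailType` are illustrative assumptions:

```python
import json

def build_translation_events(text, cultures, bus_name="translate-bus"):
    """Sketch: turn one stored request into one EventBridge event per
    target culture, in the entry shape used by PutEvents."""
    return [
        {
            "EventBusName": bus_name,
            "Source": "demo.translate",
            "DetailType": "translation-request",
            "Detail": json.dumps({"text": text, "culture": culture}),
        }
        for culture in cultures
    ]
```

An EventBridge rule matching these events would then use a pattern like `{"source": ["demo.translate"], "detail-type": ["translation-request"]}` (names assumed, matching the sketch) to trigger the translate Lambda.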
So the next step is the audio transcribing. The stream reader is going to push audio requests into the event bus, and I'm writing another rule that says: hey, when you get an audio request, trigger this Lambda, and then send that data to Amazon Polly. Amazon Polly will take text in multiple languages and convert it to audio files, and what I want are MP3s. But the cool thing is, I could say to Polly: I'm going to sit here and wait for the buffer, like in the previous example. But instead — because I don't want to spin my Lambda; there's really no reason to — I say: how about when you're done, you just drop that audio file into a bucket for me? And Polly says: I can do that. So all we do is quickly send the text. You're talking millisecond request times here on these Lambdas — very small footprints; they're up and gone. Amazon Polly churns on those, creates the MP3 files, and drops them into a bucket. All right, this is great. So what happens if I did the translates, then tried to do the audio, and it crashed like in the earlier one? Do I lose everything? No, because my original data is still in DynamoDB. So I can write in some logic to test for that; we'll talk about that in a few minutes. OK, so once the audio files are dropped into the bucket, that triggers, per audio file, another Lambda — I'm calling it my finalizer — and it takes the data and writes the final record back to DynamoDB. So now I've got four Lambdas, and you may be asking: wait a minute, four is more expensive than one. Well, not necessarily. Remember, the other one had to run for five seconds or more — a pretty good amount of time, about five thousand milliseconds. Whereas the stream reader?
That's a couple of milliseconds, maybe — let's be generous and say 50 milliseconds for each one. Translate? That's quick: boom, boom, boom, boom, send these in, get them back; Translate happens very fast. The audio step? All I do is send the text and I don't wait for a response — very fast. And the finalizer is just taking some data and writing it to DynamoDB. So you've actually reduced the amount of time your Lambdas run. And when you're billed for Lambda, you are billed per invocation — the number of invocations — but you're also billed for the duration and the memory used. So you're hugely reducing that by going this route. OK, let's look at an example of what we just did. I go back to the home page, and let's grab the same talk, so we're comparing apples to apples. We're going to switch to our asynchronous API — all this does is literally change the API that the application hits, from my synchronous architecture to my asynchronous one — and we'll create a new one and paste that data in once again. Now, last time I clicked this, we waited five seconds or more. So let's give it a shot and see what happens. And — oh! Oh! That was really fast. Now look here: I've got all my translations, and if I didn't, they would load in. But I tell my customer: hey, your audio is rendering and it will be here shortly. They don't mind seeing that. Now, if it sat there all day, that would be a problem, but they're willing to give this some time. "Audio rendering" — I get that; that might take a little bit. And all I'm doing — I could do this through WebSockets, but all I'm doing — is polling from the application, and there they popped in; you can see that it works. So there's all my data. And yes, the overall amount of time was actually probably just a pinch longer, because it ran through more of the system, but the amount of time my customers waited was much less.
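The billing argument is easy to check with back-of-the-envelope arithmetic using the talk's rough durations. The per-GB-second price below is the published on-demand x86 Lambda rate at the time of writing; treat both numbers as illustrative.

```python
# One long synchronous Lambda vs. four short asynchronous ones.
GB_SECOND_PRICE = 0.0000166667  # illustrative on-demand $/GB-second
MEMORY_GB = 0.128               # a 128 MB function

def duration_cost(duration_ms, invocations=1):
    """Duration component of the Lambda bill: time x memory x price."""
    return duration_ms / 1000 * MEMORY_GB * GB_SECOND_PRICE * invocations

sync_cost = duration_cost(5000)                  # one ~5,000 ms invocation
async_cost = duration_cost(50, invocations=4)    # four ~50 ms invocations

# Four short functions bill for 200 ms total vs. 5,000 ms:
# a 25x reduction in billed duration, despite 4x the invocations.
assert async_cost < sync_cost
```

There is also a small per-request charge per invocation, but at these durations the duration component dominates.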
They could peruse the translations, and then — oh, OK, the audio files loaded in, and they can listen to them. So I didn't stop my customer from working; they didn't have to sit there and watch. And more importantly, I've instilled confidence that this thing isn't broken. So there you go. Let's go back to our demo... there it is, the short link, /translate, if you want to play around with it. Please do, and I look forward to your feedback. So the last thing here — let's move on: what if we want to add even more reliability? You might say: Eric, you've built a very reliable system. Well, thank you, I appreciate that, but there's still more we can do. So let's go back to our application and evaluate it. We're looking at what we've built so far, and when we look at it, we see we have four different Lambdas right now. The thing about these is that each handles a different piece of code, but they also each handle error handling, and those error handlers probably look somewhat similar. In my case, what I do is write back to DynamoDB all the same data except for the resulting computed data I needed, and I put in an error — I actually flag it as an error — so later I can look for it and see, OK, what happened. So here I've got four pieces of code and four pieces of error handling, and those four pieces of error handling are probably pretty close to each other. So what if I could roll those off to another Lambda? Instead of handling the error code in each one — and if you've never heard of this, the idea is called DRY: don't repeat yourself — if I'm doing the same error handling in all four of these Lambdas, why not centralize it and make another Lambda to handle it? Well, we have this really cool thing we announced at last re:Invent called Lambda Destinations.
And the way this works is: I can run a function, and if it's successful, I can send its result to EventBridge, Lambda, SNS, or SQS — or if it fails, I can send data to those same destinations. I can set the maximum event age and the number of retries on my Lambda, and I can say: you know what, only retry once; on failure, send it somewhere else. OK, so in this example I'm going to use a Lambda — really, you can do this all kinds of different ways — and for the moment, just for example, I'm only going to worry about errors. So what's going to happen is: if my stream router fails, it sends its data to my error function and says, hey, I failed; you deal with it. Same with my translate, my audio, and my finalizer functions. And the errors Lambda says: OK, I can deal with that. So let's put this back into our architecture. What the error function does is write to DynamoDB and say: hey, this ID tried this action and this was the result — I'll save the error data, something like that. So now I've got that data saved, and I can say: all right, I want to evaluate that. And this is powerful, because you can introduce some self-healing, some application-healing logic. What if I introduced an error-correction Lambda? I can have a Lambda that reads DynamoDB on a cron basis. I could say: hey, every 10 minutes, look for any errors and retry them. Keep a count on that, and if it fails a second time, notify someone — let's see what's wrong with it — or write some logic to figure out how you want to approach it. So it can check that, do a read on the database — a scan, which is not something we want to do all the time, but you can do a quick scan.
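The shared error-handler Lambda is mostly a matter of unpacking the Destinations invocation record and flagging the failure. This sketch follows the documented `OnFailure` record shape (`requestContext`, `requestPayload`, `responsePayload`); the `table` list is a hypothetical stand-in for the DynamoDB write.

```python
def handle_failure(event, table):
    """Sketch of the centralized error handler wired up as an OnFailure
    Lambda destination: record which function failed, on what input,
    and flag it so the error-correction job can find it later."""
    ctx = event["requestContext"]
    record = {
        "source_function": ctx["functionArn"],
        "condition": ctx["condition"],             # e.g. "RetriesExhausted"
        "attempts": ctx["approximateInvokeCount"],
        "original_request": event["requestPayload"],
        "error": event["responsePayload"],         # the error the function returned
        "status": "ERROR",                         # flag for later lookup
    }
    table.append(record)  # stand-in for the DynamoDB write
    return record
```

Because the `requestPayload` travels with the failure record, the original request is never lost — which is exactly the credibility problem the synchronous version had.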
Or there's some database magic you can do here, where you use sparse indexes — things like that — to pull those records out. So there's a lot of power in this, where you can say: look for errors, and let's do something about them.

OK, so hopefully you're seeing a pattern building up here. First, we store the data — DynamoDB, Kinesis, whatever; in this case we store the data in DynamoDB. Then, with our stream Lambda, we analyze requests and insert them into an event bus. And then — this is where the pattern really starts — we create rules to trigger a specific Lambda function to handle the job: Translate, Polly, something else like that. Then we write the data back to DynamoDB, and if there's an error, we send it to the error Lambda function.

OK. So we take this and we say: we've got this much, but we want to build this application bigger — we want to do more. Right? So let's extend it via the pattern we just looked at. Let's say our customer says: we want to understand the sentiment of the content being translated. So if someone puts in a big old long paragraph, we want to know if they're happy, we want to know if they're mad at us, we want to know if they're sad, or whatever. OK. We have this wonderful service called Amazon Comprehend that will analyze text and say, hey, this person is mad at you — do something. Or, hey, they love you — whatever. OK. So, applying our pattern, let's see how we can add this service into our application. Well, first of all, we know the request is going to go into DynamoDB and then into the stream.
We're just going to insert that into EventBridge. At EventBridge, we create a new rule that says, hey, every time you see a sentiment request — or we might just do it on the original request, however we want to do it — trigger the sentiment Lambda, which will then talk to Amazon Comprehend, get the data back, and write it to DynamoDB — or, if it fails, write to the error handler. OK? So I could continue to add more services to do more things with this: just write a rule, trigger a Lambda, talk to the service. All data goes back to DynamoDB; all errors go to the error-handling function. That's an awesome amount of power, but done very quickly.

OK, so the end result: this is what our application looks like. Don't let all the lines scare you — I'm trying to show a lot of different things, and hopefully I've explained it properly. But the reality is we're keeping it very simple. Each of these Lambdas does one thing, and it does that one thing really well. They don't even do error handling anymore. The lines of code are probably in the neighborhood of maybe twenty lines or fewer for each Lambda, and then my error handler does some logic. So it's very small bits of code, each with a single responsibility, and then I'm tying in the services. Serverless — if you haven't figured it out — is very much: yes, we have code in Lambda, but that's just part of it. There's so much you can do by simply using the managed services and chaining those managed services together to accomplish the goal you're trying to accomplish. So thinking asynchronously is huge.

So let's wrap up and talk about this. Like I said, think asynchronously — that's the first thing. Where can I push off processing so my client's not waiting for it? Right? Put storage first; that's going to help you a lot with that, when possible.
I'm not someone sitting up here saying, hey, every time you build applications, storage first. But I would tell you that you'll find it fits a lot more than you think. OK. And finally, use less code: chain these services together, let them work for you. Integrations in API Gateway are great glue for that. There are some others you can use, but API Gateway integrating with backend services directly is extremely powerful.

So, if you want to get started with this — if you've never used serverless — hit aws.amazon.com/serverless. It's a great place to jump into it. I would also encourage you, if you get the chance, to please remember to rate the session; we want to know how we can do better. I know that this is a little different from most GOTOs, and if you're like me, you absolutely love GOTO conferences, so not being there in person was a bummer. But I appreciate what they've done here: they kept it virtual and they kept it going.

So again, my name is Eric Johnson. You can reach me on Twitter at @edjgeek. I'm going to try to be in the Q&A — so hopefully we'll be doing Q&A shortly — but if I'm not able to be there, if something happens, then hit me up on Twitter; I'm glad to answer questions. And I hope you have a great day. If you're on the cruise, man, safe travels — I hope you stay healthy, I hope you're having a blast. Have a Diet Dr Pepper for me. If you're not on the cruise, stay safe, stay healthy, have a great day. Blessings. We'll see you later.