Video details

Leveraging Serverless in Full-stack Development | Eric Johnson

Serverless
09.30.2020
English

Eric Johnson

This presentation was recorded at GOTO Chicago 2020. #GOTOcon #GOTOchgo http://gotochgo.com
Eric Johnson - Highly entertaining serverless fanatic with brilliant insights; Senior Developer Advocate at AWS Serverless
ABSTRACT Using serverless reduces time spent managing infrastructure and provides developers more time to focus on code. In this session I will cover tooling, frameworks, and architectural patterns focused on building a web application from front to back. Along the way we will discuss pitfalls and best practices to help you get a jump start on developing without servers [...]
TIMECODES 00:00 Intro 01:39 Application architecture journey 07:51 Hello serverless! 13:56 Serverless in full-stack development? 18:24 Putting it all together
Download slides and read the full abstract here: https://gotochgo.com/2020/sessions/1399/leveraging-serverless-in-full-stack-development
https://twitter.com/GOTOcon https://www.linkedin.com/company/goto- https://www.facebook.com/GOTOConferences #Serverless #AWS #FullStackDevelopment #Programming
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket at http://gotocon.com
SUBSCRIBE TO OUR CHANNEL - new videos posted almost daily. https://www.youtube.com/user/GotoConferences/?sub_confirmation=1

Transcript

Hi, my name is Eric Johnson, I'm a senior developer advocate for serverless at AWS, and I'm glad to see you here virtually at GOTO Chicago. Today we're going to be covering leveraging serverless in full-stack development, plus a little bit about who I am. Let's get started. All right, so who am I? Well, I'm @edjgeek on Twitter, so if you're looking for answers about serverless, or looking to find out more about serverless, I'm glad to help you there. I'm a husband and father of five. That's not a typo, yes, I have five kids. It's a crazy world, quarantined in and going nuts; in fact, you might hear them in the background while I'm recording this. I'm a senior developer advocate for serverless at AWS, which I already mentioned, and I'm truly an automation nerd, really an automation nerd in general, and I like to do a lot of things there. I'm a software architect, been doing that for better than twenty-five years. I'm a music lover, a pizza lover (without pineapple, of course), and a pusher of serverless for everyone. You can see my shirt here: serverless for everyone. It is a big thing for me. So we're going to jump in and get started, talking about serverless applications, or where serverless can plug in, in full-stack development. First of all, let's talk about the application architecture journey. Hopefully this is a room full of developers, or a virtual room full of developers. You've been developing, and you probably started somewhere along the line here, building in your own house or with a company, something like that. But most applications, most developers, start out with a single computer. On it you're going to have maybe a proxy or a firewall. You might run NGINX or Apache or some type of server, Apache Tomcat, things like that. You'll have your code on here; it could be Node, .NET, Python, Ruby, any number of things.
And you probably run your database on here too: SQL Server, MySQL, Postgres, Mongo, all kinds of different options. But we all kind of start in the same place, running that service on a server in our mom's closet, something like that. OK, so then we think: you know what? We can break these applications into their own tiers, and we come up with this tiered architecture, where we might move our proxy or firewall off, and we have our server and maybe our database in a separate place. We might even get crazy and split our client out as well, so we have our proxy, and then a client and a backend as different tiers. And that's probably where you stay for a while; these servers are up in your office, you might have a computer room, you might have your own data center. But when we start building like this, as applications scale, we have to build in redundancy. We think: OK, we're building this, and all of a sudden we're in production, we're getting a lot of clients, and we can't afford downtime. There are some sites that can. I have a blog; I run it, and my mom and maybe my brother read it. If it goes down, nobody's going to lose their mind. But a lot of us work for companies where, if your site does go down, that's a lot of money lost. So we have to build in redundancy. What does that look like? Well, we might have a proxy or firewall in front, and we might have two here. I'm just showing what it might look like: active-active, passive-passive, active-passive, some kind of combination. Then we're going to introduce a client load balancer, so we're going to run our clients, through a load balancer, across several different servers. And then we need a load balancer on our backend as well, so our client can talk to that; maybe that's a load balancer that points to an API.
And then we have our backend, and of course our backend may point to a database, and our database is going to be redundant as well. There are all kinds of different patterns, but you get the general idea: to build in redundancy, we need more machines. Now, this is great for us, because business is going well; I'm excited about this. We've got a lot of business, we've got a lot of traffic, so we need this. But then we get more, and we get more, and we get more. Here's what that looks like when we look at the more redundant tiered architecture. We get to the point of saying: two servers isn't going to cut it. I need five. I need 15. I need 30 servers. And I've got to balance that out across multiple backend servers and client servers and different things like that. It becomes kind of crazy, and really, it's more redundant, but it's also more infrastructure, more management, more cost. Now, I know you're thinking to yourself: hold on, hold on, what about containers? OK, fair enough. So we had this idea of the proxy and firewall, and let's say we're running our client, our backend, and our database all in the same place, or maybe we split it out a little bit. The reality is it's still hardware, hardware you have to manage. So we build the more redundant containers architecture. What does that look like? Again, this may not match your setup exactly; it encompasses a lot of different things that can happen. But we have the proxy, we have a load balancer balancing things out, then we have our client and our backend, and then we roll our database off onto separate servers, maybe even some containers, but definitely rolled off into its own tier. The reality is, even with containers, you have more redundancy, more infrastructure, more management, more cost. So, OK: we're going to move to the cloud, and that's a good idea.
Hopefully you're not running architectures like this in your own offices, because the reality of doing that is, when we need a new machine, a new server, we sketch it out, we plan it out, then we order it, we provision it, we patch it, we build it, and we do all the things, and then it's out of date, so we've got to patch it again, and then we're finally ready to load stuff onto it. OK, so you say: you know what, we're going to move to the cloud. Let's look at what that looks like in the cloud. Now, this isn't containers; this is, again, a virtual machine, an instance setup. And the cloud is great: we have the ability to build in some different architectures, where we can have instances in multiple Availability Zones, we can have auto scaling, we have application load balancers that are managed services, we have managed services for databases. There's a lot of power in the cloud. I've been a solutions architect for a long time, and I've helped a lot of companies understand what embracing the cloud looks like. What about containers in the cloud? Same thing. You have the ability to use a managed service like ECS, which is Amazon's Elastic Container Service; you could pull that out and pop in EKS, which is Amazon's Elastic Kubernetes Service, or even Fargate, and that can grow and add some capability. But the thing about these container services is there's still hardware, still infrastructure to manage. The truth is, moving to the cloud offers sophisticated automation and tooling to help manage scaling and redundancy. But there's still infrastructure to manage and to pay for. So that gets us to what I'm here to talk about today. This is a story that a lot of engineers go through: how do we support this architecture, and how do we build this out?
Well, let me tell you, and let me welcome you to serverless. Howdy, serverless! All right, so as we jump in here, let me explain, because we may have some new folks. I'm going to tell you straight up: everything that's going to be talked about today is AWS-based serverless, and the reality of serverless is that it's managed services. I don't apologize for that; it's a good thing, AWS has been doing it a long time, and we're one of the frontrunners. Some of the technologies I'm talking about, you can find equivalents with other providers, like Azure or Google, but it really is somewhat vendor-specific. And yes, you may hear the words "vendor lock-in." I don't know if I'd go that far, but these are the specific kinds of services that we work with, so you're going to see some of that. Here's how we approach serverless and what we believe about it. The truth is, if you had a hundred people in a room, and maybe there are a hundred here, or a thousand or two, and you asked them to define serverless, you'd probably get a hundred different definitions, maybe a hundred and seven, because people always change their minds, right? "Well, I like what they said." OK, but here's how we're going to approach serverless, so we all work from the same definition. There are four core tenets to how we answer the question: what is serverless? Now, I don't know if my hands or my face will be up on the screen here, but I'm holding my hands up. There are some rules when I talk: I may be holding up one finger, but I mean whatever I say, so you have to bear with me. These are the rules when Eric Johnson is talking, because sometimes the hands don't match what the lips are saying. There are more rules, and you can catch any one of my other videos to see those.
But I encourage you to just have some fun. So what is serverless? The first tenet is there's no infrastructure provisioning and no management. You don't have to spin up anything; if you do, that's not serverless. Our goal is for you to have no infrastructure. The second is flexible scaling: the infrastructure that you're not managing, the system supporting you, has the ability to grow as you need it, and to go away when you don't. The third is pay for value, and we really see that as pay for what you use, and only for what you use. When we talk about how serverless is billed, it is for exactly what you use: per invoke, per request, things like that. It's not per machine sitting out there seven days a week, 24 hours a day, 30 days a month. If you're not using it, you're not paying for it. And the final tenet is highly available and secure. As a cloud provider, we have really mastered the idea of multiple Regions and multiple Availability Zones, and our serverless infrastructure follows that same structure: we build in highly available architecture to support your system. We also build in security, and while security may be the last thing I'm mentioning here, it's the first thing that we do. When a Lambda comes up, or any of our serverless architectures, they come with security. We use identity and access management (IAM) heavily to get very, very granular security in your serverless applications. All right, so let's break down what serverless is and get an understanding. It's a pretty complicated idea, so stay with me. I'm going to define serverless. Here it is: something happens, we recognize something happened, and we do something. There you go, that's serverless. Something happens, we recognize something happened, and we do something.
So let's get a little more technical than that, because you're like: well, boy, I need more. All right, let's start with the "something happens." The idea is that serverless is event-driven. Everything is event-driven: you're not necessarily polling, you're not necessarily looking, you're reacting when something happens. So you have an event source. This could be a change in data state, it could be a request to an endpoint, it could be a change in resource state, it could be a click of a button, like IoT. It could be any number of things. If you have an Alexa in your house, you're using serverless: when you talk to Alexa, that voice triggers something and sends information to make the change. You say, "Alexa, turn the lights on," and there's serverless behind doing that. So when something happens, what gets triggered? Well, generally, and there are some other patterns that we can get into maybe at another time, it triggers a Lambda. A Lambda function is the compute part of serverless, and you can run all kinds of different languages: Node, Python, Java, you can see them here. But that last one, you're like, well, that's not really a language. No, the last one is a runtime API, and that allows you to do custom languages, so you can say, hey, I want Rust, or COBOL, or something like that, and you can build a runtime for it. We have lots of examples out there that our customers are building. So Lambda gets fired off, it runs your code, and then it does something. It could put data into a database, it can turn a light on across the world, it can start a car, it can do all kinds of different things; it's however you want to do it, with whatever services are available to you. Some of the most common things we see with this are web apps, backends, data processing, chatbots, Alexa, as I mentioned earlier, and IT automation.
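To make the "something happens, we do something" idea concrete, here is a minimal sketch of a Lambda handler in Python. The handler signature (`event`, `context`) is the standard Lambda contract; the event shape itself is a made-up example rather than any specific AWS event source.

```python
def handler(event, context):
    """Minimal AWS Lambda handler: the 'do something' step.

    `event` carries whatever the event source sends (a data change,
    an API request, an IoT button click...); `context` carries
    runtime metadata. This particular event shape is hypothetical.
    """
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}


# Locally, you can exercise the handler with a fake event:
print(handler({"name": "GOTO Chicago"}, None))
```

In the cloud, the Lambda service calls `handler` for you each time the event source fires; there is no server process to start or keep running.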
There are all kinds of examples where you can use serverless in the world, but as I said, we're going to be talking about full-stack development, web apps and backends, so we'll spend most of our time there. So let's get to it: serverless in full-stack development. Let's go back to the architecture that we had, and talk about how to move it to serverless. I'm going to assume that your client is separated from your backend, so you've got a client and server separation. But if you don't, don't worry, I'll cover that too. OK, so moving to serverless. The first thing we're going to do is break out our client. Instead of running it on a server, or multiple servers, and running the load balancing and all that kind of stuff, we're going to move it into Amazon Simple Storage Service, also known as Amazon S3. You can see here that it has hosting for HTTP and HTTPS, and you can put your domain name right on top of that. You may not have known that you can host a website right in a bucket, and what you get is that huge scalability that comes with S3, and that ninety-nine point nine nine nine, whatever it is, eleven nines of data durability. So there's a lot of strength there. But one of the cool things is we now have a newer service called Amplify Console, which actually has S3 behind it. It also uses Lambda@Edge, which is edge-based compute, to do a lot of things for you: we can do rewrites, we can do global availability through a CDN, we can do basic password protection, all kinds of things like that. And it automatically gives you CI/CD, continuous integration and continuous delivery. So think about that: we can pop the client right out and drop it into Amplify Console. All right.
So the next thing: we've still got the proxy and firewall, and it now points directly at our backend. Now we want to move that backend. We're going to get rid of the firewall, and instead of using servers to handle our proxy and our API and things like that, we're going to use Amazon API Gateway to be our endpoint; it's our API entry point. So now I've gotten rid of all my load balancers, I've gotten rid of firewalls, I've gotten rid of my proxy servers, things like that. All I've got is API Gateway in front of my servers. OK, so now let's get rid of the backend servers. I can take all those services running on all those servers and put them in a Lambda, or a group of Lambdas. Now, in a microservices world, we don't encourage you to take thousands of lines of code and shove them into one Lambda, but you can break them down. The nice thing about this is it's going to scale as needed; we'll talk about that in a minute. Finally, for the database: I'm not going to run my database on servers anymore, I'm going to move to a managed service. And we have all kinds of options for this: Aurora, Amazon DocumentDB, which has MongoDB compatibility, DynamoDB. There are all kinds of different databases that we can use, and we won't cover them all in this talk, but know that whether I'm running SQL, whether I'm running document, whether I'm running NoSQL, or time series, we have all kinds of options for a database as a managed service, where you don't have to control the infrastructure underneath to support it. The main ones that we use with websites are generally Aurora, which offers PostgreSQL and MySQL on a very scalable platform, DocumentDB, and DynamoDB, which is my personal favorite; that's the one I use all the time. Love it. It's fast, very robust.
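As a sketch of the API Gateway plus Lambda step he describes, the function below uses the Lambda proxy integration response format (`statusCode`, `headers`, `body`). The route and payload are invented for illustration; a real handler would talk to one of the managed databases.

```python
import json


def orders_handler(event, context):
    """Hypothetical Lambda sitting behind API Gateway (proxy integration).

    API Gateway passes the HTTP request in `event`; the function must
    return a dict with statusCode/headers/body for the HTTP response.
    """
    method = event.get("httpMethod", "GET")
    if method == "GET":
        body = {"orders": []}  # would normally query the managed database
        status = 200
    else:
        body = {"error": "method not allowed"}
        status = 405
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

With this pattern, API Gateway replaces the proxy, firewall, and load balancers: it terminates HTTPS, routes the request, and invokes the function on demand.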
And Amazon RDS is great if you're running relational database engines. So when I move to this, again, we separate these out. Aurora: there are different things you can do here. With Aurora Serverless, the scalability is built in; if you need more, it adds more. You do not maintain the fleet; all you maintain is the data. Then Amazon RDS: engine-native applications, so Microsoft SQL Server, MySQL, PostgreSQL, MariaDB, Oracle, and so on. Amazon DynamoDB, for me, is proven by a little thing we like to call Black Friday: DynamoDB is a big one at Amazon in handling the massive, massive load that we have there. And finally, Amazon DocumentDB, if you need that MongoDB compatibility. All right, so let's put it all together and see what it looks like in an architecture. For general hosting of a basic website, we're going to have, as I mentioned, API Gateway in the front. We're going to have a Lambda function, or multiple Lambda functions, that handle our code, our compute. And the nice thing is that HTML can be rendered and returned by the Lambda function. Remember, I said: if your client isn't separate from your backend, that's OK, you can render HTML from a Lambda. I don't suggest it, and there's nothing wrong with serving that way; it's just that a good pattern is to separate your client from your backend. But there's still a lot being done with rendering, you know, Razor or Pug or some of the templating engines, rendering from the backend; whatever you need, you can do a lot of that from Lambda as well. Now, if you're using RDS: you could use one or the other or a mixture of both, depending on whether you need relational versus NoSQL, something like that. One of the cool services we have is this thing called Amazon RDS Proxy, because Lambdas are very ephemeral: they come up, they go away.
Managing RDS database connections from something that ephemeral can be a little complex. So instead, all you have to do is point your driver, your Postgres driver or whatever, at an Amazon RDS Proxy endpoint, and it will maintain the connections for you. Very powerful; it works with relational databases and came out in the last six months or so. OK, so here's the cool thing: because these are all serverless, every one of these is going to scale automatically as needed to handle client requests. API Gateway is going to scale, Lambda functions will scale, DynamoDB will scale, and so on and so forth. So you have massive scalability without having to maintain all the infrastructure to do it. Your client application is going to be hosted on Amplify Console, like I was talking about before, and that's again hosted in a serverless, scalable way. And then for authentication, you can use Amazon Cognito, which offers user management and authentication. Now remember what I said earlier: each Lambda function can have a single responsibility, so we can use more than one Lambda to build architectures. We might say: for /users/{userId}, go to this Lambda; for orders, go to this Lambda; and so on and so forth. So you have a lot of ability to break those down. I'll also share some resources in a bit, with a lot of different videos and content on how to build architectures very granularly, with microservices. Finally, we talked about putting data into DynamoDB or another database; with some of these databases, you can also do things after the fact: asynchronous post-processing via streams and event buses. That allows you to have very loosely decoupled backend architectures. And that's how we build architectures now: microservices that are
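To sketch the asynchronous post-processing idea: a Lambda subscribed to a DynamoDB stream receives batches of change records after a write has already succeeded, decoupled from the request path. The record layout below follows the general DynamoDB Streams event shape; the processing itself (collecting new order IDs) is invented for illustration.

```python
def stream_handler(event, context):
    """Hypothetical post-processing Lambda on a DynamoDB stream.

    Runs after the fact, off the request path: the write already
    happened, and this function reacts to the recorded change.
    """
    processed = []
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            # DynamoDB stream attributes are typed, e.g. {"S": "value"}
            order_id = new_image["orderId"]["S"]
            processed.append(order_id)  # e.g. send email, update metrics
    return {"processed": processed}


# A fake stream batch for local experimentation:
sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"orderId": {"S": "order-123"}}}},
        {"eventName": "REMOVE", "dynamodb": {}},
    ]
}
```

Because the API Lambda and this stream Lambda never call each other directly, either one can fail, scale, or be redeployed without breaking the other, which is the loose coupling he describes next.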
loosely coupled, or decoupled entirely, so that they're not brittle and they degrade gracefully. All right, so how do I get started? Well, I've got to say this: start with a framework. What that means is: don't try to handle Lambdas all by hand. You can if you want to; you're welcome to build all your IAM and all that kind of stuff on your own. But I encourage you to start with a framework, and there are some great ones out there. My favorite is AWS SAM, which is the Serverless Application Model. It comes in two parts, and this always freaks people out: I'm going to split the squirrel in half. This is Sam the squirrel; he is our mascot. And these are the two parts. We have the SAM template, which is infrastructure as code; it's a subset of CloudFormation, if you're not familiar with that, and it allows you to design a template that will actually bring up the infrastructure as needed. And then there's the SAM CLI, which runs locally on your machine and provides tooling for local development, debugging, building, packaging, and so forth. You can see the link there to learn more about that. I also have a series that I do on Thursdays called Sessions with SAM, a livestream where I talk through a lot of this, and I'll have a link to it shortly. There's also AWS Amplify, which is designed really for frontend developers, although you can get into the backend as well. You can work with different types of APIs: a GraphQL API through AppSync, or you can use API Gateway. Very, very powerful; it works hand in hand with Angular, Vue, or React, or direct JavaScript. Incredibly powerful, and there's a link there. There are also third-party frameworks; Serverless.com is one, and a great partner of ours as well. These are other ways to build out serverless architectures. So I would just say again: start with a framework. All right, a couple of resources before we wrap up here.
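As a rough illustration of the SAM template he mentions, a minimal one might look like the following. The resource name and handler path are invented, and the details follow the general `AWS::Serverless::Function` shape rather than any specific project:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # marks this as a SAM template

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function     # SAM expands this into CloudFormation
    Properties:
      Handler: app.handler              # hypothetical module.function
      Runtime: python3.8
      Events:
        HelloApi:
          Type: Api                     # implicitly creates an API Gateway route
          Properties:
            Path: /hello
            Method: get
```

With the SAM CLI, `sam build` and `sam deploy` package and deploy a template like this, and `sam local invoke` runs the function on your machine, which is the local tooling he refers to.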
A few resources. First, I'm going to give you our serverless page; there's a shortened link on the slide, and there's also the QR code. This is a good place to read about what serverless is, what it means, why you'd use serverless, and to go more in depth than what I've talked about here today. The next one I'm going to give you is our AWS Serverless YouTube channel. This is new; we just started this. There's a shortened link and a QR code as well, and it shows how to build a lot of things with serverless. And that shortened link you keep seeing, you're like, what is that? Well, that's a URL shortener that I built entirely serverless, and I talk about how I built it in a way that's massively fast, massively scalable, and pretty darn cheap, too. So you can take a look at that; I cover it under a series called Happy Little APIs, which you can see in the picture there. And with that, I'm going to say thanks again. My name is Eric Johnson. I heavily encourage you to follow me on Twitter at @edjgeek, and I hope you have a great day. Thank you.