Video details

Modern backend with TypeScript, PostgreSQL and Prisma - Part 4: Deployment


The goal of the live-stream series is to explore different patterns and architectures for modern backends with Prisma. Prisma Client is an autogenerated type-safe database client for Node.js (ORM replacement).
The series will focus on the role of the database in every aspect of backend development including data modeling, the API layer, validation, authentication, authorization, and deployment.
In this fourth stream, we will set up continuous integration and continuous deployment with GitHub Actions, and deploy the backend to Heroku.


- And we're live. Welcome to those of you joining us just now. Welcome to the fourth and final part of the modern backend series. In this series, we're building a backend with TypeScript, PostgreSQL, and Prisma. And in this part, the fourth part, we will implement continuous integration and continuous deployment with GitHub Actions. And what we'll do is we'll configure GitHub Actions to run the tests and run the TypeScript compiler, and then deploy the backend to Heroku. And so, before we get into the details of deployment, it's probably worth going a little bit over the database schema that we've designed, around which everything was essentially built. So here, I'm gonna share my screen and I'll also share in the comments a link to the diagram if you're not able to view it so well. So a quick reminder, we had this idea of a user, and a user can be associated with different courses. As a quick reminder, this is an online grading system. So the idea is that you have courses, and each one of those courses can have multiple students or teachers, and each course can have multiple tests. And each of those tests can have a test result. Now, as we see in the diagram, the test results are also connected to the teacher who graded them with this graderId relation scalar field, and then to the student, obviously. And in the previous part, we added a new database migration; we added this token table in order to store these tokens for authentication purposes. And so we added these two token types and we implemented what we call email-based passwordless authentication. And so today, once we deploy everything, it will be a good chance to take a look at how that actually works, how this authentication process without a password works. Great, so you've seen this database model. I also want to point out that throughout this live stream, feel free to use the comments on YouTube. Use the chat on YouTube, I can see it all coming in.
If you have questions, if you have doubts, or if you just wanna share some tips, I'm looking forward to hearing from you, and I'll be checking the comments every once in a while. Okay, so in the last episode we implemented authentication and authorization, and I can show maybe a bit of the code base for this. A lot of this was built around the token table that we created. And what we did is we used hapi authentication strategies to essentially define a new strategy using JWT. And we implemented these two routes. One was the login route. This is the first step, where you pass in your email. And then what happens is an email is sent to your email account with a special token. And then you take that token and you pass it, along with the email, to the authenticate endpoint, which will give you a long-lived JWT token, which will give you access to all of the endpoints of the API. And then we used what we call pre-functions, and we defined them in this auth helpers module here. And these were how we defined what users are able to do on the different endpoints. And so all of this is obviously available on GitHub, so you can go check it out. And let's get started with today's topic, which is continuous integration, continuous deployment, and so on. And so, we will be using GitHub Actions. And GitHub Actions is basically a new service, it's been around for a little while, but it's a service that GitHub introduced and it allows you to do a lot of automation work. You can do continuous integration and continuous deployment, and it's probably worth clarifying a bit what continuous integration is for those of you who might be less familiar with it, but really it's quite simple. It's a technique that is used to integrate the work from individual developers into a main code repository. And so, the main idea behind it is that you wanna catch integration bugs early and accelerate collaborative development.
And so, typically the CI server, the continuous integration server or service that you're using, would be connected to your Git repository. And then every time that you push a commit to the repository, the CI server will run. And usually there'll be different configuration syntaxes for this; with GitHub Actions, you have this idea of a workflow. So a workflow is the main configuration file. And here I'm opening it up, I'm gonna make it a bit bigger so it's legible. You give it a name, and here you tell it on which events you want this workflow to run. And I should mention that workflows are offered in different CI services; they're sometimes called pipelines, but it essentially means the same thing. So we have this idea of a workflow, and a workflow can have multiple jobs. In this case, I have two jobs. I have a test job and a deploy job. And the idea is that once the tests run, we want the deploy job to run, but only on the condition that the tests have passed. And so, this is all configurable and we will get to that in a second. And then it's probably worth talking a little bit about continuous deployment. So continuous deployment is often spoken about in the same sentence as continuous integration. And that's because usually the same tools can provide you with the two functionalities, that is, integration and deployment. And so, the idea behind continuous deployment is to automate the deployment process so that changes can be deployed rapidly and consistently. Because it's automated, you sort of program how you want your application deployed, and you have this consistent process for deploying it. And so that encourages a more iterative approach where you constantly deliver small features. So let's take a look at the pipeline here. If you remember, we have... I can open up the package.json here. And we had this test script, which used Jest as the test runner.
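To make that structure concrete, a two-job workflow file of the kind shown on screen looks roughly like this (the file path, names, and comments are illustrative sketches, not necessarily the exact contents of the repository's workflow):

```yaml
# .github/workflows/main.yml
name: CI/CD
on: push                  # run the workflow on every push

jobs:
  test:
    runs-on: ubuntu-latest
    # ...service containers and test steps go here...
  deploy:
    runs-on: ubuntu-latest
    needs: test           # only run if the test job succeeded
    # ...deploy steps go here...
```

The `needs: test` line is what expresses the "deploy only if the tests passed" condition he describes.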
And I can open up the terminal here and just run npm run test. And what this will do is it will actually start running all of the tests. Now, it's nice that we do it locally, but generally we want those tests to run just before we deploy, to make sure that those changes are really correctly tested. And so here, as we see, all of the tests passed and we had no errors. In fact, when we define the pipeline, the way that the pipeline, sorry, the workflow will know whether to continue with the process, as we'll see, depends on whether the tests succeed. So we had this test script, and we also had this build script, and the build script just runs the TypeScript compiler. So that will ensure that our application, the backend essentially, is type safe and that we're not making any type errors. So on that note, I'll have a look. Thank you, Muhammad, for the comment. We also had a question whether we can use MongoDB. So currently no, Prisma doesn't support MongoDB. However, there is an issue for that in the Prisma GitHub repository. And if you are interested in that, I would suggest that you go to it and add a thumbs up. Okay, let's look at the workflow that we're gonna be using. And so, when we run the tests, we are actually accessing the database. Because if you remember, I'll open up the source code here, let's see. Let's open up, maybe, the users test. And so the way the users test works is, it uses this server.inject function. What the server.inject function does is it allows you to inject a synthetic request without actually having to run the HTTP server. And so, this is really useful. But when we call this, for example, here we call this profile endpoint that is defined... I can open up the users plugin. And so, here we have that and there is a handler for that. And that handler makes use of Prisma Client here.
And so, all of these tests are actually considered to be integration tests, because they test the full functionality of the routes, or in other words, the endpoints. And so, for the purposes of testing, even when we're running our tests using GitHub Actions, we want to have a test database for the duration of the continuous integration run. And so, this is what we're gonna configure here. And so, as I mentioned before, we have this workflow, the workflow has a name, and this workflow runs on every push. If you're interested in learning all of the different things that you can do with GitHub Actions, there's quite a lot, and they have very nice documentation, too. I often use the workflow syntax reference. I'll also share the URL for that in the comments. And so, this is really useful if you're trying to figure out what kind of different events you wanna trigger on. For example, you could configure it to run every time you create a pull request, but for now, we'll just start with push. And so, the workflow has these two jobs. This is the test job, and here we define in what environment it runs. In some situations you might wanna run this on macOS or something else. And here, we have this services section. So services allow us to create these temporary services that we might need for the duration of our tests. In this case, we need Postgres. And so, here we just give it a label, and this image is actually a Docker Hub image. So what will happen is that GitHub Actions will actually download the Postgres Docker image and run that as part of this run. And to this Postgres Docker image we pass a username and a password. Now, because this database is only going to be running for maybe a couple of minutes, for the duration of the tests, the username and the password don't matter so much.
And here, we have a couple of extra options that I think were recommended by the GitHub documentation; they ensure that Postgres is running and ready to accept requests when it starts executing the different steps of the job. And so that was our services section. And here we do a port mapping. This is the same port that we're going to actually connect to from the tests. And so all of these lines that I'm sort of highlighting here are associated with this Postgres service. And so now that we have this Postgres, we need to pass an environment variable that is accessible to the job so that it knows how to connect to this Postgres service container. And so here I have this database URL. You might remember from the Prisma schema that Prisma Client knows to connect to the Postgres database using this DATABASE_URL environment variable. And so this is why we set that here. And as you can see, the configured username and password for this temporary Postgres are the same ones that are used here, in the environment variable that is passed to the tests. And so here we get into the interesting part. So the interesting part of a job in a GitHub Actions workflow is the steps section. The steps section essentially defines an array in YAML, and each of these steps will run sequentially. And if one of the steps fails, the whole build will fail. And so the first thing that happens is that it uses this GitHub Action that is already predefined, called checkout, which will actually get the source code of the repository. And then the second one will configure Node, and you can pass different options. If you're interested, you can also learn more about these GitHub Actions, like checkout. Let's see. There we go. So each of these is really well documented and there are examples for the different possible things that you might be interested in.
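Put together, the service container and the environment variable he's describing look something like this in the test job (the image tag and the throwaway credentials are my own placeholders, not the exact values from the stream):

```yaml
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12              # pulled from Docker Hub by the runner
        env:
          POSTGRES_USER: prisma         # throwaway credentials: this database
          POSTGRES_PASSWORD: prisma     # only lives for the duration of the job
        ports:
          - 5432:5432                   # map the container port to the runner
        options: >-                     # wait until Postgres accepts connections
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    env:
      # Prisma Client reads this; it must match the service credentials above
      DATABASE_URL: postgresql://prisma:prisma@localhost:5432/prisma
```

The health-check options are the "extra options" he mentions: the job's steps only start once `pg_isready` reports the database is accepting connections.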
And all of these, I think, can be found under this GitHub Actions organization. So I'll share the URL for that too. And yeah, this is the setup-node action that we're running as the second step. And that will basically configure the Node environment so that we can do things like use npm and install dependencies and whatnot. And so, here we use npm ci. npm ci is very similar to install, but it does a clean install from the lockfile, so npm ci is usually what's used in CI to install dependencies. And then we start actually running the scripts that we've defined in the package.json. So the first one is npm run build, which runs the TypeScript compiler. And this is really useful because before we even start running migrations and running these integration tests, we wanna be sure that we haven't misused any of the types. And the checks that catch the easy bugs are usually the best ones to run first. So we run this npm run build here. And then, we call this npm run migrate up, and this uses Prisma Migrate, and it calls this up command. And what this up command will do is it'll connect to this service database, the Postgres database, and run all of the migrations, so that when the integration tests run, the database schema is already up to date with the Prisma schema that we've defined. And so, once that's done, we can actually start running these tests. Then we have the deploy job. I'll close, for now, the test job that we've gone over. And the deploy job also runs on Ubuntu. And it has this "if" condition. This "if" condition makes sure that in case you push a branch, it doesn't run this job for a branch other than master. So this is what these two conditions are for, just to ensure that only master is deployed. And here, I'm using this needs in order to make sure that tests pass for this to run. If the tests fail, then the deploy job won't run.
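The sequence of steps he just walked through would appear in the workflow roughly like this (the action versions, Node version, and npm script names are assumptions based on the stream, so check them against the repository):

```yaml
    steps:
      - uses: actions/checkout@v2     # fetch the repository source
      - uses: actions/setup-node@v1   # configure the Node environment
        with:
          node-version: '12'
      - run: npm ci                   # clean, lockfile-based install
      - run: npm run build            # TypeScript compiler: cheap checks first
      - run: npm run migrate:up       # bring the service DB schema up to date
      - run: npm run test             # the integration tests
```

Because steps run sequentially and any non-zero exit fails the job, ordering the cheap type check before migrations and tests gives the fastest possible feedback.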
And so here again, we do the checkout, we install the dependencies. Each of these jobs actually runs in isolation, and so that's why we have to install the dependencies twice. Theoretically, it's probably possible to have a single job and install the dependencies once. However, I think it's a bit nicer when it's organized into two separate jobs that have a condition. And in fact, caching is heavily leveraged by GitHub Actions, so that something like installing the dependencies is a relatively quick process. Now, here, there is something different. After we install the dependencies, we call npm run migrate up. However, I've given this step a name, because this actually runs the production migration. So whereas in the test job we ran the migration for the test database that was launched for the duration of the job, before we deploy the application, we wanna make sure that all the migrations have been run against the production database. And so, to do that, we need access to the production database URL. Because it's considered bad practice to keep these secrets inside the repository, what we do is we leverage this secrets functionality that GitHub provides. And so, if I open up the repository, I can go here into the settings, and in the settings, I have the secrets option. We will get to the rest of these, but I already have here a DATABASE_URL defined. And that is actually the database that will be hosted on Heroku. So we will get to that in a second. And similar to how we use the secret here, we're also using this Heroku deploy action, which allows us to deploy from GitHub Actions directly to Heroku. So in a second, once we finish going over this, we will actually create the Heroku application and figure out what all of these secrets are. But the main thing that we need here is an API key.
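So the deploy job combines the branch condition, the `needs` dependency, and the secrets roughly like this (`akhileshns/heroku-deploy` is a popular community action for this; whether it's the exact action and version used in the stream is an assumption, as are the secret names):

```yaml
  deploy:
    runs-on: ubuntu-latest
    needs: test                               # only after the test job passes
    if: github.ref == 'refs/heads/master'     # only deploy pushes to master
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - name: Run production migrations
        run: npm run migrate:up
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}   # set under Settings > Secrets
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{ secrets.HEROKU_API_KEY }}
          heroku_app_name: ${{ secrets.HEROKU_APP_NAME }}
          heroku_email: ${{ secrets.HEROKU_EMAIL }}
```

Note that the production `DATABASE_URL` is scoped to the migration step only; the deploy action itself authenticates with the API key.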
This is so that the GitHub Action can authenticate against Heroku, and this is the app name. So Heroku has this notion of an app, and an app is basically, as you can probably guess, the application, and within an app you can have an associated database. And then the email, the email address that you use for your Heroku account. So we have a comment here from Ben Schwartz. Thank you for mentioning this. So npm ci just makes sure it is a clean install; it deletes node_modules first. I wanna say thanks for that, Ben. Okay, so this was our deploy job, and we looked at some of these secrets, and now we will go ahead and actually create the Heroku application. There are two possible approaches to this. One is using the Heroku UI, and the other is using the Heroku CLI. In this step, I will actually use the Heroku CLI, which I find to be very intuitive and easy to work with. And so, the first thing that you typically do with the Heroku CLI is heroku login. In this case, I probably don't need to log in because I'm already authenticated. Let's see if I can figure out whether... Let's see. Oh, no, this is for the status. Let's see. Apps. Okay, so I am authenticated, but this is what the process would look like if I weren't. So it would open up my browser and then I would log in, and that's it. So now that we've seen how the login works, we can actually create the app. Now, I'm going to use this command and I'm going to create this app. So heroku apps:create. This -t is for if you're part of a team; otherwise, if you omit the -t option, it will just create the app on your personal account. And here, we give the app a name, and it's been created. And now, what we want to do is we also want to add a database to the application. So Heroku offers Postgres as part of their hosting services.
And so we will actually create a Postgres database. They have a free tier, which I think allows up to 10,000 rows, and that should suffice for now. To do that, we use this addons:create command and tell it heroku-postgresql and hobby-dev. I should also mention that everything that I'm doing during today's live stream will also be available in textual form once we publish the article that is associated with this. So worry not if you're missing something, or if you skip something, this will also be available in the article. And so now, I need to update the name of the application that I created, so that was prisma-grading-app. And, great. So the database has been created and is available. And what Heroku does is it automatically assigns this DATABASE_URL environment variable to the Node.js process that will be running the backend. And so, there is not much additional work that needs to be done. Okay, so now we have this database, we have the workflow defined, we have the app. Now we just need to set the secrets in GitHub. And so, I will go back to the editor here. And so, we need to set this database URL, this API key, the app name, and the Heroku email. So I will go to GitHub, and because I'm gonna be messing with secrets, I will remove that from the screen for a moment. heroku config, actually, this would be the command to get the database URL. So I'm gonna just move that away so that I don't expose any sensitive information, and I'm gonna update that in GitHub now. And then, there's also the Heroku API key. This is something that you get from your account settings on Heroku. And so, there's something here at the bottom, and if I click reveal, I'll be able to view it. So I'm gonna copy that into the secrets interface. Quick reminder, this is what it looks like. Update the API key. Make sure that the email is correct. And update the app name to the one that was used, which was prisma-grading-app. Okay, great.
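Collected in one place, the CLI steps from this part look like this (the app name is the one from the stream and the team name is a placeholder; substitute your own):

```shell
heroku login                                      # opens the browser to authenticate
heroku apps:create prisma-grading-app -t my-team  # omit -t to use your personal account
heroku addons:create heroku-postgresql:hobby-dev -a prisma-grading-app
heroku config -a prisma-grading-app               # shows the DATABASE_URL Heroku set
```

The last command is the one he uses to read the provisioned `DATABASE_URL` before pasting it into GitHub's secrets interface.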
So now, I have these four secrets defined in my GitHub repository. And what I'm gonna do is, I am going to trigger a build. So before I trigger the build, let's think for a moment about what environment variables we need in production. And so I'll have a look at... If you remember, inside the auth plugin we were relying on this JWT secret to be defined. And in the email module, we relied on this SendGrid API key. So this SendGrid API key is what you get from SendGrid when you sign up. This is something that we did in the previous part, in part three, when we implemented a lot of the authentication stuff; because we were sending emails, part of that was creating the SendGrid account. And so, we need to make sure that those are set, and the way that we do that is, again, Heroku provides a really nice command for that. And that is heroku config:set. And you can pass multiple ones. And so, I will actually move that away from the screen and set those. That will be just one moment. Let's see. Oh, yeah. There is a nice way to generate it pretty easily. And, okay. And, I just configured that. And so we have the SendGrid API key and the JWT secret. Great. So those have been successfully set. And now, now we can actually trigger a run. And so usually, in order to trigger a build, you can go to the UI and sort of take one that is existing and re-trigger it. But what I'm gonna do is I'm gonna actually create an empty commit to trigger the build. And so, git commit --allow-empty. And once I push that, we should see the workflow showing up here. And so, here we have this new one that just started now. And if I click on it, you'll see that currently there's only one queued job. However, the second job should show up once this one completes, because, as you remember, we set this in the workflow file as a dependency, using this needs parameter. So let's have a look here at what this looks like for this job.
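He does the secret generation off-screen; one way to do it is sketched below (using openssl is my assumption, and so is the exact variable naming; any source of ~32 random bytes works):

```shell
# Generate a strong random JWT secret: 32 random bytes, hex-encoded -> 64 characters
JWT_SECRET=$(openssl rand -hex 32)
echo "Generated a ${#JWT_SECRET}-character secret"

# Then set both production variables on the app in one command
# (variable names as used in the code, app name from earlier in the stream):
#   heroku config:set JWT_SECRET="$JWT_SECRET" SENDGRID_API_KEY="<your key>" -a prisma-grading-app
```

Setting config vars this way triggers a restart of the dyno, which is why he sets them before triggering the first deploy.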
So first it sets up the job. You can sort of unfold each one of these to see what's happening. So I can imagine that this actually starts the Postgres and so on. And okay, so there we go. That took 10 seconds to install the dependencies. The TypeScript compiler didn't fail. Then here, what it did is it ran the migrations, and we should see that at the end. So all of these were executed. Done with four migrations. And after this step... Oh, and there we go, deploy is already showing up, and then we ran the tests. We get a bit of warnings here because some of these secrets aren't set; I should probably configure the tests to be a bit less noisy, or disable the logging while running tests. But as we can see, if the tests were to fail, the job would obviously also fail, and this relies on the exit code. So the exit code is a Unix principle where every command that you run has an exit code, and that can be an erroneous one. So one, or anything that isn't zero, would be considered an error. And so, this is how, generally speaking, CI systems detect when a job fails. So, okay, now we have this deploy job. And it ran the production migration, which is great. This ran it against the database that we created inside Heroku. And so if I open up my account here on Heroku, I'll open up the project, the prisma-grading-app. And you see, we have this Heroku Postgres, that is an add-on. Okay, so right now we only have four rows and it's available, and that's all nice. And also here, we see that it's been deployed just now. So that means that probably this deploy to Heroku step was successful. And so, this is the step where it actually deploys, and you can see the build process, because what happens is that Heroku then builds the project too, in order to prepare it for running, and then, okay, it's running. And so that was successful. And now I can open up the app.
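The exit-code convention he mentions can be seen directly in the shell; `$?` holds the status of the most recent command:

```shell
# Every command finishes with an exit status: 0 means success,
# any non-zero value means failure.
true
status_ok=$?                 # true always exits with 0
status_fail=0
false || status_fail=$?      # the || guard captures the failing status (1)
                             # without aborting the script under `set -e`
echo "true exited with ${status_ok}, false exited with ${status_fail}"
```

A CI step such as `npm test` fails the job exactly when the command it runs exits non-zero; the runner checks nothing else.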
Now, I should see this up: true, and there it is. If you remember, we declared this status endpoint in this plugin, which is just a simple endpoint that serves on the root path and returns this up: true. So this is a really good indication of whether your application is running. Now, there are a couple of things that are interesting once you deploy an application, because it's not like you deploy an application and forget about it. There are obviously things that you care about, such as logs of the application or monitoring the application. So let's start by looking at some of the logs. There are two ways to do this, via the UI or using the CLI. And so via the UI, you can do that here by clicking on More and View logs. And if I make a call to this endpoint again, we should see here another request coming in. So that's that. And if we were to look at this in the terminal, I think it's heroku logs --tail, and there we have it. These are the same logs that we're seeing here. And now, I wanna showcase the login. We wanna make sure, obviously, that the connection to the database works and that the login process, the passwordless authentication process that we introduced, works. And so, for that purpose, I'm going to use a client, an HTTP client similar to curl, called HTTPie. I find it to have a slightly nicer interface in comparison to curl. It's really nifty. So I'll share a link to that also in the comments, if you're interested in some of the syntax. And so, I wanna log in. And first of all, I need to know a bit more about the URL of the app. So I could copy it from the browser, or I can just use heroku info. And here I have the URL. And so, I'm gonna make an HTTP POST call to the login endpoint. And here, I should pass it an email. And so, I'm gonna just pass my email, and hopefully that works well. Oh, and we got a 200.
In fact, I can split my terminal here and open up the logs. And so here, we have that POST request to the login endpoint, and I should have received an email now. So I'll check my email and see if it arrived correctly. Okay, now. And I got a token. So let me just show this, it's a bit slow. Too many tabs probably open. And let me see if there are any questions coming in in the comments. Okay. All right, now I'll pop that out so that you can view it. And so, this is the email that I've received, and it just contains this eight digit token. So I'm gonna copy that token and I can close this. And then I can make a second call to the other endpoint, the authenticate endpoint, and pass it the email token as well. And what I expect to get back is a JWT token via the authorization header. So I can make that a bit bigger and I'm gonna make the call. And, oh, great. So I got this JWT token, I'm gonna copy it. And it might be interesting to demonstrate what's in that token. So I pasted it here. And really all it contains is this token ID. That token ID references a row in the tokens table of our database. And so now I can actually start making requests with that token to different endpoints. For example, let's say I wanted to create a course in this API. So I would make an HTTP POST and I'm gonna just use that URL that we have here. I'm gonna remove these POST parameters and I'm gonna call the courses endpoint. So POST /courses is the endpoint that is used in order to create a course. And in order to pass it the authorization header, HTTPie uses this colon notation. And so here, I pass the authorization header, and then I'm gonna give it a name. And this is like a normal JSON parameter that goes into the payload. So I'm just gonna call it, say, modern backend, and I'm gonna have to give it also this course details: How to build a backend with TypeScript, Prisma, and PostgreSQL. And that should suffice.
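The two-step passwordless flow he just ran can be written out like this with HTTPie (the app URL, route names, and JSON field names are my assumptions from the stream, so check them against the repository):

```shell
# Step 1: request a login token; the API emails an 8-digit token to this address.
http POST https://prisma-grading-app.herokuapp.com/login \
    email=you@example.com

# Step 2: exchange the email plus the token from the email for a long-lived JWT.
# The JWT comes back in the Authorization response header.
http POST https://prisma-grading-app.herokuapp.com/authenticate \
    email=you@example.com emailToken=12345678
```

With HTTPie, `field=value` arguments are sent as a JSON body by default, which is why no explicit Content-Type is needed here.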
Now, if I send that, I should... Oh, great, so I got a 201, and I got this course ID. So this is probably the first one, because we just provisioned this database. And so now, we might want to create an associated test. And for that, we're gonna use a different endpoint. Let's see. So instead of just the courses, we wanna call courses, three, and then tests. This is the endpoint that allows us to create tests associated with course ID three, and that's what this three is here. Oh, I'm sorry, that was one. So that's the course ID, and then tests. And then, I'm using the same authorization token, which is, I believe, valid for about 12 hours. Yes, and then here, we need to pass the name for the test. So, first test. And we also need to pass a date, and so I prepared here a little ISO date string. We also don't need this course details field. And then, let's put the date in the POST body just for the fun of it. Yeah. And send it off, and there we go. Now, I could also just make a normal GET request to get the different courses. Oh, wait, I made a typo there. And so here we get this course that we created, and the test. And in fact, if I go and try to create another test for it, that maybe happens in October, and name it the second test, then we have the second test. And if I call the courses again, I will see the course and the different tests that we created. And then, we see that this one is the 14th of October and this one was the 14th of September. And so, if I remove the authorization token, I am getting a 401, which is how we expect it to behave. So that was a bit of the API that we deployed. Now, let's take a look at some of the functionality that is offered by Heroku. So Heroku has some really nice metrics that you can get about how your application is running. And, oh, I'm sorry. And this can tell us about different events, and these are events like deployments and configuration changes.
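The authenticated calls from this part look roughly like the sketch below; note HTTPie's syntax, where `Header:value` sets a request header while `field=value` goes into the JSON body (endpoint paths, field names, and whether the token needs a `Bearer` prefix are assumptions based on the stream):

```shell
TOKEN="<the JWT from the authenticate step>"
BASE=https://prisma-grading-app.herokuapp.com

# Create a course; expect 201 Created with the new course's ID
http POST "$BASE/courses" Authorization:"$TOKEN" \
    name="Modern Backend" \
    courseDetails="How to build a backend with TypeScript, Prisma, and PostgreSQL"

# Create a test for course 1, with an ISO 8601 date in the body
http POST "$BASE/courses/1/tests" Authorization:"$TOKEN" \
    name="First test" date="2020-09-14T10:00:00.000Z"

# List courses with their tests; without the Authorization header this returns 401
http GET "$BASE/courses" Authorization:"$TOKEN"
```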
I mean, for example, if you wanna change your JWT secret or your SendGrid API token, or your database URL for that matter, you would need your application to restart. And so, this is what this events graph shows. And we have the memory usage of the application. We also have the response time. And what's really nice here is that we get the 99th percentile, the 95th percentile, and the 50th percentile. Using percentiles is the common way to measure response times in APIs. And that is because there's usually some tail of requests that are really slow, to the extent that they can throw off a lot of the averages. And so, the reason for using percentiles is that they provide a better overview. When we look at, say, the 99th percentile, we are essentially ignoring the slowest requests that are coming in. So that's a really nice way to check out the performance. And then we also see here the throughput of the application, which is measured in requests per second. So if I were to run, say, a load test against this API, we would probably see that spike. Besides that, it's probably worth mentioning that there are Node language metrics, which are currently in beta on Heroku. And those allow you to see a bit more information about how the garbage collection of the Node.js process that is running the backend is operating. So if I enable that, it should probably trigger a restart. Let's see. Let's see. Oh yeah, we need to redeploy to get those. And so, we can trigger another build. I can also maybe do that from here: re-run jobs, which will trigger another deployment. And indeed, we're getting that. And... Now there are a couple more things that I'd like to mention about the backend. So how did we do logging? What I did was, in the server module, I introduced this hapi-pino, which is a logging library.
And so hapi provides a logging interface through the server, and you will see in many places you can call server.log. And in the request handlers, the route handlers, you can access the same logging interface. And so this logger sort of aggregates all of these logging calls that you make throughout the application. And this also ensures that, when it logs the request coming in, the authorization header is removed. And obviously it only pretty-prints when not in production. Yes. Oh, I think this can also be removed because we already have the pino logger. And so while this is running, let's see how the deploy is doing. Okay, that looks like it succeeded. Now, there's another thing that I wanted to show, and that is Prisma Studio, which allows us to explore the data. And so, I've just configured my environment variable here to connect to the production database. And now what I can do is run npx prisma studio. Oh, I misspelled that, studio. So Prisma Studio is part of the Prisma 2.7.0 release that just went out yesterday; Prisma Studio is now stable. And so with this, I can actually explore the data in the database. For example, what we're looking at here is the users table, and we can easily access all of the associated relations. For example, this user is a teacher of the course that we created, through this enrollment. And there are two tokens here. One was an API token, that is the long-lived token; if you remember, the token ID in the JWT token was pointing to this. And then we also have this email token, and that was the actual email token that was sent to my email. And that has a short validity, oh, yes. Sorry, I'll make this bigger. Thank you, Ayosh, just zoomed in.
So this was the email token that was sent to my email and then verified again in the second step to get the JWT. This one had a very short expiry, so it has already expired, I think. Yeah, I believe that was five minutes ago, and it has something like a 10-minute duration. The reason for that is that if you send an email token and never invalidate it, then you have this problem: if someone's email account gets hacked, those emails can be used to log into the API. We don't want that, we wanna protect the API. That's why the email token is only valid for 10 minutes: it should only be used in the process of generating the long-lived token. That way, there is nothing sensitive sitting in your email forever. So that was Prisma Studio, and obviously you can look here at the different tables. In fact, you can also add data here. For example, I can add another course on pottery: "Pottery course. Learn how to make ceramics." And I might want to actually add... oh, I can do that in a second. So first we save that. And then here, I'm gonna enroll my user in this course. Which course was it? I can pick it through this. So we do that, save one change, and there we go. Okay, and who is this user? We can just click here and then we can see. This is really useful when you wanna see all of these users interacting with your API and get an overview of everything that's going on. It's a bit like some of those other database browsers, but this one is obviously integrated with Prisma, and it allows you to really interact with the data, edit it, and create relations and whatnot. So we can open up the tests in a new tab. Let's see: tests. And here we can add a record, "first test", set the date to 2020, maybe the first, and then associate that with the pottery course.
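The short-expiry check for email tokens described above can be sketched like this. The 10-minute window matches what's said in the stream, but the field names, the type shape, and the function itself are illustrative assumptions, not the project's exact code.

```typescript
// Illustrative sketch: email tokens expire quickly, long-lived API
// tokens do not (in this simplified version).
const EMAIL_TOKEN_EXPIRATION_MINUTES = 10;

interface Token {
  createdAt: Date;
  type: "EMAIL" | "API";
}

function isTokenExpired(token: Token, now: Date = new Date()): boolean {
  if (token.type === "API") return false;
  const ageMs = now.getTime() - token.createdAt.getTime();
  return ageMs > EMAIL_TOKEN_EXPIRATION_MINUTES * 60 * 1000;
}

const issued = new Date("2020-09-15T10:00:00Z");
console.log(
  isTokenExpired({ createdAt: issued, type: "EMAIL" }, new Date("2020-09-15T10:05:00Z"))
); // → false: still inside the 10-minute window
console.log(
  isTokenExpired({ createdAt: issued, type: "EMAIL" }, new Date("2020-09-15T10:15:01Z"))
); // → true: a stale token in an inbox is no longer usable
```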
I mean, it would be a bit strange for a pottery course to have a test, but why not? And there we go, it's been created with course ID two, which is the pottery course. So now if I make that same HTTP call that I made to get the courses, we should be able to see that also via the API. And in fact, we do see it, along with the associated test. Okay. So what have we covered so far? We looked at GitHub workflows, and we talked a bit about continuous integration and continuous deployment. We set up the Heroku account, we looked at the GitHub Actions workflow, and we configured the two jobs, each of which had a couple of steps. We also used secrets to store the sensitive information the GitHub Actions jobs need, like the Heroku API token, and also the production database URL so that the migrations can be run by GitHub Actions before deployment. Then we created the Heroku application and added the database. We also set the environment variables in Heroku; well, I sort of hid that from you so that you don't see the secrets. We triggered a build, watched the workflow run, and verified the deployment by actually accessing the URL. We looked at some of the metrics that Heroku provides, and now that we've redeployed, we should be able to see the additional metrics. Let's see, maybe I need to reload this. Okay, that's enabled. Interesting. Well, I guess this is in beta, so it's slightly buggy, but it was working before. Perhaps another deployment would help. Oh yes, I think that because there wasn't an actual commit, it didn't deploy. So we could do a git commit, push that, and that should deploy it. And the last thing we looked at is the logs. Now, I wanna get to the bonus. So in the beginning I showed this database diagram, built with dbdiagram.io, this one.
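To tie the recap above together, a workflow with two jobs, secrets, and migrations before deploy might look roughly like this. This is a hedged sketch: the job names, step order, deploy mechanism, and the `<app-name>` placeholder are assumptions, not the stream's actual workflow file (the `prisma migrate up --experimental` command reflects the 2020-era Prisma Migrate CLI).

```yaml
# .github/workflows/main.yml -- illustrative sketch only
name: CI/CD
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      # Run the test suite and the TypeScript compiler check.
      - run: npm test
      - run: npx tsc --noEmit

  deploy:
    needs: test
    if: github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      # Run migrations against the production database before deploying,
      # using the DATABASE_URL stored in the repository secrets.
      - run: npx prisma migrate up --experimental
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
      # Deploy to Heroku using the API token stored in secrets.
      - run: git push https://heroku:${{ secrets.HEROKU_API_KEY }}@git.heroku.com/<app-name>.git HEAD:master
```

The key structural point from the recap: the deploy job `needs` the test job, so a failing test or compile error blocks deployment, and the secrets never appear in the workflow file itself.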
And this is something that was generated automatically for me, so I didn't really need to do much. The way it works is that it uses an open source project by one of our community contributors, named Mark. It looks at your Prisma schema and generates DBML, a format which allows you to visualize the database. So I can actually show the branch here and the changes that I made. I'll check out this branch. And if I look at the changes, this is the main change: I added this dependency, the Prisma DBML generator. Besides that, once you add the dependency, you also need to open up your Prisma schema and, let me open that up in VS Code, add this generator, and that was it. So it really was just package.json, adding the prisma-dbml-generator dependency, and then in the Prisma schema, adding this generator. Then, when I run prisma generate, which generates Prisma Client, it will also generate this DBML, and that's what we have here. If we open that up, we have this DBML file, and it was automatically generated; the source of truth is still the Prisma schema. And then that was essentially pasted into dbdiagram.io. So, for example, if you go to dbdiagram.io and, let's create a new diagram so it's nice and clean, you paste this in, hit auto-arrange, and there you have it: a visual representation of your Prisma schema. So this is really, really nice. And this isn't the only one; there's also another contribution that works similarly but generates a different format.
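The generator addition described above amounts to one extra block in the Prisma schema. A minimal sketch, assuming the community package is `prisma-dbml-generator` and the generator name is free to choose:

```prisma
// schema.prisma -- the existing client generator stays as-is.
generator client {
  provider = "prisma-client-js"
}

// Added for the diagram: with this in place, `npx prisma generate`
// also emits a .dbml file that can be pasted into dbdiagram.io.
generator dbml {
  provider = "prisma-dbml-generator"
}
```

Since the generator runs as part of `prisma generate`, the DBML output stays in sync with the schema automatically, which is why the Prisma schema remains the single source of truth.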
I just thought this would be really interesting to share, because I personally find it very useful when prototyping or implementing new features to have a visual representation of what the data model looks like, especially once it's already in place and you wanna adapt the application to introduce new features. And so that was it. We're about an hour in, and I'll take a couple of moments now for questions. So if you have any questions related to Prisma, deployment, or any of the things that we did today, now is the time. Is there anything else? Okay, well, in that case, thank you everyone who joined, and thank you to those who participated today in the chat. I should mention that tomorrow Prisma will be hosting another live stream, What's New in Prisma 2.7.0, and that will be hosted by the very dear Nikolas Burk and Ryan Chenkie. I will share a link in a moment so you can already add that to your reminders; and here's the link. So thank you so much, and have a nice rest of the day or evening, depending on where you are. And until next time.