Day 3 Keynote | Minko Gechev | ng-conf: Hardwired


ng-conf is a three-day Angular conference focused on delivering the highest quality training in the Angular JavaScript framework. 1500+ developers from across the globe converge on Salt Lake City, UT every year to attend talks and workshops by the Angular team and community experts.

Hello, everybody. My name is Minko Gechev. Today, I want to share with you the current state of Angular deployment, server-side rendering, and pre-rendering. I'm going to put a couple of updates about what we've done over the past couple of months into the context of the evolution of the web, and I'm going to mix everything together with my personal story of how I got introduced to web development. I'm pretty excited; I'm sharing this constantly. First, let's go back to 1991, when the first website ever was deployed to the internet. It looked something like this. We had this fascinating infrastructure: a distributed network of web servers that were serving static HTML files linked together through hyperlinks. We were just serving these static files, but if one of these web servers went down, everything else kept working, because we had this fault tolerance. At the same time, serving static files wasn't satisfying the requirements of the business. That's how developers came up with the idea for the Common Gateway Interface. CGI is just a way for the web server to pass the user's request to an external program that can fetch some data from a database or read a file and, right after that, return some rendered content to the web server, which the web server is able to forward to the browser. This way we were able to build much more dynamic experiences: we were able to generate the page dynamically and store data provided by the user. Here is how this looks in terms of boxes talking to each other: the user requests an HTML page from the web server, the web server forwards this request to an external program, which fetches some data from somewhere and, right after that, generates the content the user is supposed to see in terms of markup. This markup is rendered by the browser, and at this point we have the largest contentful paint of the page.
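The CGI flow just described can be sketched in a few lines. This is a minimal illustration, not any real server's API: the QUERY_STRING variable and the stdout convention come from the CGI specification, while the greeting page itself is made up.

```typescript
// Minimal sketch of a CGI-style program. Per the CGI convention, the web
// server passes the query string in the QUERY_STRING environment variable
// and forwards the program's stdout (header plus markup) to the browser.
function renderPage(queryString: string): string {
  const params = new URLSearchParams(queryString);
  const name = params.get("name") ?? "world";
  return (
    "Content-Type: text/html\n\n" +
    `<html><body><h1>Hello, ${name}!</h1></body></html>`
  );
}

// An actual CGI program would end with something like:
//   process.stdout.write(renderPage(process.env.QUERY_STRING ?? ""));
```

The key point is that the rendered markup is produced per request, on the server, before the browser sees anything.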
So at this point, the page is already useful. The user can engage with it, read the content, and do their job. The largest contentful paint is a user-centric metric for estimating when the page becomes useful for the user. This is an important metric that we're going to mention further in this talk. Although this was decent and it served the business for a while, we were far from a desktop-like user experience. We didn't have the responsive experience that people were used to from their desktop applications, where just by clicking on a button, something happens on the screen. On each tiny interaction with the CGI architecture, we had to reload the entire page instead of just updating a subsection of it. With the evolution of the web came different web technologies, such as the iframe and the XMLHttpRequest; we call this set of technologies Ajax. Ajax is an umbrella term which covers different APIs that we can use to make asynchronous calls: request a web server, fetch some data, and right after that update just a tiny portion of the screen. Here is how things looked back then. We're still sending a request to the web server, which forwards this request to an external program through the Common Gateway Interface, which fetches data from a database, let's say, and right after that returns rendered content to the browser, and the browser is able to render something, making the page useful at this point with the largest contentful paint. Right after that, the page also references some JavaScript files, and these JavaScript files make the page interactive in terms of a desktop-like user experience: they add different event listeners to certain DOM elements, and just by clicking on any of these elements, or somehow interacting with them, they are able to trigger asynchronous requests, fetch some data from the network, and update only a small portion of the screen instead of reloading everything.
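As a rough sketch of that Ajax pattern: the /api/followers endpoint, the element shape, and the injected fetch function are all hypothetical (in a browser you would pass window.fetch and a real DOM element), but the shape of the interaction is the one described above.

```typescript
// Pure formatting logic, kept separate from the network/DOM plumbing.
function formatCount(count: number): string {
  return `${count} follower${count === 1 ? "" : "s"}`;
}

type FetchLike = (url: string) => Promise<{ json(): Promise<any> }>;

// Fetch fresh data asynchronously and update only one element on the page,
// instead of reloading the whole document as the CGI-era flow required.
async function refreshFollowers(
  el: { textContent: string | null },
  fetchFn: FetchLike // in the browser, pass window.fetch
): Promise<void> {
  const data = await (await fetchFn("/api/followers")).json();
  el.textContent = formatCount(data.count);
}

// In the browser, this would be wired to an event listener, e.g.:
//   button.addEventListener("click", () =>
//     refreshFollowers(document.querySelector("#followers")!, fetch));
```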
So that was the time when I got introduced to web development, thanks to my high school teacher. This is the authentic cover of the first book my parents got me. The title is Bulgarian for "Development of Web Projects with PHP and MySQL". Yes, I started with these two technologies. I was not really sure what I was doing, and this was a pretty dry read, over 500 pages. So I just read about some APIs that I pretty much learned by heart, because I really didn't know what was happening under the hood. To me, MySQL was just something that has some drivers and sends some requests over a network connection through TCP sockets. I had no clue, but I built this website, which is in fact the CMS that is one of my first websites; it had a lot of Ajax, a lot of MySQL, and I ended up presenting it in front of an audience consisting of these two people in 2008. This audience was a bunch of web experts in Bulgaria. They evaluated my website, and thanks to this, I was able to go to university. The website was based on this CMS for observation of meteor showers, because I was pretty excited, pretty passionate about astronomy. Around the same time, 2008-2009, things got pretty intense in terms of development on the front end. We had Backbone and AngularJS, which put some structure into the web applications that we were building. These two technologies helped us build richer user experiences with JavaScript and dramatically changed the way that we render pages: from the server, we went directly to the client, without rendering absolutely anything on the server.
So we were back to the original architecture from 1991, where we're just serving static HTML files, and right after that loading a bunch of JavaScript files that get executed, fetch some data from the network, and finally render something useful onto the screen and make the page interactive. The page really did become interactive with single-page apps, as we all know; we really provide this desktop-like user experience, but at the cost of making the user wait for a long time until we show something useful on the screen. And that is very far from ideal, right? We spend a lot of time just loading scripts, and we don't provide any useful information to the user, so they very often just leave the page, since there is nothing there for them. As framework authors, we're spending a lot of time on optimizing the user experience, and optimizing the bundle size in particular, for your applications and for the framework itself. During the first day of the conference, you heard how much we invested in making sure that the compiler is generating the optimal set of instructions for your templates, so that we can make your bundles smaller. At a certain point we even had a joke (the same joke as the previous version, in case you didn't hear it the first time) that by version 5.4 we're going to make Angular's bundle size negative, so that just by adding the framework to your application, we're going to shrink your bundle size. That's obviously not possible, and we still have a lot of work to do in order to optimize Angular and make your bundles even smaller. But at a certain point, you just need to ship some JavaScript, right? You should definitely invest in making your applications smaller, but at a certain point your bundles just can't get any smaller.
I put this very insightful quote here from myself, and that was in fact one of the motivations for different folks trying to run client-side rendering on the server: to have some code reuse between the client and the server and return some rendered content to the user ahead of time. That's when server-side rendering in Node.js was getting traction; this was probably 2013 or 2014, maybe a little bit before that. Different folks from the JavaScript open source community tried to experiment with server-side rendering of client-side applications just by mocking some browser APIs. And we went back to an architecture similar to what we previously had with CGI. We have the client's browser, which sends a request to a server. This time, instead of forwarding the execution to an external program as with CGI, we're just running a client-side application in a Node.js environment. At this point, this client-side application is not using the DOM APIs the way they're used in the browser; it's using a Node.js implementation of them, and it returns rendered content that the user is already able to use. So they get the largest contentful paint on time, and later on the browser downloads a bunch of JavaScript that makes the page interactive as well. Around this time, in 2014, I gave my very first international talk, at ng-vegas. I was super nervous. In fact, after I gave this talk about my research on AngularJS and how it can use immutable data structures to speed up our applications, I just went to my room and locked myself in there, and I stayed until the end of the day; that's how nervous I was. It was a pretty huge experience, travelling 6,000 miles from Bulgaria to Las Vegas to share my work. At this conference, I met a lot of fantastic folks. One of them was Patrick, and Patrick worked together with Jeff.
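The idea of running a client-side app in Node and returning a string can be sketched like this. This is not the Angular Universal API; the Component shape and renderApp are invented for illustration, including the trick of embedding the state so the client-side bundle can pick up where the server left off.

```typescript
// A hypothetical component: on the server there is no real DOM, so we
// render straight to a string instead of manipulating live elements.
interface Component {
  selector: string;
  render(state: Record<string, unknown>): string;
}

function renderApp(component: Component, state: Record<string, unknown>): string {
  return [
    "<!doctype html><html><body>",
    // Rendered content the user can read immediately (largest contentful paint).
    `<${component.selector}>${component.render(state)}</${component.selector}>`,
    // Embed the state so the client can "hydrate" without refetching.
    `<script>window.__STATE__=${JSON.stringify(state)}</script>`,
    // The client bundle, downloaded later, makes the page interactive.
    `<script src="main.js"></script>`,
    "</body></html>",
  ].join("");
}
```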
Two months later, they presented their initial prototype for Angular's first server-side rendering solution, Angular Universal. This is a really exciting project that is still being developed very actively between the Angular team and the community. Since then, this project has evolved a lot. It is now a robust industrial solution that a lot of really fantastic companies are using in production. I want to invite Amanda to our virtual stage, from Crunchbase, who is going to share more about how Crunchbase is using Angular Universal. Hi, ng-conf, I'm Amanda. I've been an engineer at Crunchbase for four years, and I'm here to share why Angular Universal is just so important to our team. Crunchbase is the leading destination for private company data on the internet. It started almost 15 years ago as a small database for tech reporters to keep track of funding data. Four years ago, we became our own private company with a mission, and that mission is to democratize access to company information and provide solutions that guide our users to their next opportunities. But when we first spun out as a private company, we had a Rails application, and some of the pages were taking nearly 20 seconds to load; adding more data would make those pages even slower. So how were we going to deliver on our huge mission as a web platform? We knew we had to start from scratch with a foundation that was going to allow us to deliver new data to our users quickly, without having them pay for it in their load time. Enter Angular Universal. Server-side rendering our pages means that we can surface data faster to the places where web users are searching for and sharing that data. And really importantly, we're doing all of that in a single code path. It also means that when users go to access that data on our platform, we're getting it to their browser in the quickest and lightest way possible.
When we think about how we make our data discoverable and accessible on the web, we're talking about web crawlers. The Google crawler has a two-wave approach to indexing sites. The first wave of indexing takes into account any HTML that it can access by requesting the source code, as well as adding any links that are in the source to the crawl queue. The second wave of indexing renders and indexes JavaScript-generated content, and this can be hours or weeks behind that initial wave of indexing. So if we were in that second wave of indexing, access to newly added information on our platform would be limited to the users who already regularly visit us. But what if you were searching for a new job and you didn't hear about the latest funding round a company got, because our pages were a week behind getting crawled? Making decisions about opportunities is all about timing, and we want to be in that first wave of indexing, because over 70 percent of the traffic that we get to crunchbase.com is attributed to organic search. In addition to showing up in search results as quickly as possible, rendering our pages on the server also means that we're able to add semantic information to our pages, which enables us to better represent our data. Having the ability to add structured data to our pages helps to classify the full contents of our pages and enables us to show up in special search features like cards. And Open Graph tags allow us to show up with descriptions, titles, and images when our links are pasted in various contexts. This means that, as much as possible, our data is accessible to our users wherever they are, even if that's not on crunchbase.com. What does it mean for the users who do visit crunchbase.com? We serve over two thousand pages per minute, to one million users a week, to users on six continents, to the tune of one hundred and fifty milliseconds per request.
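The kind of server-rendered head markup Amanda describes might look like this sketch. The Company fields are hypothetical, while the JSON-LD "@context"/"@type" keys and the og: properties follow schema.org and the Open Graph protocol.

```typescript
// Hypothetical shape of the data we want crawlers to understand.
interface Company { name: string; description: string; url: string; logo: string; }

function headTags(c: Company): string {
  // Structured data (JSON-LD) helps crawlers classify the page's contents
  // and makes it eligible for special search features like cards.
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "Organization",
    name: c.name, description: c.description, url: c.url, logo: c.logo,
  };
  return [
    `<script type="application/ld+json">${JSON.stringify(jsonLd)}</script>`,
    // Open Graph tags control the title/description/image shown when the
    // link is pasted in other contexts (chat apps, social platforms, etc.).
    `<meta property="og:title" content="${c.name}">`,
    `<meta property="og:description" content="${c.description}">`,
    `<meta property="og:image" content="${c.logo}">`,
  ].join("\n");
}
```

Because the server renders this markup, it is present in the page source that the first wave of indexing sees, with no JavaScript execution required.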
Responding with fully formed HTML from the server means that we're getting information to our users as fast as we possibly can. And as a small team of engineers, we value Angular Universal as a tool that allows us to deliver on our huge mission. We've been with it since the 1.0 beta, and we look forward to continuing to learn and improve along with the community. Thank you very much, Amanda. Talking to a lot of developers at conferences and other external events and forums, we often hear about two main challenges: change refresh time and deployment. We heard this feedback and shipped a couple of improvements in version 9. Once you add the Angular Universal schematics to your project and make your project Angular Universal compatible, you now have an Angular CLI builder, serve-ssr, which gives you the same live-reload experience as ng serve, but this time for an Angular Universal application. This is not a complete replacement for ng serve, because we need to build more things (both the server and the client), and it is slightly slower. But at the same time, it is really great to not have to rebuild your client and your server and then restart your Angular Universal development server every time you make a change. This way, just by running ng run <your-app-name>:serve-ssr, you get live reload, and you can test each piece of code and each change that you make, to ensure that it works both on the client and on the server. To take advantage of this feature, all you need to do is run ng update @nguniversal/express-engine. This is another example of the fantastic collaboration between the open source community and the Angular team, with different people with different backgrounds.
Alan from the CLI team worked in collaboration with Manfred, who is one of our fantastic committed collaborators, to build this experience and make it available for everyone. Another area for exploration was deployment. There are many different ways to deploy an Angular Universal application, and there are fantastic tutorials online, but at the same time there is a lot of boilerplate that we can definitely eliminate with schematics and all the power of the Angular CLI. So in collaboration with the Firebase team at Google, we worked on introducing deployment schematics for Angular Universal applications as part of @angular/fire. Let me show you how this works. Here I am in an Angular CLI Universal project with the Express engine. All I need to do in order to deploy this application to production with Firebase functions is run ng add @angular/fire. Here I'm adding a canary version, because I recorded this video while this functionality was still experimental. Now I wait a couple of seconds to install a couple of dependencies, and right after that, the @angular/fire schematics detect that I am inside of an Angular Universal project and suggest deploying it as a Firebase function. Everything I need to do at this point is agree with the Firebase schematics. Once I do that, Firebase is going to install a couple of other dependencies which help with the deployment, and it prompts me to select one of my Firebase projects. And now we can just run ng deploy. When we run ng deploy, it builds our application, both the client and the server, and right after that it starts the deployment process: it deploys our static assets to Firebase hosting, so we can just use a CDN and serve the static assets from the location closest to the user, and it deploys the server to a Firebase function.
When we click on the URL, we can see that we already have our application deployed to Firebase hosting, and it is all server-side rendered. So we were able to achieve this with just two commands: ng add @angular/fire and, right after that, ng deploy. When talking about deployment of Universal applications, we have a couple of things to consider. For example, we already mentioned that we are running the application on the server, right? But very often a lot of the pages in these applications do not change at all; they're pretty much static pages that don't depend on any dynamic parameters. What we can do in this case is introduce a caching layer. This caching layer can help us speed up the server-side rendering when possible. First, the user's request goes through this caching layer. If we find that the response associated with this request is already there, we can return it to the user directly, and this is going to be pretty fast. On the other side, if it is not there, we perform server-side rendering, cache the response, and right after that return it to the user. Once the user has the response from our server-side rendered page, the browser can just download the associated JavaScript and bootstrap the page, i.e. hydrate it. And this works all right; this is great, in fact. But at the same time, we have this redundant caching layer, and we are rendering a lot of pages at request time that don't really have to be rendered at request time, because they do not change at all. So we shouldn't make the user wait until the page is rendered on the server at all; we should render it ahead of time. That is where we can use the pre-rendering technique, which is very similar to server-side rendering, but instead of rendering the page at request time on the server, we pre-render the pages as part of our build process. This can speed up a lot of things.
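That caching layer can be sketched as a wrapper around whatever render function the server uses. The Renderer signature here is assumed; a production setup would add expiry, invalidation, and a bypass for pages with dynamic parameters.

```typescript
// A render function that produces HTML for a URL (e.g. server-side rendering).
type Renderer = (url: string) => Promise<string>;

// Wrap a renderer with an in-memory cache: cache hits skip rendering entirely.
function withCache(render: Renderer): Renderer {
  const cache = new Map<string, string>();
  return async (url: string) => {
    const hit = cache.get(url);
    if (hit !== undefined) return hit; // fast path: response already rendered
    const html = await render(url);   // slow path: render on the server...
    cache.set(url, html);             // ...then cache it for the next request
    return html;
  };
}
```

The first request for a URL pays the rendering cost; every subsequent request is served from memory, which is the speed-up described above.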
It is going to make our deployment maybe a little bit slower, because we have to upload more assets and we have a longer build time. But at the same time, it dramatically reduces the time required for the user to get the initial page, because we can serve it directly from a CDN instead of rendering it on the server. As part of Angular version 9, as Wagner already mentioned during the first day of the conference, we introduced a pre-rendering builder for the Angular CLI. You can build your projects with this pre-rendering builder just by running ng run <your-project-name>:prerender. Once you trigger this command, we first build your application: we produce all the production assets; we build the Angular Universal server and your Angular client; and right after that, we start the rendering process, which we managed to optimize quite a lot. In this case, you can see we're rendering 400 pages in less than a second, and this is only possible because we are pre-rendering the individual pages on the different cores of your machine, in parallel. Some folks did even more aggressive experiments here: they pre-rendered 10,000 pages in seventy-five seconds, and this also includes the build; the build is probably taking about a minute out of these seventy-five seconds, which is pretty impressive. All these features that I'm talking about make us an even better solution for the Jamstack. If you haven't heard about the Jamstack, this is an initiative for building client-side applications just with JavaScript, APIs, and markup, deploying them to a CDN. This way we can deliver very fast applications, and we don't have to think about deployment at all, because these are just static assets; we can publish them once and forget about them. With Angular Universal, you can already do that pretty easily.
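To make the build-time idea concrete, here is a tiny sketch of how a pre-rendering step might map routes to output files. The dist/<route>/index.html layout is a common convention for static hosting, but the function itself is illustrative, not the CLI's actual code (the real builder also distributes the rendering work across CPU cores).

```typescript
// Map a route to the static file a CDN would serve for it:
//   "/"        -> dist/index.html
//   "/pricing" -> dist/pricing/index.html
function outputPath(distDir: string, route: string): string {
  const clean = route.replace(/^\/+|\/+$/g, ""); // strip leading/trailing slashes
  return clean === ""
    ? `${distDir}/index.html`
    : `${distDir}/${clean}/index.html`;
}
```

Each route's HTML is written once at build time, so at request time the CDN just serves a file; no server-side rendering happens per request.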
You can just run ng add @nguniversal/express-engine, and right after that pre-render your app and deploy it with either ng deploy or with GitHub. There is also a fantastic community project called Scully. This project is a full-fledged static site generator with a lot of plugins that allow you to pull in information from different data sources. It uses a different pre-rendering mechanism compared to Angular Universal: instead of using Universal, it uses a headless browser which visits the pages in your application one by one, which has its own tradeoffs. It is a very versatile solution, so it can pre-render pretty much any page that renders in the browser. On the other side, it is a little bit slower compared to Universal, which is really fast; but Universal, at the same time, may not work for all of your pages, especially if you're touching some browser APIs that are not available in the server-side DOM emulation. So, I don't know if you've felt my excitement here, and how many times I said "collaboration" between different parties, but that's one of the main reasons I'm pursuing a career in open source: all this fantastic collaboration between different people, all this dynamic and open world of exchanging ideas and knowledge. I'm really excited about being part of this open source Angular community, and thank you very much for being part of it as well. Thank you very much for your attention.