Video details

Concurrent Mode in Angular- Non-blocking UIs at scale | Michael Hladky | EnterpriseNG 2021


In application development, there is nothing more important than performance. Bad performance scores, blocked UI, and slow applications are annoying and can be time-consuming to solve.
In this session, I will teach you the theory behind Concurrent Mode in Angular, how it works, how to enable it in your builds, and how you can use it to speed up the performance of your application.
ng-conf is a three-day Angular conference focused on delivering the highest quality training in the Angular JavaScript framework. 1500+ developers from across the globe converge on Salt Lake City, UT every year to attend talks and workshops by the Angular team and community experts.


Hey there, I'm Joe Williams. Before we get into the video, I want to remind you about ng-conf happening on August 31 in person. So head over to the ng-conf site to check out the speakers, check out the talks, and get your ticket. It's time to get back together. Hello and welcome to my talk: Angular Concurrent Mode, highly performant and non-blocking UIs. This talk is all about performance, and it focuses on rendering. And I want to compare server-side and client-side rendering in a pretty funny way today. First, let's think about those two concepts. Here we have the server, and the server sends us a lot of HTML and CSS and a little bit of JavaScript. And while interacting with this page, we send some requests back and forth and receive new data. Nowadays we send a little bit of HTML and CSS and a lot of JavaScript, and all the rendering of the DOM updates happens on the client. So let me stress that this server-side rendering is basically the good old server-side rendering, where we used HTML, some relational database, and PHP, not the fancy one for single-page applications. Okay. So to make it clear, I don't want to talk about the stack or the framework in which we can build this stuff. I really want to focus on rendering, on rendering in the browser, on the client side, to be more specific. First, let's jump into a quick demo and let me show you what a blocked UI means. Here I have a number of items in a list, and when I navigate back and forth, we see that those items are rendered. Let me decrease the number a little bit so that you don't have to wait so long. Okay? And if I navigate now, you see that the button is frozen. Everything got stuck, just to render those 30 default items here. I do it again, and please look at this animation here: if I click, it freezes, and then I can navigate. So this is a blocking UI. This is one demonstration of how a blocking UI feels. And this is, of course, not a really good feeling.
To be more specific, this feeling is really bad. Let me fix that for you. But before I fix it, let me quickly introduce myself. My name is Michael Hladky. Hladky is very hard to pronounce, so let's stick with Michael. I work a lot on performance with my company, and I focus not only on performance but also on reactivity in Angular. So feel free to contact me whenever you want. But now back to the problem. Let's focus more on rendering and responsiveness in the browser, and let's take a look at how the good old days worked. Back then, when we had a list and we received that list from the server, server-side rendered, we could basically interact with the list by clicking a button next to an item to say "move this item one field up", and then we sent it to the server. The server rendered the list again and, boom, we had sorted our list. Of course, these were the good old days, and we never ever want to sort our list in that way again, right? It just took forever with all the round trips. So how does it look today? Today we have instant response. We render those changes on the client and just send the diff to the server. So the initial list, of course, is still sent from the server, and then we can drag an item to a specific place, drop it there, and just send the move command to the server. And all of that is really snappy and instant. So users always want to have this instant response, and they want to have it on any device. And this is a huge problem, as you might already see. So what can we do about that problem? Let's dive into some theory, and then I will show you some more demos where we can fix, or try to fix, that. One of the theory blocks that we need to know about is performance optimization through user-space schedulers, and I will introduce them. But before I do, let me give you a small conceptual idea of scheduling, and I will do that by comparing scheduling in the browser with scheduling for us, with our calendars.
So how do humans schedule their work? Well, we have calendars, and on these calendars we have events, these blue circles, and all these events can also be put in relation to each other. So we can have one meeting and schedule another phone call; we can interact with the future in some way. I have a phone call now and I schedule a meeting for this Friday, and maybe if I realize in the next phone call that I need to skip that meeting, I can do another call and say, well, let's cancel the meeting on Friday. So I can schedule work in the future, and I can also cancel already scheduled work. And this all happens in my timeline, in the calendar. How does scheduling work for browsers? What we see here is a flame chart. A flame chart basically represents the work that is done in the browser, with a lot of colorful boxes and arrows. On the very left side we see a button click that executes some work. And in this picture I splice out a little bit of work from this button click event into the next task. So the button click is a trigger, an event that triggers the scheduling process. And the scheduling process takes a package of work and moves it to another place in this flame chart; to be more specific, into the next task. Through requestAnimationFrame, this native browser API kicks in and later on executes my scheduled task. It is very important to understand this; it is our first fundamental piece of knowledge: how can I splice out a chunk of work and move it to some other place? We can do that with a lot of different APIs: setTimeout, queueMicrotask, postMessage, requestAnimationFrame, requestIdleCallback, and the very fancy, cutting-edge postTask scheduler. And if we click a button and schedule all that stuff at once, we can see where these scheduled packages of work end up. Here we see the synchronous work first, and then the microtask. Then we see where the setTimeout would end up.
We see the idle callback and the animation frame right before the paint; this green dashed line here is the paint event. And then we have the postTask scheduler's priorities, three different priorities. So you see, they all end up in different places in my flame chart. There is another piece of theory that we need to understand next to scheduling. Scheduling is about chunking work and moving it to other places, and we move those things on the main thread. What is the main thread for humans? If we think about the smallest pieces of time, we could take our calendar and chunk it into those smallest pieces. Let's say the smallest piece is ten minutes, because a smaller unit would not make sense for us. This is basically one task, one unit of time that we will definitely not go below. And in this main thread we exist, we schedule all our work. How does this main thread look for the browser? You already saw flame charts. At the very top of these flame charts there are the units, the so-called tasks, each a gray box. A task is one piece of work that the browser has to perform in a row, without any time to process anything else, for example user interactions. And user interactions are very important; to be more specific, they are essential for Core Web Vitals. If the user interface is blocked, we call that a frame drop. Frame drops are marked with this small, tiny red sign here that we see on the slide, and they are very unfortunate. They delay our Largest Contentful Paint, they delay our user interaction, and they count as Total Blocking Time. So they really block our user interface, and this is really bad. To give you a special and really nice example, I will open up the angular.io page and do a recording. Basically, I did a recording already, navigating through this common routing task here and the overview.
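The ordering described above can be observed directly. This is a minimal sketch using only the two APIs that exist in both the browser and Node (setTimeout and queueMicrotask); requestAnimationFrame, requestIdleCallback, and scheduler.postTask are browser-only and would slot in between the microtask and later tasks.

```typescript
// Where do different scheduling APIs place their work on the main thread?
const order: string[] = [];

const done: Promise<string[]> = new Promise((resolve) => {
  // Macrotask: spliced into the NEXT task, after a rendering opportunity.
  setTimeout(() => order.push("setTimeout"), 0);
  // Microtask: runs after the current script, but still in the SAME task.
  queueMicrotask(() => order.push("microtask"));
  // Synchronous work: runs immediately, blocking everything else.
  order.push("sync");
  // Give both scheduled callbacks time to fire, then report.
  setTimeout(() => resolve(order), 10);
});

done.then((o) => console.log(o.join(" -> ")));
// -> sync -> microtask -> setTimeout
```

This is exactly the "splice out a chunk of work" idea: the setTimeout callback leaves the current task entirely, while the microtask does not.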
And if I click that and navigate, we see that it sometimes takes time, and it really sucks. So if we zoom in, we have a really interesting dropped frame here: we have this long task of 325 milliseconds, and in that task a lot of work is placed. And if you scroll in, these are all scheduled animation frame requests, which later on execute components. So those are Angular elements. And it would be really cool to fix that frame drop here, because then we could increase the web vitals of our angular.io page, right? And this is what we want to do. So this is a good example of a frame drop, especially this one, because it requires a little bit more knowledge to fix, in particular those multiple animation frames. We also have another one here which would be a little bit easier to fix. We will learn how to fix both of them. Let's jump back into our presentation and see if we can understand this problem a little bit more. So this red block here, this frozen user interface, is called a frame drop. And I will introduce you to frame drops for humans; to be more specific, how I myself introduce a frame drop into my life, into my main thread, into my calendar. Well, normally I'm pretty good with meetings. Sometimes I talk too much, and then I run over time and push forward another meeting, or any other chunk of work that I need to process. And until I'm done with my talking, I cannot do anything else. So I block this time, and everybody and everything else has to wait until it gets processed by me. Another thing that I'm really good at: whenever I already have a meeting, I schedule another one on top. I schedule something for next Sunday and then realize, oh gosh, I already have two other meetings with good friends on the same day, at the same hour. Maybe I can collapse them together into one slot, but it's not always that easy, as you can imagine. And the best thing that I do is calculate my work.
I estimate how long it will take to implement this and that feature, and I say ten minutes. And if you look at this huge blue box here, there is not enough space on the screen: I work for weeks when I said it would take an hour. And this is how I crash my main thread, how I introduce frame drops into my life. How does a frame drop look in the browser? Well, I already showed you this red bar here. Let's zoom in a little bit more and have a close look at a so-called long task. A long task is always a good hint that we have a blocking UI, which always results in bad web vitals. So if we look at that task here, we see that the left part of the gray box is gray, and the right part has additional red color on it, a small red triangle and these red lines. And it's not all of it; it is only the part that is longer than 50 milliseconds. So everything under 50 milliseconds is okay-ish for the browser. We could read up on more details by looking at the RAIL model, but this is a little bit too much for this talk, so we will just stick with the term long task; you can look that up on the web. So everything over 50 milliseconds is a little bit too much for the browser and causes bad user experience. If we have a close look, we see the majority of this click event here is caused by scripting work, and of course it would be really cool to get rid of this scripting work. In this talk we will also only focus on scripting work, not on style recalculation; that would be the topic for another talk. So let me show you how we can reduce dropped frames. To do this, I will open up another browser window, an incognito window, to do some clean measurements. And I have this movies app, and this movies app basically displays different movies. And when I hit the refresh button, I can measure it, and this is what I already did: if I refresh, my flame chart looks like this.
At the very beginning I have this 600 millisecond frame drop, and if I zoom in a little bit, for those who might not read it that quickly, just trust me: the left block here is some webpack magic, script compilation code and a lot of other stuff. And the right part here, within the microtask, is the bootstrapping of my Angular application. And if I hover here, you see bootstrapModuleFactory. So this right part here, within this huge long task, would be the bootstrapping of Angular. So, knowing already that we can use scheduling techniques here, let's jump into the main.ts file where we bootstrap our Angular app and let's do something funny. Let's introduce a setTimeout, and let's wrap only the bootstrap of our Angular application, the platformBrowserDynamic call, in that setTimeout, and then recompile the app. The delay is set to zero, so it will execute immediately, right after my module reloads. I already did the measurement here, because measurements normally take some time. And if I zoom in on the second measurement, we see that the bootstrapping phase is now separated: on one side the stuff from webpack, and on the other side, in its own task, the bootstrapModuleFactory call. So what does this mean? We won an additional 50 milliseconds which, as we already know, are now considered non-blocking. So minus 50 milliseconds of Total Blocking Time is pretty good for such a small trick. This is the first thing we can now do with our knowledge of scheduling: we can wrap the bootstrapping phase. But as we might remember from before, this alone would not be enough to fix the earlier problem, because there we had a lot of different isolated scheduled packages. To fix that problem, we would have to have a notion of time, to understand up to which point we should work and what should go into the next task. To solve that, let's learn about a really fancy and new thing.
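The main.ts change described above can be sketched as follows. This is a minimal reconstruction of the trick from the talk, assuming the Angular CLI default file layout (the `./app/app.module` path and `AppModule` name are the usual defaults, not taken from the demo repo):

```typescript
// main.ts — wrap only the Angular bootstrap call in a setTimeout so that
// webpack's module evaluation and Angular's bootstrapping land in two
// separate tasks instead of one long blocking task.
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

// Delay 0: bootstrap still runs as soon as possible, but in the NEXT
// task, giving the browser a chance to handle events in between.
setTimeout(() => {
  platformBrowserDynamic()
    .bootstrapModule(AppModule)
    .catch((err) => console.error(err));
}, 0);
```

The behavior is unchanged; only the placement of the work on the main thread moves, which is what shaves the ~50 ms off Total Blocking Time.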
On the next slide, I will present to you Angular concurrent mode: smart main-thread scheduling that can prioritize and cancel work. We did a case study: we analyzed NgFor. This is a really good candidate, because NgFor renders a lot of stuff synchronously. And if we look at a small table here and this click event, we see a frame drop, because most of the work of rendering this list is done synchronously, one item after another. And as you saw before in the navigation example, if I put too much into one task, it really feels ugly. This is another real-life example from a really huge app, and what I marked here is a potential optimization case: a huge list that we could, for example, analyze and see how to optimize. We optimized it by really doing that scientifically. We first created a case study: initializing a list, updating the list four times with a button click, and then starting an interval that updated the list as fast as possible. Down here, in the black box, you see the image of the list that was rendered, and the red and green things, the check mark and the cross, represent the values. So this was a nearly empty list, and as you can see, we have frame drops all over the place, for every button click and for the initialization and the interval too. This here is a close-up. So what we did is introduce scheduling techniques, not only to chunk up the work, but we also introduced a notion of time, and I will come to that a little bit later. But here is a comparison of the same list, and we see that the huge task on the left side got separated into multiple smaller tasks on the right side. How did we do that? We implemented this technology in a directive that we called *rxFor. So *rxFor is a drop-in replacement for NgFor, for example. And this helped us to render this case study list that we had, with five times ten items.
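The drop-in replacement can be sketched like this. A minimal example assuming @rx-angular/template is installed; the component, field, and trackBy names are illustrative, not taken from the case study:

```typescript
import { Component } from '@angular/core';
import { Observable } from 'rxjs';

interface Item { id: number; label: string; }

@Component({
  selector: 'app-item-list',
  template: `
    <!-- before: synchronous NgFor, all items rendered in one task -->
    <!-- <li *ngFor="let item of items$ | async; trackBy: trackById"> -->

    <!-- after: *rxFor from @rx-angular/template takes the observable
         directly (no async pipe) and renders the items chunked,
         non-blocking, within the frame budget -->
    <li *rxFor="let item of items$; trackBy: trackById">
      {{ item.label }}
    </li>
  `,
})
export class ItemListComponent {
  items$!: Observable<Item[]>;
  trackById = (_: number, item: Item) => item.id;
}
```

Note that the template change is the whole refactoring: the component class and the data flow stay as they are.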
If we have a closer look at the exact same use case, we see now that the initialization phase still has some chunks of work here, but not a single frame drop. We see that every single click event is clean, without any frame drop. And the interval is most exciting for us: in between the updates we even have a fully idle browser. We can interact completely fluently, and not a single frame drop occurred. This is pretty amazing and an incredibly good outcome. If you're interested in all the details of this case study, you can read it up if you have a look at this issue here. I will quickly open it so that we can see what else it took us to introduce that, and you can scroll through the case study. Here is more information on what the bad parts of the NgFor implementation are, some quick demos, then a lot of concepts, a lot of features that we wanted and already implemented, some error handling stuff, and then the comparison, as I showed you, and some more details on the rendering process. This is all done, and by the time this recording is online, you should already be able to download it; we will release it tonight or tomorrow. With that said, let's have another quick look at the really surprising comparison of this *rxFor implementation. So we see the native NgFor, and down here the *rxFor. And if you have a look at the red boxes, they are all gone. This means an incredibly good Time to Interactive, an incredibly good experience for the user, and sometimes also a good improvement of the Largest Contentful Paint, all important Core Web Vitals. So I showed you the case studies; let me go back to this navigation example, if you remember this blocking navigation here. And now have a look when I switch from native, which blocks for a really long time, to concurrent. So: empty, concurrent; empty, native. I do it again: empty, freeze, wait, done. Immediately done. And I can even go back immediately and don't have to wait, as I would have to here.
So this is a quick demo of how it is solved. I guess I should give a hint, some more information, on how this is even possible. I mentioned it before: we gave our small library a notion of the frame budget. So at the top, this is native Angular rendering. All the blue packages are packages of work, list items, for example, or other parts that we want to render. And they are all rendered at once, because Angular has no notion of the frame budget, of these 50 milliseconds which are considered a good frame. Concurrent Angular has that notion of the frame budget, and as we can see in the image below, every single task is non-blocking. And if there is too much work for one task, we just schedule the rest into the next task. With that said, let's jump into our last demo. Let me refactor the movies app that we already saw here, and let me introduce some quick changes, some quick updates, to demonstrate how powerfully and how quickly we can refactor this application. I jump into my project, where I already introduced this setTimeout. So let me go to the movie list. This very component here is the part of the code that renders all the tiles with the movies. Here we have a list, and this list is rendered over an NgFor. Then we have the observable that gives us the request from the server, and then we use the async pipe to render that. Of course, we already used trackBy to improve the performance. And now we will refactor that to concurrent Angular. So I go to the NgFor and replace it with an *rxFor, and then I jump here, delete the async pipe, and hit the save button. By doing that, I let the app recompile and jump into the browser. And I have a close look at the old flame chart here. And if I open it up, give me one second; I see that at the very end we have this blocking task. And down here we render the list of movies, especially this area here, which is the majority of this whole red blocking task.
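The frame-budget idea can be sketched framework-free: process work items in chunks, and whenever the ~50 ms budget of the current task is spent, yield and continue in the next task. This is a minimal sketch of the principle only; setTimeout stands in for the smarter, prioritized scheduling RxAngular actually uses.

```typescript
// Cooperative chunking under a frame budget.
const FRAME_BUDGET_MS = 50;

function scheduleChunked<T>(
  items: T[],
  work: (item: T) => void,
): Promise<void> {
  return new Promise((resolve) => {
    let index = 0;
    function runChunk(): void {
      const start = Date.now();
      // Do as much work as fits into this task's budget...
      while (index < items.length && Date.now() - start < FRAME_BUDGET_MS) {
        work(items[index++]);
      }
      // ...then either finish, or yield and continue in the next task,
      // leaving the main thread free for user interactions in between.
      if (index < items.length) {
        setTimeout(runChunk, 0);
      } else {
        resolve();
      }
    }
    runChunk();
  });
}

// Usage: process 10,000 "items" without ever blocking longer than ~50 ms.
const rendered: number[] = [];
const doneChunked = scheduleChunked(
  Array.from({ length: 10_000 }, (_, i) => i),
  (i) => rendered.push(i),
);
doneChunked.then(() => console.log(`rendered ${rendered.length} items`));
```

In the browser, the yield points are exactly where paint and input handling can happen, which is why the red "long task" markers disappear.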
So what I already did is measure this improvement. I can just navigate again so that you can see it is still working, and then I jump to the next recording. In this recording I already introduced the *rxFor, and as we can see, let me zoom in: now our XHR request, which was the blocking task, is only 60 milliseconds. Of course we can do better, but this was just a tiny and quick refactoring. And everything else after that is non-blocking; there is not a single task marked red here. So this was a first quick improvement with the *rxFor. The *rxFor is basically a drop-in replacement for the NgFor. What else can we do to introduce all those rendering concepts? Here we have a nice little pattern, the so-called NgIf hack, and we often use it to display or hide content. The same pattern is also implemented here in the app shell component. In this case it is not used to show or hide pieces of our component; in this case it is used as what we call the NgIf "as" hack. There is an object bound to this observable which is always truthy, and we just use the NgIf here to be able to bind this observable, over the async pipe, to a variable that we can then reuse later in our template. So we also introduced another directive, which is called the *rxLet directive. This *rxLet directive basically helps us to pass the observable directly and use the "let" syntax to bind the values coming off the stream to the template variable here. And this is another drop-in replacement, for the NgIf hack, that in addition also improves the performance. Let me go... oh no, what happened? Did we introduce a small problem here? Let me see what I did. I most probably removed some pieces of the code here. Maybe it is working now. So now we have that; I try to compile it again. Demo gods, be with me... and it is working. So let me open up the old measurement again and navigate to the bootstrapping of these components.
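The second refactoring can be sketched like this. A minimal example assuming @rx-angular/template; the component name and the `vm$` view-model field are illustrative, not taken from the demo:

```typescript
import { Component } from '@angular/core';
import { Observable, of } from 'rxjs';

@Component({
  selector: 'app-shell',
  template: `
    <!-- before: *ngIf abused as a "let" binding; needs the async pipe
         and hides the content entirely if the emitted value is falsy -->
    <!-- <ng-container *ngIf="vm$ | async as vm"> ... </ng-container> -->

    <!-- after: *rxLet binds the stream's values directly to a template
         variable and schedules its re-renders non-blocking -->
    <ng-container *rxLet="vm$; let vm">
      {{ vm.title }}
    </ng-container>
  `,
})
export class AppShellComponent {
  vm$: Observable<{ title: string }> = of({ title: 'Movies' });
}
```

As with *rxFor, the async pipe disappears, and the rendering work for this subtree moves out of the one big bootstrapping task.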
This would be this very task here where, if I zoom in a little bit, we initialize the app component and then, later on, the app shell. So with my improvement, the right part should be chunked out into a separate task, and we again win 50 milliseconds of this blocking task. I already introduced the change; let me interact with the page and see if it is still working. Of course it is still working. I open up the last measurement, zoom out, and navigate to that part here. And then we can see that the left part is still in the microtask, and the right part, where we have the app shell component, is now spliced out. Of course, there's still a lot of work in this task, but it is already a little bit more separated. We have 50 milliseconds less blocking time, and it was again only a tiny, tiny refactoring. This was just a very quick introduction to scheduling techniques, how they are implemented in RxAngular and, of course, how you can use them immediately, today. If you are fascinated by the developer experience, where you can just bind observables with *rxLet or *rxFor instead of using the NgIf hack, you can do that even without introducing the performance optimization; you can also disable the performance optimizations. Otherwise, please have a look at the package @rx-angular/cdk. It contains the implementation details of the concurrent mode that I discussed, and a lot of those tools are used there. The template library that will help you is called @rx-angular/template. With this library you can import *rxLet and *rxFor; with the other library you can use the same scheduling technologies in TypeScript, for example in the component class. If you liked the content, the cutting-edge and super easy performance improvements, please cheer on our contributors and, of course, myself: go on GitHub and hit the star button. We really did a lot to ship those optimizations, and I guess it is really exciting news for the Angular community.
That was it: Concurrent Mode in Angular, non-blocking, highly performant UIs. My name is Michael Hladky. Thank you a lot for your time. If you have any questions, feel free to ping me on Twitter or by email, feel free to check out the repository, and have a nice rest of the conference. See you then, bye. Hi, I'm Joe Williams. Thanks for checking out this video from EnterpriseNG 2021. Online conferences were great, but it's time to get back in person, see your old friends, make some new ones, and take your career to the next level. Head over to ng-conf to get your ticket. See you there.