
Measuring and Improving React Native Performance | Alexandre Moureaux | App.js Conf 2022

React Native

The web has several tools to measure performance, but what about us, React Native devs? Have you ever added some optimization in your code (like a memo here and there) and wondered if it had a real impact on performance? Presenting a Flipper plugin to measure React Native performance, with a concrete case of how we used it to bring our app performance to the top quality. Bonus: a deep dive into React DevTools flame charts!


Hi, everyone. Can I just say it's an absolute pleasure to be here talking; the venue is really nice, the city looks beautiful, this conference is really great. And thank you so much to the organizers, because this is great. All right, I know what you're thinking: I'm the only thing standing between you and lunch, so I'll get a move on. I'm Alex, I'm a tech lead at BAM. We develop mobile apps in Flutter, Kotlin, and of course, React Native. And if you don't recognize me from the website profile picture, well, I have a bit less hair and a bit more glasses, but that's me. Anyway, I'm specifically interested in the subject of mobile app performance. And why is that? Well, short story: we had a client at some point who asked us to build a mobile app in React Native and a web app in React. We thought, cool, easy peasy, that's what we do. This client had very good technical standards, and he had performance standards that he wanted us to enforce on the website. He told us to respect a certain Lighthouse score, to respect certain metrics like time to interactive, et cetera. Very cool. For the mobile app, he told us: well, the app should not lag. That's it. That's the only thing he told us. And, I mean, that's not very scientific, right? On one side you have a Lighthouse score; on the other, "the app should not lag", that's it. There's such a disparity between web performance monitoring tools and mobile app performance monitoring tools that it got me thinking. First, what actually makes an app performant? How do I know if my app is performant, with, like, a scientific measure? Well, if you Google it, you will find a video by Google (which will be available in the slides) saying that your app should run at 60 frames per second, 60 FPS. It should be able to draw 60 images per second, for example when you scroll down, to give an impression of smoothness. Basically it's like a movie: a succession of still images.
Your app is just drawing those images fast enough to give an impression of movement. And the question is: what about React Native apps? Because, I mean, we're React Native developers here. Well, this is true for React Native apps too, because essentially they're native apps, so they have to run at 60 FPS as well on the native UI thread. But is it enough? Well, of course, there is some added complexity, and one piece of it is the JS thread. Your app could be running at 60 FPS natively, but if your app is a React Native app, you're going to have a lot of logic running on the JS side of things. So here I have an app with a "Click me" button. I click it, it updates the state: one, two, three. But I also have a "Kill JS" button, the blue one. And this one runs an expensive calculation on the JS thread; if you're interested, it calculates the Fibonacci sequence, whatever. But basically, here you can see that if I click "Kill JS", I can click "Click me" as much as I want, nothing happens. And it's not until "Kill JS" has finished running its expensive calculation that, yeah, I can see that the button has actually been clicked. So your app can run at 60 FPS natively, but the JS thread could still make it unresponsive, and the user could be clicking stuff while nothing happens. So it's important to check it out. And this is why React Native offers you the performance monitor, displaying the UI (so the native thread) FPS and also the JS FPS. And this is why we created a Flipper plugin to be able to display it in a graph over time. As an added bonus, it gives you a performance score, kind of naively calculated, but it's a nice thing to use for performance benchmarking. Well, speaking of performance benchmarking, it's quite hard to do, actually. So let me give you some small tips on performance measures. The first one is easy, but I think it's one of the most important ones: you should test on a lower-end device.
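The "Kill JS" demo above boils down to running a big synchronous computation on the JS thread. A minimal sketch of the idea; the button wiring is hypothetical, only fib itself is shown runnable:

```javascript
// Naive recursive Fibonacci: exponential time, and it runs
// synchronously on the JS thread, so while it computes, no state
// updates or touch handlers can be processed.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// In the demo app this would be wired to a button, something like:
//   <Button title="Kill JS" onPress={() => fib(40)} />
// Taps on "Click me" are queued until fib() returns.
console.log(fib(10)); // 55
```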
If you only test your app on an iPhone, chances are that you will miss a lot of issues that your users can actually have. It's interesting to note that an iPhone 13 can run code ten times as fast as a Samsung Galaxy A, which is one of the most sold Android devices in the world. So it's important to take those lower-end Android devices into account. Second tip: make your measures deterministic. Well, I should say as deterministic as possible, because usually performance measures are not going to be deterministic. Since you're never really going to get the same results, you can average your measures over several iterations. As much as possible, keep the same conditions for every measure: what data was loaded, what the network latency is, et cetera, stuff like this. And if you can automate the behavior you want to test, it's the best thing to do to keep the same conditions, right? And you might not need an end-to-end testing tool to do that. On Android, for example, ADB is pretty awesome: you can do adb shell input swipe and boom, the list will just scroll, with a simple command line, which is nice. Third tip: disable JS dev mode, or you might encounter issues. This is available on Android only, actually, but if you shake the device you can disable the JS dev mode, because otherwise you might encounter issues that will not actually happen in production. So, just to be sure, disable it. And the fourth tip is: well, if you find some issues, analyze them with the best analysis tools around. Basically you can have issues in two places. Either on the JS thread, and in this case you can use React DevTools, which is pretty awesome, and I will show later on how we used it; you can also run a JS flame graph with the Hermes profiler, which can be useful as well. Or on the UI thread. Let's face it, we're JS developers mostly, so this one can be a bit trickier, but in Android Studio you have the system trace profiler.
It's also available in Perfetto UI. And on Xcode you have Xcode Instruments, which can be useful. All right, but let's dive into a concrete example now. TF1 is a French TV channel, and we were rebuilding their news app in React Native. So we thought: well, let's ensure that the home feed scrolling is actually very fluid. We wanted to ensure that, on a low-end device, the JS FPS would always stay over zero (because if it hits zero, the user could scroll down and click and the app would not be responsive) and that the UI FPS would stay at 60, so the list would be fluid and, even if the user tries to click, the app would still be interactive. So, following the four essential tips, this is what we did. We used the Samsung J3 2017. This is a pretty bad device, but it's our favorite one, and our research shows that about 15 to 20% of users had performance similar to this one. We set up measuring for 10 seconds in our Flipper plugin. We reload with JS dev mode disabled, as I mentioned. We wait for the top of the feed to be loaded (because it's paginated), just to be sure that we're in the same conditions every time. We hit "start measuring" on the Flipper plugin, we run adb shell input swipe, blah, blah, blah, and we wait until the end of the ten seconds for the measure to finish. And then, when we have some measures, we reproduce this five times to be able to average it. Okay, and this is what it gave: the plugin was saying our average score was 40 out of 100. Basically, you see those graphs: they should never reach zero, but the JS one was reaching zero for 4 seconds, which means that the app would be unresponsive for 4 seconds. So, not very good. The UI thread was okayish. Needless to say, this was just unacceptable, and we ran to React DevTools to fix it. React DevTools is available in Flipper out of the box as well.
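For reference, the ADB scroll gesture used in the protocol above looks something like this. It is a device-dependent CLI fragment; the coordinates and duration are illustrative assumptions that you would tune for your device's screen:

```shell
# Swipe from (x1, y1)=(500, 1500) to (x2, y2)=(500, 300) over 500 ms:
# a reproducible upward swipe that scrolls the feed down by the same
# amount on every measurement run.
adb shell input swipe 500 1500 500 300 500
```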
And the first thing you want to do to analyze issues is to check this box: "Record why each component rendered while profiling". I mean, you can guess why, but we'll see later on why it's useful. Then basically click "start recording", redo what you were doing (in our case, that would be running adb shell input swipe to trigger a scroll), stop the recording, and it should give you something like this. Okay, let's navigate a bit through this. The first thing you want to check is this: the list of commits that happened during this phase. Commit is the phase where React applies any changes, so this is the place where the expensive stuff actually happens. In our case, if we zoom in, we had 21 commits happening during our scroll. And usually, when finding performance issues, you want to find the most expensive ones. In our case, that would be the 11th bar, which is the highest one, and it's also in orange. So if we click on it, this is what we see. All right, if you've never seen this before, this is very colorful. Essentially, this is the hierarchy, the React tree of all the components in your app, and in gray are the components that are not rendering, so no performance issues there. The rest actually is rendering. If we zoom in a little bit, we can see that at the top here, for example, we have FlatList: our home feed was a FlatList, so there it is. Its direct child is a VirtualizedList. This is internal to the code of React Native: inside the FlatList code, you have VirtualizedList as a child. Then it has a ContextConsumer as a child, then VirtualizedList context providers, et cetera, all the way down to a ContextProvider, which has several children called CellRenderer, which is also part of the internal implementation of FlatList, wrapping what you pass in your renderItem. All right, you also have two metrics for each component. The left one is called the self time.
Self time is the amount of time that the component took to render, excluding all of its children. The right one is the total time: the total amount of time that the component took to render, including all of its children. You will notice that here, this is 2.9 seconds. Needless to say, this is very high. I mean, we had 3 or 4 seconds of JS being dead; I think we have a nice culprit here. All right, so we have a VirtualizedList rendering here, and we have a lot of, well, all of our list items rendering here. So what you want to check now is: why are they rendering? Because maybe those are new items appearing when we scroll down, appearing at the end of the list, so they would be mounted for the first time. Thanks to the option that we activated before, we can check if it's indeed an initial render by hovering over the component, and this is what you would see: "This is the first time the component rendered." Or maybe it's a rerender. In our case, this is what we saw: the VirtualizedList was rerendering because of a state change, "last". We don't know what that is; it's inside the FlatList implementation. All right, so this means that we scroll down, our list is rerendering, and some list items that were already rendered are also rerendering. So we scroll down and basically all our list children rerender, and this is super expensive because it takes 3 seconds, and, I mean, they don't need to be rerendered, right? We rendered them before; they should not. So what's happening here? All right, let me take you through four small iterations that we took to fix those performance issues. First one: let me talk about virtualization. If you have a list of, like, 10,000 elements, of course you don't want to display them in a ScrollView, because 10,000 elements to render at the same time is going to kill your phone. So you virtualize.
Here, for example, you have the green stuff, which is virtualized: you render only a bit ahead of the viewport and only a bit before the viewport (the user's viewport, I mean). I'm saying "a bit", but it's worth noting that in React Native, by default, you render ten screens' worth of items above the user's viewport and ten screens' worth of items below it. So even with virtualization, you actually render 21 screens' worth of items by default in a FlatList. It is virtualized, but by default you render a lot of items. Good to know. And how does it work in the FlatList, in JavaScript, in React Native's internal implementation? Well, it keeps an internal state with "first" and "last": those are the indexes of the first element to be rendered and the last element to be rendered. Does that make sense? You scroll down, and the index of the last element to be rendered increases, because you scrolled down. So that's what is actually triggering the state change and rerendering the VirtualizedList. That makes a lot of sense, actually. Which means that, by design, when you scroll down, renderItem is actually called: you have a FlatList, you scroll down, renderItem is called, and a new element appears all the time. Which means that you should memoize all of your list items and ensure that they don't get rerendered. So that's what we did, and if we run the same experiment with the memo in React DevTools, we get something a bit like this. The biggest difference is that here we have a lot more gray stuff, elements not rerendering, at the top. You still have some green stuff, because it's the internal implementation of the VirtualizedList with the state change on "last" and "first", and we said that was okay. So all good. At the bottom, you still have green stuff, though, which seems annoying; we'll get to that soon enough. First, I mean, it seems we made a small improvement.
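The first/last window described above can be sketched in plain JavaScript. This is an illustration of the idea, not VirtualizedList's actual code; the function name and the simplified math are assumptions:

```javascript
// Compute which item indexes should be mounted, given the scroll
// position. windowSize = 21 mimics FlatList's default of roughly
// 10 screens above + the visible screen + 10 screens below.
function getRenderWindow({ scrollOffset, itemHeight, screenHeight, itemCount, windowSize = 21 }) {
  const itemsPerScreen = Math.ceil(screenHeight / itemHeight);
  const firstVisible = Math.floor(scrollOffset / itemHeight);
  const overscan = Math.floor(windowSize / 2) * itemsPerScreen;
  const first = Math.max(0, firstVisible - overscan);
  const last = Math.min(itemCount - 1, firstVisible + itemsPerScreen - 1 + overscan);
  return { first, last };
}

// Scrolling down increases `last`: that is the state change that
// rerenders the list and calls renderItem for newly mounted cells.
console.log(getRenderWindow({ scrollOffset: 0, itemHeight: 100, screenHeight: 800, itemCount: 10000 }));
```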
So I want to check out what our score is now. And with the plugin, we saw that we went from 40 to 52. Kind of nice. JS was dead for 3 seconds now: still pretty bad, but better. And the UI thread was still kind of okay. All right, so just with a simple memo somewhere, we made some nice improvement. But let's go a bit deeper. This was the last flame graph, and let's take a look at why we have that green stuff at the bottom. If we zoom in on this one, we see that we have a cell, a list item, or CellRenderer here. And the first element rendering here is a VirtualizedList: so we have a VirtualizedList inside the list item, inside our vertical home feed. And this is because, actually, we have horizontal carousels inside our home feed, implemented with react-native-snap-carousel, which uses a FlatList. So those are the VirtualizedLists. If we take a look here, those: one, two, three... seven. We had seven carousels at the top of our home feed, and those are the ones that are rerendering for some reason. If we take a look at why, we see that it's rerendering because of a context change. And actually, this is because when you nest FlatLists, the parent FlatList will pass its virtualization window state (so the "first" and "last" from before) as context to the nested FlatList. Which means, basically, that if you scroll the parent VirtualizedList, the renderItem on the nested list will also be called. Which means, again, that you should memoize everything: you should memoize your list items, and in our case, that would be our carousel slides. So we memoized our slides, and we went from this to something like this. I think it's very hard to see the difference. Mainly, here you have a bit more gray at the bottom; those are the slides that we memoized. Actually, something new also appeared at the right, but if we check it out, we see that this is the first time those components rendered.
So this means that, when we scrolled down the list, some new items appeared and were rendered for the first time. Okay, let's not deal with that; let's focus on excessive rerendering for now. All right, so first let's check our score. We went from 52 to about 54. I mean, basically the same: the JS thread was the same, the UI thread was kind of the same. So not such a good improvement, actually. All right, so we need to go deeper. Iteration three: what's going on? Let's zoom in on one of the carousels here. Okay, we see that this is the list, the horizontal carousel list, and we see that it has one, two, three, four... ten cells. So basically, ten slides. Wait, ten? No, that doesn't make sense, because in our app we only have carousels with four slides. But this is when we realized it's because we enabled the loop property on react-native-snap-carousel. And the loop property on react-native-snap-carousel works by adding three slides at the beginning of the carousel and three slides at the end of the carousel. The brightest among you will have noticed that this checks out. So yeah, basically it renders ten slides instead of four. And I mean, it's important to know what the libraries you're using are actually doing, because we thought: well, it's not so great for performance that it's rendering ten slides instead of four. So we disabled the prop, and then we went from something like this in React DevTools to something like this. If we zoom in on a carousel, we now have four cells, or four slides, being rendered. So we reduced the time to render each of the carousels a lot. All right, let's check the score now: we went to 70. JS is only dead for 1 second, and the UI thread is still kind of close to 60. But still, JS being dead for 1 second means that if the user scrolls and tries to click, it will take 1 second to process the click. So still not great.
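Both memoization fixes so far rely on React.memo skipping a rerender when props are shallow-equal. A runnable sketch of that comparison (simplified, not React's actual source):

```javascript
// Shallow prop comparison, roughly what React.memo does by default:
// skip the rerender only if every top-level prop is the same by
// reference (Object.is).
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  if (prevKeys.length !== Object.keys(nextProps).length) return false;
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

// Gotcha: creating a fresh object or closure on every render defeats
// the memo, because the reference changes each time.
console.log(shallowEqual({ id: 1, title: "a" }, { id: 1, title: "a" })); // true
console.log(shallowEqual({ style: {} }, { style: {} })); // false: new object reference
```

Usage would look like `const Slide = React.memo(SlideComponent)` (component names hypothetical); keeping prop references stable is what makes the memo actually take effect.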
All right, that brings us to the last iteration. Okay, let's click on the VirtualizedList. So this is still the carousel. If we zoom in, you see the four slides, the four columns, of the carousel. And we can see that we have the VirtualizedList at the top, and it's still taking 75 milliseconds. And it's not supposed to be doing anything. Okay, it's rerendering, sure, but we memoized so much stuff, it should not be doing anything, just some small comparisons. It should take less than one millisecond, basically. Okay, we noticed this guy here: AnimatedComponent, displayed in orange. React DevTools will display in orange the components that have a very high self time compared to their total time. So it took a lot of time to render, and it's not because of its children; there's something problematic with this guy here. Okay, so AnimatedComponent here is expensive to render, and it's rerendering every time we scroll. But where is it coming from? Because here, this is the FlatList code, from VirtualizedList to CellRenderer, and here it's our code, our memoized slide: you can see an anonymous memo (we probably shouldn't have an anonymous component, actually, but anyway). So this is actually coming from react-native-snap-carousel, and indeed, AnimatedComponent is what is used to make the nice transitions between slides. But here it's expensive and it's rerendering, so it should at least be memoized, and it's not. So we thought: well, maybe we could patch react-native-snap-carousel. But at this point, we thought: nesting virtualized lists is tricky. The nested list is rerendering because of the parent list, because of virtualization windows everywhere... maybe we can just avoid it? Actually, the answer is yes, because remember when I was saying that, by default, React Native will render ten screens' worth of items ahead in your list?
In our case, we have only four slides. So actually, if we use a FlatList, by default we get no virtualization benefits whatsoever. So we just removed it: we recoded the carousel with a ScrollView and the pagingEnabled prop, and now we have this, with gray everywhere, which means that nothing is rerendering anymore. So we killed all of the excessive rerendering when we scroll down. And now, if we check our score, we went to 90. JS is never dead, and the UI thread is still close to 60. All right, well, this was actually a day's worth of work; this took a while. And so my advice here is: measure your app's performance. Try it out. Chances are that if your app has code that has been there for a while, like if you've been writing code in your app for three years or something, you will likely have performance issues if you've never really checked, specifically if you've never checked on a low-end Android device. So I encourage you to try and test your app on a low-end Android device. You can use the plugin, try to find areas hitting 0 JS FPS or dropping below 60 UI FPS, and fix them, obviously, and then check your score improvement. I mean, the score is always nice, to be able to say "oh, we went from 40 to 90". If you want to tell that to your client, he will usually understand that better than if you talk to him about FPS or something. All right, well, what's next? We also have a high focus on automating measures, because you could see that this was a bit tedious to do, and I'm looking forward to watching the talk from Michał (I don't know how to pronounce it, sorry) from Callstack later on, about something like this. Very excited for this. We also want to add new metrics, like time to interactive, RAM usage, stuff like this, and measure production apps, at least on Android, because we know that we're pretty far from having a Lighthouse for mobile apps right now.
But if we do all of this, I think we're getting closer, and a lot of those features are actually coming over the summer. So if you want to stay tuned, you can follow me on Twitter, or if you have any questions, you can just ping me on Twitter or hit me up during the conference whenever. That's it for me. Thank you for watching, and bye bye, see you soon.