Video details

Nic Jansma: Auditing and Improving the Performance of Boomerang

JavaScript
07.08.2020
English

Nic Jansma

At: FOSDEM 2020 https://video.fosdem.org/2020/H.1309/webperf_boomerang_optimisation.webm
❮p❯Boomerang is an open-source Real User Monitoring (RUM) JavaScript library used by thousands of websites to measure their visitor's experiences.❮/p❯
❮p❯Boomerang runs on billions of page loads a day, either via the open-source library or as part of Akamai's mPulse RUM service.
The developers behind Boomerang take pride in building a reliable and performant third-party library that everyone can use without being concerned about its measurements affecting their site.❮/p❯
❮p❯Recently, we performed and shared an audit of Boomerang's performance, to help communicate the "cost of doing business" of including Boomerang on a page while it takes its measurements.
In doing the audit, we found several areas of code that we wanted to improve and have been making continuous improvements ever since.
We've taken ideas and contributions from the OSS community, and have built a Performance Lab that helps "lock in" our improvements by continuously measuring the metrics that are important to us.❮/p❯
❮p❯We'll discuss how we performed the audit, some of the improvements we've made, how we're testing and validating our changes, and the real-time telemetry we capture on our library to ensure we're having as little of an impact as possible on the sites we're included on.❮/p❯
Room: H.1309 (Van Rijn) Scheduled start: 2020-02-01 14:00:00

Transcript

Hello, everybody. First of all, I am honored to kick off this web performance devroom here at FOSDEM. I definitely want to thank the Wikimedia performance team for organizing this, and all of you for being here today. Thanks for coming to listen to me and the others. My name is Nic Jansma. I work at Akamai on our mPulse product, and specifically I work on an open-source library called Boomerang. Today I'm going to talk about the performance audit that we gave Boomerang: the findings we found, the things that we improved, and how we've made it better.

So why are we here today? Boomerang is an open-source real user monitoring library. It monitors performance when you visit a website: if Boomerang is on it, it's going to capture all the performance data that we can — how fast the page loaded, the characteristics of the page itself. We at Akamai are the primary contributors to Boomerang. Here's the GitHub repository if you care about it, but it is an open-source product. You are welcome to use Boomerang if you want, if you have your own back end that you want to send the data to. If you want to use our product, mPulse, we provide the real-time dashboards to show you all this data.

So we are a third party, if you will. We provide a script that other people can include on their websites to capture this data, and in building a third-party script — especially a real user monitoring script — we think it's very important to not affect the page too much, to minimize our cost. It would be quite a shame if we were a performance monitoring product that also had an effect on the performance of the pages that we're on. In fact, our customers — the people that pay for mPulse — have been increasingly sophisticated in asking us whether we have a cost when they include us on their websites.
So two years ago, what we did was decide to take a step back and look, from a holistic perspective, at the cost of including Boomerang on your page. What is the cost of including the script? We wanted to not only understand it better for ourselves; we wanted to see if there were places that we could improve, and we wanted to be able to share this with our customers. We think it's very important to be transparent. I think it builds a lot of trust with your customers to show the good and the bad of what you have, and to help the customer understand the benefit that you're bringing along with the overhead that it takes. So for the past two years we've been working on performance as one of the main features of the product. We're really trying to improve Boomerang consistently over time, and this is the kind of stuff that makes me passionate, makes our team passionate. We have a team of five developers — a couple of them are here today — and we love performance.

So why should you care? Why are you here? Well, for one, sorry, you're just stuck in the room with me, so you've got to stick with it. We locked the doors; you can't escape. Just kidding. Maybe you're like me — maybe you develop a third-party library. Does anybody else here develop a library that other people use? A couple of hands. A lot of people are web developers, et cetera, and use third-party libraries — they use other scripts to help build their websites. So maybe you've had this happen: if you've been working on a website at a big company, your boss comes by and says, hey, I really want you to include this one new library on our website. It's just this one simple line of code, just a script tag. Please include it. Nothing can go wrong, right? It's very simple. Unfortunately, that one little line means a lot. Including that script on your page means it could stop your page from loading entirely.
It could slow down your website. It could create incompatibilities with other libraries on the page, too. It could change from underneath you — especially if you're loading it from a CDN or from some other place, those JavaScript files could change without you even asking them to. And at the end of the day, it really has complete control over your web page. JavaScript is running in the web page; it can do anything, intentionally or not.

So I wanted to put this into some sort of context for us. As I said, we provide Boomerang for our customers at Akamai. We're on over 14,000 of our customers' websites. We track and measure over a billion page loads a day, which is quite a lot. According to the open-source statistics that I could find, Boomerang is on somewhere between 75,000 and almost half a million websites, depending on which data sources you're looking at, so a lot of people are including the open-source version of Boomerang on their pages as well. And if you take all the different combinations of all the different pages that Boomerang is on, all the different browsers that can load it, all the different people that can load it, all the different geolocations it can be loaded from, and all the different other scripts that could be on the page too — it's kind of scary. There's a really, really big matrix here of things that can go wrong. You could have an incompatibility with another script, you could have edge cases for your performance, et cetera.

So what I hope to do today is share some of the things that we've learned when we evaluated ourselves, so that you might be able to do this as well for any library that you're taking a look at. How would you evaluate the cost of a third-party script? For people that want to include Boomerang on their page, what questions should they be asking about its cost? And how can we, as the provider of a third-party script, convince our customers that the benefits outweigh those costs?
So every script has different aspects to it. One of those aspects would be: how big is it? What's the byte size? Taking an example here, modernizr.js is — sorry, ninety-two kilobytes minified. Is that good? Bad? I don't know. Honestly, it's just a number. At the end of the day you probably want to minimize this as much as possible. Byte size can be an important factor of a JavaScript library, but it's definitely not the only thing; it doesn't tell the whole picture. Page weight is the concept of the total byte size of your page — this includes all your JavaScript, all your images, everything else that's required to build your website. And it's very important: every single byte matters when you're trying to improve the performance of your page, and lowering the byte size of different libraries, choosing thinner libraries and things like that, can really help your performance budget. And while it's probably one of the easiest ways to judge a third-party script, size is definitely only one aspect of it.

So what we decided to do with the audit of our own code was break up the lifecycle of our script into five main phases. First, you have the loader snippet, or how the script gets on the page — maybe you include it in your JavaScript bundle, maybe you load it via a tag manager, but the browser needs to know to load your script in the first place. Second, the browser goes and fetches the script itself, and obviously the fewer bytes you have, the better. The browser then needs to parse it and compile it, to get it ready to be run. Then it runs it for the first time, so there's probably a lot of initialization that happens, maybe registering global variables or handlers. And then runtime — the reason that it's there in the first place, why this library is on your website. All of these are important; together they make up the entire lifecycle of a script.
So what we did for Boomerang is we decided to jump into the developer tools profilers from Chrome and others to show you, at a really high level, what's going on, and then give you details for all the specifics. This is what Boomerang looks like if you're loading it on an empty page. We start with the loader snippet, which is the way that we get Boomerang onto the page in the first place — our customers at Akamai load it from our CDN; open-source users could bundle it into their application bundle, however you want to load it. Then Boomerang gets downloaded, and the browser parses the JavaScript and executes it. We initialize: we set up some global variables, we register event handlers, et cetera. In our case, for mPulse, we actually go fetch some more configuration data via a JSON request, and then we initialize with that. And then finally, the main reason that we're there: we're collecting all this performance information, gathering data on how long the page took, in what we call the onload event handler. We package all that data up and we beacon it out — we send it back to the mothership for processing. So those are the main five stages of our script loading.

Keeping those phases in mind, let's take a look at how you might audit a new third-party script — or how we audited Boomerang — looking at these different phases. The first question is: how does a new script get on a website? Some libraries suggest a very simple script tag — right, a script async tag or something like that. The tag itself has no intrinsic cost; the browser just knows to go fetch that script. Other libraries, like Google Analytics, for example, have a small snippet. This is usually meant to load the script asynchronously — you can see, in very small font right there, that's the Google Analytics snippet — and this has a very minor cost, a couple of milliseconds usually, to execute it.
And then to trigger the download of the JavaScript. We at Boomerang do things a little bit differently: we actually have a more complex loader snippet that we give to our customers. We don't use just a script async tag because, while you can load JavaScript asynchronously, the browser will still wait for that content before it fires the onload event. In other words, even if you have an asynchronous script tag loading your analytics, and that analytics package takes ten seconds to load, the browser is still going to be in its loading state: the loading indicator is still going to be there, the onload event won't fire, et cetera. What we want to do as a performance monitoring script is make sure that we're not blocking, that we're not affecting the performance of the page that we're on. So we have these forty lines of JavaScript, essentially, that tell the browser to load Boomerang in an asynchronous and non-blocking manner. We use an iframe to do this — or rather, we used an iframe to do this. It's a little more complex, and it actually costs a little bit more than the previous ways I mentioned; in our case it could be up to forty milliseconds, and a lot of this is because creating an iframe in the browser actually has a cost to it. So it's a little more expensive, and we flagged it in our audit as one of the main things we could try to improve on: can we get the cost down while still maintaining the non-blocking nature of what we're giving to our customers? If you want more details on exactly how we do everything that we do, it's in the Boomerang documentation; you can read all about it — we've explained it quite a bit more there.

OK, so now the browser knows that you want to load your JavaScript. It's going to go fetch it from the network; this is when the download begins. Every byte that is downloaded affects the overall page weight.
You know, every byte matters here. Depending on how the script is loaded, it could affect other things on the page, too — if it's part of your main application bundle, for example, it will block all the stuff afterward. I know that a lot of people are using more exotic ways of loading JavaScript these days, like ES modules and such, but it's important to keep in mind that everything you are loading has a cost to it. And it's also really important to know that a lot of the libraries you will load — analytics libraries, other widgets, social widgets, et cetera — will often load additional data after they load the JavaScript. They may load other CSS or images, get JSON from various places, for example. In fact, some of you may know a pretty cool tool from Simon Hearne called Request Map. It lets you see all the different things that get fetched, what triggers other things to get fetched, et cetera — it's a good way of knowing the whole cost of a library when you load it. So that's Request Map; check it out. You could also use things like WebPageTest, or look at your network tab, to see all the other things that are downloaded.

If we take a look at a few sample popular JavaScript libraries — I kind of just picked a sample of ones that I knew of and put them in order of size — we can actually see that Boomerang is near the high end of cost here. Simple things like Underscore could be just a couple of kilobytes; big things like the D3 charting library are seventy kilobytes. Boomerang ends up at around forty-seven kilobytes, a little less than fifty kilobytes. It's big. Honestly, this is one of the things that we thought we could improve on. We are doing a lot of things in the library, but maybe there are ways for us to have it not be such a big library.
Also, I did want to point out that the build I was talking about is the mPulse-specific version of Boomerang, which is a pretty big build for all of our customers. If you're using the open-source version of Boomerang, you can choose to build it smaller: we have a plugin architecture and you can choose which plugins to include or not. So if you don't need the single page app support and JavaScript error tracking that we include for mPulse, you could trim it down to about half the size, even.

OK, so now the browser has downloaded your JavaScript bytes. What does it have to do? It actually has to parse and compile it, and it has to do this before it runs any of it. The main idea here is that the more bytes there are, the more work it is to parse and compile, et cetera. So again, you want to minimize byte cost, because this does have a cost. For Boomerang, in a modern browser on a modern device, it's ten milliseconds or so — not a lot. But some of the bigger libraries, like Angular, can be quite a bit more: twenty, twenty-five milliseconds to just get it ready before it even runs. So again, building a smaller library can really help with a lot of this.

OK, the second-to-last phase is initialization. This is when the browser has everything ready and runs the script. For every script this is different, right? It's whatever they're doing on the page: they might be registering global variables or hooking into different events, they might be fetching more resources — maybe it's a social widget. Hopefully it's only doing what you're asking it to do. In our case, for Boomerang, we register a couple of variables and listen for the onload event and some other events, depending on the features that are enabled. We don't do a lot of work here — only ten milliseconds or so; not too bad, I would say. We did find a couple of ideas for improvements here: we might defer some of the work that we're doing.
We might break up some of the work that we're doing — but not too bad overall. And then finally, the runtime of the script itself. Again, it depends on exactly what you're using the script for. If it's a utility script, maybe you're calling into it quite a bit. If it's a social widget, maybe it's loading all the likes for something. Maybe you have a Bitcoin miner and it's mining Bitcoin, for whatever reason. For us, for Boomerang, this is where it does the majority of its work. After the page load has happened, we look at all the performance data for that page, we package it up, we compress it, we put it on the beacon, and we send it out. We're capturing things like all the resources that were fetched on the page, how long it took to navigate, what the DNS time was, and a lot of other things like that. If there were any JavaScript errors, we package those up and send those as well. In a lot of cases, on most sites, this wasn't taking too long — say, less than fifty milliseconds. But we did find in some examples, especially on lower-end devices and more complex web pages, that we were taking 300 milliseconds or more. And that's starting to get into the territory where a visitor would notice — where we would actually be affecting the performance of the page. So this is one of the areas that we flagged that we really wanted to dig into and figure out if we could improve, and I'll talk about some of the things that we found later.

One thing I did want to point out really quickly, too: all of the boldfaced phases up there are part of the critical path of the browser using your JavaScript. All of those things are generally done serially on the main thread, depending on the browser. Any of these things that you're doing here are affecting the rest of the site being built, and if the user is trying to interact with the page, they're potentially affecting those interactions — so you're delaying the user's input, et cetera.
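As an illustration of that data-gathering step, here's a minimal sketch of collecting Resource Timing entries and compacting them into small tuples before beaconing. This is not Boomerang's actual algorithm — its real compression is far more elaborate — and the function name and tuple format are made up for this example.

```javascript
// Hypothetical sketch: compact Resource Timing entries into small tuples
// before putting them on a beacon. Taking the entries as a parameter
// (rather than reading performance directly) keeps this testable anywhere.
function compactResources(entries) {
  return entries.map(function (e) {
    return [
      e.name,                                    // resource URL
      Math.round(e.startTime),                   // when the fetch started (ms)
      Math.round(e.responseEnd - e.startTime)    // total fetch duration (ms)
    ];
  });
}

// In a browser you would feed it the real entries:
// compactResources(performance.getEntriesByType('resource'));
```

Rounding to whole milliseconds and using positional tuples instead of named fields are the kinds of small tricks that shrink the beacon payload.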
So what you really want to do when you have a third-party library or a script like this is minimize the work that you're doing, and break it up. If you know you need to do a really big calculation, try to break it up into pieces. This avoids what are called long tasks — tasks running on the main browser thread that would potentially block user input if the user is trying to click or scroll, making for a non-responsive user experience.

So after everything that I've said, these are the numbers that we came up with at the end. We did a lot of investigations on a lot of different sites, we took a lot of profiles, we wrote about it, and we published it in a blog post. This is the TL;DR of the costs of Boomerang. The loader snippet: generally only a few milliseconds, though on some browsers it can be up to forty milliseconds. Downloading: we're about fifty kilobytes or less. Parsing: again, that's related to the size of Boomerang, and across the low-end to high-end devices we were looking at, anywhere from six to forty-seven milliseconds. Initialization: again, less than fifteen milliseconds. Onload: depending on the work on the page, we could be doing a lot here — we could be spending upwards of 300 milliseconds, which is quite a bit. And then we package all the data: I won't talk about the beacon much, but we have to send the data somewhere, right? That could be anywhere from two to twenty kilobytes or more, depending on the complexity of the page we're looking at.

An important part of this, too, was that we filed bugs for everything that we found in every little trace that we looked at — every idea, big or small. We filed them in our GitHub repo; you can actually check them all out there on GitHub. And since then, we've been trying to make steady progress on fixing them, so I'll go over a sample of some of those in a little bit.
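The "break it up" advice can be sketched like this — a hypothetical helper (not Boomerang's actual API) that processes a batch of work in small chunks, yielding back to the event loop between chunks so no single long task blocks user input:

```javascript
// Hypothetical sketch: process items in small chunks, yielding between
// chunks so the main thread stays responsive. Names are illustrative.
function processInChunks(items, workFn, chunkSize) {
  return new Promise(function (resolve) {
    var i = 0;
    function nextChunk() {
      var end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) {
        workFn(items[i]); // do one small slice of the work
      }
      if (i < items.length) {
        // Yield back to the event loop between chunks so user input
        // (clicks, scrolls) can be handled in between.
        setTimeout(nextChunk, 0);
      } else {
        resolve();
      }
    }
    nextChunk();
  });
}
```

In a browser, `requestIdleCallback` (where available) is an even friendlier yield point than `setTimeout(..., 0)`, since it runs the next chunk only when the browser is otherwise idle.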
But one thing I did want to chat about really briefly is the tooling — some of the tools that we used to do this evaluation. We really heavily relied on browser developer tools. All the browsers today have really good developer tools. We used the profilers from multiple browsers, because all browsers behave slightly differently. I know that many of you — maybe some of you — are not super comfortable with things like profilers, but they can really give you the insight that you need to look at a third-party script and evaluate it. Profilers can show opportunities. My advice, if you're new to profilers and want to look but just don't know where to start: take your time, look at the big picture, try loading the third-party script on a blank page and see what it does, and look for the extremes — the longest amount of time something runs, or the largest call stack. Those can really point to different opportunities, or places that are not performing the best. There are a lot of other free, open-source tools for evaluating different libraries: Lighthouse from Chrome, obviously, is a fantastic resource; Request Map, which I talked about earlier; WebPageTest; the list goes on and on. I also have a tool that I've made called 3rdParty.io that helps you audit third-party scripts, and I'll show you more about that later.

So as I said, we performed this audit, but one of the main takeaways from it was that we wanted to improve. Along the way we filed a lot of different ideas for improvements, and I'm going to go over some of those. I don't think they're all going to be appropriate for everybody — you may not care about some of these — but I think a lot of them are somewhat interesting, and maybe they'll spark some thoughts if you're working on a library or a script, some ideas or ways that you can improve yourselves.
But it's also just good to understand some of the ways that you can improve. So one of the things I talked about initially was our loader snippet. Again, this snippet that we give our customers helps make sure that Boomerang is loaded in a non-blocking manner, but it's expensive — it takes a lot of time to run. We looked at some of the features of modern browsers, and we found something that allows us to load Boomerang in a non-blocking manner much more cheaply. If the browser supports the preload feature, we can rely on preload to load Boomerang in a way that's a lot better. In this case, instead of forty milliseconds it only takes a millisecond — it's basically a no-op; you can barely see it in the profiler anymore. A lot of research went into this methodology, we tested it on a ton of websites, and we're still deploying it out to some of our customers. We've documented it all in our Boomerang documentation, but it was a big win for us to be able to point to a new methodology that still allows you to load the script in a non-blocking way but is way cheaper.

The other big area that I pointed out was that we were doing a lot of work during the onload event. This is when we're capturing all the performance data on the page, compressing it, putting it on the beacon, and sending it back out to our back end. Unfortunately, this was also one of the most expensive things that we did. One of the things that we found: we actually capture all of the resources that were fetched on the page — we look at all the timings and we save all the URLs, we compress them, and then we put them on the beacon. This allows our customers and open-source users to look at the full waterfall of a page load for every single page that is measured. We compress it down because otherwise it's really big — a lot of data — but this was taking a long time to do. So we took a look at the algorithm.
We tried to figure out ways that we could improve it, and we actually found a way to decrease its efficiency slightly — it produces maybe two percent larger payloads than before, so it's a little bit less efficient at compressing — but it is four times faster to run. With just this little tweak of the algorithm, we thought that tradeoff — a 4x speedup for a minimal cost in byte size — was obviously worth it. And on a lot of the sites, the ones that were taking 300-plus milliseconds are now under a hundred milliseconds, seventy-five milliseconds, et cetera. So that was a big win for us.

Another big thing that we wanted to focus on was just reducing the size of Boomerang in general, and we actually had quite a few improvements in this area. For one example, an open-source community member noticed that we include a lot of debug messages even in our production builds. These are things that we use in our debug builds to understand what's going on better, but the messages, even though they would never be used, were put into our production builds. So we stripped all those out — that saved six percent of the bytes. We did other things like changing our minifier from UglifyJS 2 to 3, which saved a couple of bytes. We weren't using Brotli before — we were just using gzip — and luckily the Akamai CDN knows how to do Brotli, so we enabled that. That was a really easy, cheap win for us; it saved eleven percent of the bytes. And then we did things like refactoring our plugins — we have things like our SPA and MD5 plugins, and we were able to shrink those down tremendously and save a few more bytes there. I'll show you some of the details about that in a bit.

And one other thing I don't write and talk about too much is our cookie. We set a first-party cookie on all of the pages that we're on to help measure sessions.
That's how many pages a person visited on your website, for example. Our customers want the cookie to be as small as possible, and in some cases we were over a kilobyte big. We found some ways to reduce the size of the cookie — for example, instead of storing the full URL of the page, we hash it and just store the hash, because we do comparisons of URLs in various places anyway. At the end of the day, we ended up making it forty-one percent smaller, which a lot of our customers were happy about. And then another thing that we found while profiling was that we were reading and writing the cookie a lot. We were doing twenty-one reads of the cookie on a traditional, regular page load, and we reduced that to two; we were doing eight writes, and we reduced that down to four. So we just made it a lot more efficient.

And then the final big area of improvement was simplifying our plugins. One of the things here was that we were actually using MD5 to hash things like URLs. One of our developers found a replacement for MD5 that is about five percent of the size of MD5 and three times faster, and for our needs it provided hashes that were just as good. That gave us a really big byte savings and speedup. And then we did another refactoring of our single page application plugin that made it smaller as well.

So with all that, we actually have some improvements over the numbers I showed earlier. The loader snippet in modern browsers is generally only a millisecond or less. I talked about a lot of different ways that we reduced the size of Boomerang; from all those changes — if you can see the number here — it's actually the same size right now. We also worked on new features in the meantime, and those new features ended up eating a lot of the byte savings that we made over the same period.
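As an illustration of the kind of small, fast, non-cryptographic hash that can stand in for MD5 when all you need is "are these two URLs the same?", here's a 32-bit FNV-1a sketch. FNV-1a is chosen here only as an example of the technique; it's not necessarily the exact hash Boomerang adopted.

```javascript
// Illustrative FNV-1a 32-bit hash: tiny, fast, non-cryptographic.
// Good enough for equality comparisons and for storing a short token
// in a cookie instead of a full URL.
function fnv1a(str) {
  var hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (var i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    // Multiply by the FNV prime 16777619 (0x01000193) using shifts
    // to stay within 32-bit integer arithmetic.
    hash = (hash + (hash << 1) + (hash << 4) + (hash << 7) +
            (hash << 8) + (hash << 24)) >>> 0;
  }
  return hash.toString(36); // compact base-36 string for a cookie
}
```

A 32-bit hash rendered in base 36 is at most seven characters, versus the dozens of characters a full URL (or a 32-character MD5 hex digest) would occupy in the cookie.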
So that's definitely something you've got to keep an eye out for: the needs of performance versus the needs of new features can often take away the improvements that you're making. It's still something that we're focusing on, and we have other ideas for reducing the size. But I guess another way to look at it is that Boomerang would be a lot bigger now, with all these new features, had we not spent the time to also do these improvements. And then the other big section was the onload — notice that we made it a lot faster. There are still a lot of opportunities here; if you take a look at this list, these are things that we still want to make better, make faster if possible. We're hoping to focus in 2020 and beyond on further reducing the size of Boomerang, further reducing the overhead, making it more efficient — making it much less likely to have an observer effect on any website it runs on. Again, we're tracking all these improvements in GitHub.

One thing that we kind of focus on — and I felt this was a really good quote from Rico Mariani from Microsoft — is: "In a mature product with a healthy process, you're much more likely to see a 50 percent gain come in the form of many 5 percent gains compounding." In other words, you're not always going to find really big low-hanging fruit that's just going to make it 50 percent faster. It's going to take constant iteration: finding little things, doing little tweaks here and there, making improvements over time, and continuing to make those improvements over time.

Another important thing to us, now that we've made all these improvements, is: how can we make sure that we don't accidentally regress later? It's very easy to undo your hard work with a couple of mistakes. One of the things that we worked on is what we call our Boomerang Performance Lab, which is a suite of tests that we run in our CI environment.
It's a bunch of scenarios — simple scenarios and metrics that we capture with a headless browser. We do things like looking at the CPU time from the profiles for all the work that we're doing. We have various counts and durations that we measure using user timing marks and measures, and we look at simple things like the code size that we have, and we can plot all of this over time.

Another way of holding the line is looking at real-time telemetry. If you are already sending data from your library, you could look at things like runtime stats for different events, or whether you're triggering any long tasks. For us, for example, we actually capture all of the errors that we throw within our own code — we have a big try/catch around everything, essentially — and we package those onto the beacon and send them into our real-time dashboards. So we can see in real time, as new versions of Boomerang come out, as new customers come online, as new browsers come out and break things, what's happening with the health of our library. These are all the different JavaScript error messages that we've been seeing recently. If you have all this data, it can really provide a look into the health of your system.

And finally, there is a really cool new API out there called the JavaScript Self-Profiling API that you could use for your website. It's an origin trial right now, but it lets you get sampled profile traces in the wild from a subset of your visitors. If you have a way of looking at these sampled profiles in aggregate, you can get a sense of how expensive your code is when it runs in the real world, not just on your development machine.

So that's what we did. What can you do? Let's say, again, your boss comes to you and wants you to add a new script to your page. What should you do? Well, you can perform a lightweight performance audit.
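The "big try/catch around everything" idea can be sketched as a wrapper over a library's entry points. The names here are illustrative, not Boomerang's actual internals:

```javascript
// Hypothetical sketch: wrap a third-party library's entry points so
// internal errors are collected for telemetry instead of throwing into
// the host page.
var collectedErrors = [];

function wrapForTelemetry(fn, name) {
  return function () {
    try {
      return fn.apply(this, arguments);
    } catch (err) {
      // Record the failure to attach to the next beacon, and swallow it
      // so the third-party script never breaks the page it's included on.
      collectedErrors.push({ fn: name, message: String(err && err.message) });
      return undefined;
    }
  };
}
```

Anything the wrapped functions throw ends up in `collectedErrors`, ready to be attached to the next beacon and charted in a real-time dashboard, instead of surfacing as a JavaScript error on the host page.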
You don't have to do all the stuff that we do, or go into that much depth, but it can be useful just to see exactly what a script is doing when you put it on your page. Put it in an empty page and take a profile trace. See what it does, see what it loads, and make sure its benefits outweigh its costs.

I think you should also ask the vendor: have you performed a performance audit? How does your library perform? I think it's good for us as a community to share this information and be transparent; I think it will help everybody. I'm a little opinionated on that, but it would be great if other people did this as well.

And then, for every library that you include on your page, try to have an owner: an internal champion who knows why that script is there, what it's doing, and when you can remove it. If you have somebody who really knows this, or if you've documented it somewhere, it can really help over the life of a web application, with all the different scripts that get included.

Besides all the performance things that you can see in a profile, there are a lot of other things third-party scripts should be doing. There's pretty much a checklist of best practices: try to load the script from a CDN (or your own CDN); make sure it's compressed; make sure it's minified; make sure it has the right caching headers; make sure it's minimizing the amount of work it does, both on the CPU and over the network; make sure it's minimally touching the DOM; and so on. And then make sure it's not doing a bunch of bad things: make sure it's not triggering JavaScript errors, make sure it doesn't have the debugger keyword in it, make sure it's not throwing alerts. These are all things that scripts ideally should not be doing if you're including them on your page. One of the tools that I made to help you check this is called 3rdParty.io.
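The delivery parts of that checklist (compression and caching headers, at least) can be checked mechanically. Here's a minimal sketch of that kind of check; it's not taken from 3rdParty.io, which looks at far more than this.

```javascript
// Minimal header-audit sketch (illustrative; a real tool like 3rdparty.io
// also checks minification, JavaScript errors, the debugger keyword, etc.).
function auditScriptHeaders(headers) {
  const issues = [];

  // Normalize header names to lowercase for lookup.
  const h = {};
  for (const name of Object.keys(headers)) {
    h[name.toLowerCase()] = headers[name];
  }

  if (!/\b(gzip|br|zstd)\b/.test(h["content-encoding"] || "")) {
    issues.push("response is not compressed (no gzip/brotli Content-Encoding)");
  }
  if (!/max-age=\d+/.test(h["cache-control"] || "")) {
    issues.push("no long-lived caching (Cache-Control max-age missing)");
  }
  return issues;
}

// Usage: a well-configured script passes; a bare response gets flagged.
const good = auditScriptHeaders({
  "Content-Encoding": "br",
  "Cache-Control": "public, max-age=86400"
});
const bad = auditScriptHeaders({ "Content-Type": "text/javascript" });
```

In practice you'd fetch the script's response headers from a headless browser or an HTTP client and feed them through checks like these.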
You take any JavaScript URL and paste it into the text box. It runs that JavaScript in a headless browser, goes through the checklist that I just told you about (and more), and gives you a list of best practices: is this script loading too much? How does it compare to all the other scripts that we've looked at? It's a free tool; please use it, and let me know if you like it.

Here are a bunch of links. I'm sure we're going to share these slides, but this is pretty much everything I've been talking about: the audit that we published, the update we just recently published with a lot of the stuff I talked about today, the Perf Planet calendar article, and 3rdParty.io as well. Thank you, that's all I have for the slides.

Do we have a few minutes for questions? OK, we have three or four minutes if anybody has one.

So the question was: we said we cut down the number of reads and writes to the cookie; is there a performance implication to doing that? We actually saw those cookie reads and writes in the profiler quite a bit. My guess is that because cookies get persisted to disk storage, you may see them pop up a little bit more because of that. After we did this optimization, we don't see cookie reads and writes showing up as much. What we were doing was excessive: 21 reads of the cookie doesn't seem necessary. A lot of those were probably cached in memory anyway, but we figured we could do better.

Next question: yes, we're doing some computation with the beacon on the main thread, basically. The question was: since we're doing all this work on the main thread to compress the data into the beacon and so on, have we thought about moving it off the main thread? We have thought about that a little bit. It would be quite a complex thing for us to get a Web Worker working on all of our customers' sites as well.
So we played with the idea, but ultimately we decided to do everything in the library itself, as opposed to in a Web Worker.

The next question was: they're using canvas a lot, and what can you do to profile what it's doing behind the scenes? I don't have a ton of experience with canvas. I would assume that the profiler would be able to tell you a lot about what it's doing; there are still all these interactions with the APIs that you're making. Profilers like Chrome's will show you the frames when they're rendered, so you can see all the work that led up to each frame. So I would just say: profile it yourself. If you're using a library to draw to the canvas, like D3 or something like that, you can certainly put it through its paces in the profiler as well, and make sure it's being loaded in all the right ways. But those libraries are generally pretty well optimized for drawing.

Do we have another minute or two? One more question over there, and we'll take more questions afterward if anybody wants.

How does this compare to other products, like Sentry, the JavaScript error reporting service? So the question was: how does Boomerang itself compare to something like Sentry? I haven't used Sentry very much. Boomerang does capture JavaScript errors on the page, and we do report those in real-time dashboards, providing a similar-ish service. I'm not sure of all the comparison details between the two, but it's certainly worth checking out if you're interested.
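To go back to the cookie question for a moment: cutting roughly 21 cookie reads down to one essentially means caching the parsed cookie in memory and only re-reading it after a write invalidates the cache. Here's a minimal sketch of that idea (assumed names, not Boomerang's actual code), with the raw read injected so it runs outside a browser; on a real page the reader would be `function () { return document.cookie; }`.

```javascript
// Sketch of memoizing cookie reads (illustrative, not Boomerang's code).
function createCookieCache(readCookieString) {
  let cached = null; // parsed name -> value map, or null if stale
  let reads = 0;     // how many times we touched the underlying cookie

  return {
    get: function (name) {
      if (cached === null) {
        reads++;
        cached = {};
        readCookieString().split("; ").forEach(function (part) {
          const eq = part.indexOf("=");
          if (eq > 0) {
            cached[part.slice(0, eq)] = part.slice(eq + 1);
          }
        });
      }
      return cached[name];
    },
    invalidate: function () { cached = null; }, // call after writing the cookie
    readCount: function () { return reads; }
  };
}

// Usage: repeated gets hit the in-memory copy, so only one underlying read.
const jar = createCookieCache(function () { return "RT=abc; SESSION=123"; });
jar.get("RT");
jar.get("SESSION");
jar.get("RT");
```

The trade-off is staleness: if something else on the page writes the cookie, the cache must be invalidated, which is why writes and invalidation have to go through the same wrapper.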