Video details

"Time Travel Debugging JavaScript Applications" by Cecelia Martinez (Strange Loop 2022)


Developers spend up to half their time debugging, but often struggle to reproduce and investigate issues with existing developer tools. A time travel debugger lets you record a bug, then pause, rewind, and fast-forward your application execution to dig in at specific points in time. You can even go back in time and debug by adding console logs and evaluating expressions retroactively.
This talk will walk through some common bugs that occur in JavaScript applications and how to approach them with time travel debugging. Debug along with interactive recordings using Replay to get hands-on practice with real-world examples.
Cecelia Martinez, Community Lead (@ceceliacreates)
Cecelia Martinez is dedicated to building developer communities that are inclusive, constructive, and make software development a better experience for all. Her role as Community Lead at Replay involves coding, writing, speaking, teaching, and most importantly listening. She is a lead volunteer with Women Who Code Frontend, chapter head of Out in Tech Atlanta, a mentor, and part of the GitHub Stars program.


Hi everyone. I am Cecelia Martinez, and this is Time Travel Debugging JavaScript Applications. If you'd like to follow along in the slides or save them for later, feel free to scan the QR code or grab that URL. I will also show this again at the end of the presentation, so if you want to wait until then to grab them, feel free. So, I am Cecelia Martinez. You can find me at ceceliacreates on Twitter or GitHub. I am currently a developer advocate at a company called Ionic. It is an open source mobile SDK for web developers. I have been very fortunate to work at a number of different open source developer tools during my career. Previously I was at Cypress, which is a software testing automation framework, and also at Replay, which is a free debugging tool that we'll be talking about a lot today. So I am an unofficial community ambassador, so to speak, for all of those tools; I tweet about them quite a bit and like to teach in those areas. So, I'm here to talk to you about time travel debugging specifically. But before we get into the future, I'd like to start by taking a look at the current state of debugging. Developers spend a lot of time debugging, and by a lot I mean a lot. Everyone in this room probably feels this, but statistically we actually spend up to half of our time debugging and maintaining software. We talk about how developers are more like mechanics, just trying to keep the car running, versus architects building out new features and functionality for their users. And debugging is complex. It's complex because our applications have become more complex. I specifically work with web applications, so I'm talking about JavaScript applications today. And we've seen a lot of logic move to the front end of applications, with less and less living only on the back end. So the things that are happening in the browser have become more complex, right?
So if you think about an ecommerce app and an Add to Cart button, very simple functionality: you want to click the button and add an item to a cart. All of these things could potentially take place in the flow of your application from a single button click. So if you go to click Add to Cart and nothing happens, there's a lot of area to cover when debugging. You have everything from your event listeners and handlers. Maybe you're setting a loading state while you're checking your back end and sending off that API request. Maybe you're updating a database to reduce the amount of inventory available in your data. Maybe you're checking a third-party microservice because it's actually another vendor that provides that item. And then you come all the way back to the front end, handling the response, updating your loading state, and finally updating the DOM. So again, if something goes wrong, there's a lot of complexity that you have to then debug. Most developers that work with web applications are also using frameworks. Frameworks like React, Angular, and Vue, or maybe Svelte or one of the other new ones that are out there these days, add additional complexity. So instead of our flow looking like this, it's actually more like this, because you have things happening in between, where you're interacting with the framework itself. For example, you may need to emit an event from that button component up to a parent component. You may need to dispatch an action to your state store in order to update that loading state. You may be using a lifecycle hook in order to send out that API request when that button is clicked. So again, these are all additional things that could go wrong that you have to then debug. We're also limited by our current debugging tools. Show of hands: who here has ever been taught how to debug? Somebody sat you down and said, this is how you use debugging tools, this is how you should start when something goes wrong?
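The Add to Cart flow described above could be sketched roughly like this. Every name here (onAddToCartClick, checkInventory, decrementInventory) is illustrative, not a real API; the point is how many steps sit between one click and the DOM update:

```javascript
// Hypothetical sketch of everything a single "Add to Cart" click can touch.
// The api object stands in for your back end and any third-party vendor.
async function onAddToCartClick(item, state, api) {
  state.loading = true;                              // set a loading state
  try {
    const stock = await api.checkInventory(item.id); // back-end / vendor check
    if (stock < 1) throw new Error("out of stock");
    await api.decrementInventory(item.id);           // update the database
    state.cart.push(item);                           // handle the response
  } finally {
    state.loading = false;                           // clear loading; DOM updates
  }
  return state.cart.length;
}
```

If the button click appears to do nothing, the bug could be in any of these awaited steps, which is exactly the "lot of area to cover" problem.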
Okay, so those who raised their hands, very lucky. I did not have that experience. It was very much a, oh no, something's wrong, there's no error message, what do I do now? And a lot of trial and error and learning how these tools work, right? And a lot of times we'll know maybe one or two functionalities within a developer tool, but not necessarily all of them. Some of us may know what a breakpoint does, but not exactly how to use it effectively. So let's take a look at a bug. This is Excalidraw. It is a whiteboarding app; it's open source, a developer tool. And I use it quite frequently. We're going to take a look at a bug that took place in Excalidraw. Anybody here ever use Excalidraw before? Yeah. Cool. All right. So again, a whiteboarding tool. You can type, you can draw shapes. And in this bug specifically (this is a video of the bug), when you have a shape, in this case a square, and you resize it down to zero, eventually that shape locks up and you can't resize it or interact with it anymore. There is a link on the bottom if you want to follow along with the replay of the bug. But essentially, that's what we're dealing with here. And we can see visually on the screen what's happening. I was able to record and reproduce this bug. So what are some of the ways that we could approach debugging this with the tools that we have? One of the first things that everyone does is look for logs or error messages in the console, right? And this is what the console shows in the replay, in the recording of the bug that we have here. So there are some things here, but they're related to analytics, a couple of warnings, nothing specific to what we're seeing on the screen and what's related to our bug. So logs and error messages: you're reviewing them after the fact, after the bug has already occurred. You can think of it like an informant, or a man on the inside.
But it's only helpful if that person, or that log, is at the right place at the right time with good information. And it requires developers to anticipate where things could go wrong. It requires us to try to see into the future and predict where errors might occur. And if I could predict where every bug in my application would be, I would probably never have bugs again, which is not the case. We can also use live or runtime debugging tools, things that allow us to inspect our application as it's running. In browser developer tools, for example, or in your IDE, you may have the ability to inspect your HTML and your CSS. You can also use framework developer tools for React, Angular, or Vue to inspect your component props and state. But again, all of these require you to manually reproduce the bug and try to catch that moment in time. So, looking again at our resizing-down-to-zero bug, I'd have to try to catch the moment that the bug occurs once it hits that point, or I'd have to recreate it over and over and over again. If you think about live debugging as a mechanic who starts the engine and then hits the gas pedal to see what's happening in the engine as it's running, you'd have to start the engine and reproduce that over and over and over again to try to catch what's wrong and inspect different things each time. So, speaking of time, what we're missing is time, the dimension of time in debugging, right? Things don't happen statically in our applications. Things happen over periods of time. Application renders, event handlers, network requests that go out and then you get the response: all of these happen over a period of time. Traditional debuggers only deal with one specific point in time, whether you're pausing execution at a breakpoint or you're reviewing it after the fact.
So usually this happens after the error has already occurred, and you're trying to piece together evidence after the fact, like a detective, right? That is where time travel debugging comes in. Time travel debugging allows you to record and replay a trace of your application execution. This gives you the ability to rewind, fast forward, and pause at different points in time in order to debug. Some of you may have already used time travel debuggers. There are some of what I would call full-fledged time travel debuggers. There's one for Windows native code, and there's also UDB by Undo, a time travel debugger for GDB-compatible languages like C, C++, Rust, and Go. Those allow you to record a trace of the execution itself and then replay it in order to debug. It is performance-heavy at recording time, but it captures every single thing that's happening in the execution. You may have also seen debugging tools that have elements of time travel in them. One example is Cypress, a software testing framework that has a command log showing every single command that your test executes while it's running. You can then go back and hover over each command, and the application preview on the right-hand side will update to show you what the DOM looked like at that point in time. They call that time travel. That was actually my first encounter with time travel debugging. There's also Playwright, another test automation framework, this one from Microsoft, that allows you to record a trace of your test execution and then view that trace after the fact. It records network requests, it records the actions that your test code is running, and it allows you to jump back and forth between different points in time. Another one that you may have seen is Redux DevTools.
So Redux DevTools, as you're interacting with your application, records all of the dispatches that are taking place and allows you to jump and navigate back and forth between them. And then there's Replay. Replay is a time travel debugger for the browser and also for Node. This is the Replay browser; I'm going to be focusing on web applications today. The Replay browser essentially allows you to start and stop a recording of your application execution and then create a replay of it that's shareable via URL. You can use that URL to attach it to bug reports and to share with your team in order to collaborate and have the reproduction of the bug. So you can capture it once and it's reproducible forever. Anybody who's ever worked on open source or on support teams (I started in technical support) may have dealt with this before, where people want you to fix issues but do not provide a reproduction of the bug, which makes it really, really difficult to understand what's going on and fix it. With Replay, you're able to have that recording of what actually happened at that point in time. The way that it works under the hood is pretty complex. It uses what we call effective determinism. Replay is not fully deterministic: we partially record the low-level system interactions that are taking place in order to reconstruct them on demand in a virtual environment when you replay. So it's not a full-fledged recording of everything that took place; it's a recording of enough information to be able to literally reconstruct that in a browser, in a virtual environment, when you go to replay it. We are able to reconstruct what would have been at that point in time by doing it this way, because otherwise, if you recorded everything, the overhead becomes too much, especially with a browser or web application, to be able to replay everything.
If you want to know more about how that works, the Effective Determinism post on the Replay blog digs a little bit more into the technology behind it, with some examples of how we debug our own back end using Replay as well. But essentially, we're able to simulate the world as it was when the bug occurred. So when we talk about time travel debugging, it's not just traveling within the recording, fast-forwarding and rewinding; you're literally traveling back in time to when the bug occurred. Same kind of machine, same operating system. If the bug took place at noon in Japan, the browser is going to think that same exact situation is happening when it replays the bug. So this is an example of a replay. The view we were looking at earlier was Viewer mode, which has a nice big screen showing the visual of what was happening in the DOM. It also has debugging tools built in, so you're able to see all the source code that was captured in the browser. You have a console, you have your element, React, and network inspectors, and you're able to scrub back and forth through the execution of the code. You're even able to add console logs retroactively. So if you didn't have a console log in your code before the bug occurred, you can go back and add one after the fact, and you can evaluate variables at different points in time to see what the value was before, during, or after the bug's execution. It's almost like you become the time traveler yourself. Instead of saying, oh, I wish I would have had a console log on that line, and then having to open up your dev environment, recreate the specific version of the code that matches the bug, add the console log, refresh your browser, hope it was the right spot, find out it wasn't, and add another one (console log here, here, here, one, two, three), you can add them retroactively within the replay itself. So back to our Excalidraw example.
This is a replay of the bug, in Viewer mode. And this is a real bug; I actually experienced it in Excalidraw and was able to reproduce it. It was already reported on GitHub, and I was curious to see if I could debug it. I was not familiar with the codebase at all, but Excalidraw is open source and publishes its source maps, so I was able to get in and see what was happening. Using the built-in React DevTools to inspect, I was able to find some context about what was happening. I found a resizing element property in the React state, and I could see that it had some information about the square we were resizing: the width, x and y values, scrollX and scrollY, and also scaleX and scaleY values. From there I searched for a resize function and found resizeSingleElement. This was all within the replay; I never pulled down the codebase. So I found that resizeSingleElement function and was able to see that it executed 95 times in the recording. This is where time travel comes back in. Because we have a recording over a period of time, I can get information about the code execution during that period of time. This helps me isolate where I want to dive in for debugging. So I retroactively added a console log to log the width of that element, the variable for the element's new width, and I was able to zero in on when it got closer to zero. Eventually it hits NaN: the element's new width becomes not-a-number, and it happens right after it gets to zero. So I was able to isolate that point in time, and that's where the red line is on the log there. This allows me to trace the bug up from there. So I added another console log further up, where the element's new width is being defined. And I did it conditionally, because I really only care about it once the element gets smaller.
So I said, if the element width is less than four, then log scaleX, because I could see that the element's new width was being calculated based on scaleX. Going back to the point in time where I'm locked in with that red line, I can see that after the element's new width becomes zero, scaleX becomes negative Infinity. That's a good sign we're getting closer. And once you're locked in on a point in time, you can even hover over the variables in your code and see what the value is at that point in time. So here I'm locked in at the moment where scaleX is negative Infinity, and I can see that it's calculated by dividing by a variable for the bounds' current width. If I hover over that variable, it's zero. Math does not like it when you divide by zero, and everything breaks after that point. Everything after that point, scaleX is negative Infinity and the element's new width is NaN, and we're no longer able to run that resize function. Because of this, I was able to add the replay to the GitHub issue. It had been stale for almost a month, and they were able to push a fix the next day. So this is an example of how the time travel element allowed me to focus in on what was happening at the specific code execution level and trace those variables back, without having to recreate everything and try to add all those console logs manually. With better debugging and better debugging tools, we can better understand our applications. Again, I was not an Excalidraw contributor before this. I didn't know the codebase, but I was able to learn it because I had better debugging tools at my disposal. Kannan Vijayan, a software engineer at Replay who also wrote the Effective Determinism blog post about how Replay works, put it this way: we don't always understand all the possible ways that the programs we have written can execute. When the program behaves in a way we don't expect, we are often at a loss, and we want to know why.
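The failure chain from the recording comes down to IEEE-754 arithmetic: dividing by zero yields positive or negative Infinity, multiplying Infinity by zero yields NaN, and NaN then poisons every later calculation. A minimal sketch (the names echo the talk, but the math is illustrative, not Excalidraw's actual resize code):

```javascript
// Sketch of the bug's arithmetic: once boundsCurrentWidth hits 0,
// scaleX blows up to ±Infinity and the new width becomes NaN forever.
function resizeStep(elementWidth, targetWidth, boundsCurrentWidth) {
  const scaleX = targetWidth / boundsCurrentWidth; // 0 denominator → ±Infinity
  return elementWidth * scaleX;                    // 0 * -Infinity → NaN
}
```

The conditional logpoint described in the talk is effectively `if (elementWidth < 4) console.log(scaleX)`, added retroactively inside the recording rather than in the source.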
So by seeing the application execution over time, we get a clearer picture of what our application actually does versus what we think it does, because those two things don't always line up, probably less often than we'd like, right? This helps us better understand our app, and it also helps us identify systemic causes of bugs. The interesting thing about time travel debugging specifically is that a trace, a replay, captures complex information about our application execution. There's a lot of data in there, and we can use that data in a wide variety of ways. Debugging is just the beginning; it's just one application of that information. The Replay Protocol, a publicly documented API that allows you to interact with the information recorded during a replay, lets you build out different types of functionality. Some examples we have: code coverage extraction, seeing what code ran during the replay in order to calculate how much of your code actually executes. Another is React mount detection: are you triggering more component mounts than you expect? You can use the information recorded during a replay to extract and calculate that. There's also information you can get about your automated tests. Replay allows you to record your automated test execution, and if you think about that, you can, over time, understand when tests pass, when tests fail, and what has changed within the context of that execution. Better debugging also leads to a better understanding of frameworks. I mentioned earlier that frameworks add complexity to our applications. And sometimes we have to debug the framework, not necessarily our code, but how we're utilizing the framework, right?
So going back to our example of all the things that happen in our application: when you think about what's happening underneath the hood within Vue, React, or Angular, it really looks more like this, right? And if something goes wrong with the framework, that can be very, very difficult to debug. You could create a breakpoint in the npm package code and try to catch when it's rendering, or, with a replay, because it's capturing everything inside the browser, you can actually inspect it. This is an example of a recording of a Vue app. You can go in and see how Vue is rendering and creating elements based on your code. Here I've added a console log within runtime-dom.esm-bundler.js for every single time that the createElement function executes, and I'm logging the tag and the props that are being passed through. So if an element is being created in a way that I don't expect, or maybe my props aren't passing through correctly, I can go in, make a recording, and debug the framework itself to understand how it's interpreting my code when it actually goes to render onto the page. Better debugging also leads to a better understanding of bug patterns. There is a really great talk by Jen Creighton, aka gurlcode, a senior software engineer at Netflix. She gave a talk on debugging async applications at a React conference earlier this year. You can watch it at the YouTube link on the screen, but one point she made that really stood out to me is that the best debugging tool is knowing something well. A lot of times you don't know something until you know it, and the more you dig into your code when debugging, the better you can identify common patterns that lead to bugs. One of these patterns is component rendering; we just talked about that one, right? Understanding what triggers a render.
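What that logpoint inside the framework amounts to can be sketched as a wrapper around a createElement-style function. Here wrapCreateElement is a hypothetical helper, not Vue's API; in the replay the same effect comes from a retroactive console log inside the bundled framework source:

```javascript
// Hedged sketch: log every element the framework creates, without
// changing what it returns, to see how it interprets your code.
function wrapCreateElement(createElement) {
  return function loggedCreateElement(tag, props, ...children) {
    console.log("createElement:", tag, props); // tag + props passed through
    return createElement(tag, props, ...children);
  };
}
```

If a prop silently fails to pass through, the log makes it visible at the exact render call where it goes missing.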
If your component does not render when you expect, you may have stale state: you update your state, but the DOM is still showing the old state because the component did not actually re-render. You could also have performance issues if your components render too many times. I recorded the Vue application I showed earlier, and every single time I clicked a plus-one button on a single card, every single card on the page re-rendered. That's not performant; I don't want that to happen. And I was able to dig in and understand why it was occurring: I was using the wrong kind of reactivity, a ref instead of a reactive object. That was basically what it came down to. Another pattern we see involves state, right? Multiple sources of truth, accidental mutation, and competing dispatches of actions, where commands originating in different areas of the application change state in ways we don't expect. And then we also have error handling. I know it's very difficult when something goes wrong and the console is beautiful and clean and there's no help whatsoever there. With JavaScript, because of the kind of language it is, and when you use frameworks, we may not have conventions for error handling and error reporting, or you may do it in ways the framework doesn't recommend, depending on what you're doing. Another issue is passing errors correctly between different areas of your application. I have been personally victimized by a network request with an OK status, but then in the body there's an error message, right? I don't know if any of you have seen that. I've seen it. It hurts, because I was like, the network is good, that can't be the problem, it's got to be somewhere else. Three hours later, I finally looked at the body, and I had to go take a nap after that one. So make sure that you're passing errors correctly and utilizing the conventions the way they're supposed to be utilized.
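The OK-status-with-an-error-body trap above can be guarded against with one extra check after parsing. fetchJsonOrThrow and the body.error field are illustrative assumptions; real APIs put the error in different places:

```javascript
// Sketch: treat an error field in a 200 OK body as a failure too,
// so it surfaces immediately instead of three hours later.
async function fetchJsonOrThrow(response) {
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const body = await response.json();
  if (body.error) throw new Error(body.error); // the check that's easy to skip
  return body;
}
```

Calling this instead of `response.json()` directly means the "network is good" assumption gets tested against the body, not just the status code.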
So let's take a look at another bug resulting from one of the patterns we just talked about. This is a bug in Replay itself, so it is a replay of a replay. We can get some pretty serious replay-ception; I think I've been five levels down one time. If it's a little disorienting at first, that's okay; there's a lot going on here. Again, there's a URL if you want to follow along with this bug or check out the replay itself. So what's happening here? This is the very first bug that I fixed on the Replay team. We have two ways to toggle views. One is the toggle menu up in the right-hand corner, which lets you go back and forth between View mode and DevTools mode. That works great. There's also a modal shortcut, what we call the command palette, that allows you to switch views using a keyboard shortcut. And if you use that method to change views, then the layout breaks. We get an extra column on the left-hand side and another row on top with an additional toggle and the share button. So it works one way and does not work the other way. And again, I wasn't super familiar with the codebase at the time, so I was trying to get my bearings. The Replay DevTools front end is a React app, specifically a Next.js app, and it uses Redux, just to give you a little bit of context for the code we're going to be looking at. I knew the bug didn't occur when we used the toggle, so I thought, okay, what's the expected behavior? What does it look like when it's working? So I went into the view toggle component and found a handleToggle function, and we can see towards the bottom here that it's calling two functions: setSelectedPrimaryPanel and setViewMode. And there is some logic that checks what the current panel is; the panel refers to that left sidebar, the one that was breaking. So this is what working looks like.
I've got a little bit of context, and now I can see what it looks like when it's not working. I found the onKeyDown handler on the command palette, where we execute the command. I was able to lock in at the one time that that handler ran and see that the active index was zero. And when we have an active index of zero, the first command that runs is Open Viewer. So we're executing the Open Viewer command; that's the command that runs whenever I select Open Viewer in the modal, and I can see that in the code next to the recording. The next step is to see what happens in that Open Viewer command. So I found the command definition where the key equals Open Viewer, and we are dispatching setViewMode. Does anybody already see what's missing? This is the working code: we're calling setViewMode and we're calling setSelectedPrimaryPanel. If we're doing it via the command palette, we're only dispatching setViewMode, and we're also missing the logic that checks what the current panel is. So we have our state updating from two different places with two different sets of logic, and we're not doing the same thing each time. Really, what we need to do is have all of this take place within the action itself, within Redux. Every single time this executes, we want to make that logic check and we want to dispatch both of those updates. So the fix was to move that logic into setViewMode. Now, no matter where the request initiates, we always go through the same logic and the same state updates from one single location. This is an example of the pattern we talked about, where you have competing dispatches taking place, one from a component and one from your global state. So really, what I hope you take away from this is to embrace bugs intentionally, with a learning mindset. A lot of us approach bugs with dread and say, how am I going to get through this?
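The shape of the fix, moving the panel logic into the action so every call site goes through it, might look something like this in a Redux-style reducer. The store shape and the "comments" default panel are assumptions for illustration, not Replay's actual code:

```javascript
// One place owns the logic: setViewMode picks a valid panel itself,
// so the toggle button and the command palette can't diverge.
function reducer(state, action) {
  switch (action.type) {
    case "setViewMode": {
      const panel =
        action.mode === "dev-tools" && !state.selectedPrimaryPanel
          ? "comments" // assumed default when no panel is selected
          : state.selectedPrimaryPanel;
      return { ...state, viewMode: action.mode, selectedPrimaryPanel: panel };
    }
    default:
      return state;
  }
}
```

Any dispatcher, component or keyboard shortcut, now produces the same state transition, which is the cure for the competing-dispatches pattern.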
But think about what you can learn about your application as you debug, what you can learn about the framework that you're using, and what you can learn about patterns. Some additional resources: the Replay examples page has a lot of different examples of replays with real-world debugging, like the ones we just walked through. It also has some general debugging resources. Mark Erikson is on the Replay team; he is a core maintainer of Redux, talks a lot about debugging with Redux DevTools, and has some great resources on approaches to debugging from a process perspective. I have also done nine talks this year on debugging various types of applications. I have one on Vue, one on Angular, one on React, one on CI/CD pipelines, and one on JS frameworks generally. So if any of those situations apply specifically to you, you can check those out; they're all public. And again, you can follow me on Twitter at ceceliacreates or on GitHub. I talk about testing a lot, debugging, documentation, basically all the things that developers hate doing that somehow became my sweet spot. So feel free to reach out to me there if you have any questions as well. And if you missed these links or any others, you can grab them all in the slides; that QR code will take you to the slides, or you can use the URL. Great, thank you so much. I'm available if anybody has any questions. Yes, absolutely. Okay, so what Replay is doing is it's actually a runtime recorder at its core. And it has the Replay Protocol, which is essentially a set of commands that records the interactions between your application and the browser, or your application and Node. We do have some examples of Node as well. We've seen people use the Node recorder to debug TypeScript, for example: they'll run the TypeScript compiler in Node and then record it.
And we've had a few people fix bugs based on that, because they could see that it was compiling to the wrong place. So it's partially deterministic, as I mentioned. We're not recording every single thing that's happening in order to be able to save it. Instead, we're recording these protocol commands, and there's a large API that defines all the potential things that could happen in those interactions between your application and the browser. Sometimes we're getting paint points, sometimes the time to first paint, all the network request information, the execution of the code, and we're capturing all of that in a way that essentially turns the recording into a first-class object. We're taking an execution, an occurrence of a runtime, and turning it into an object that has all of those properties (the protocol API, essentially) and the values for those properties across different points in time. Then when we go to replay it, we have a virtual environment with browsers. It's kind of crazy, honestly; once I learned about this, I was like, this is wild. You know the saying, if you have enough monkeys in a room with typewriters, eventually they'll write Shakespeare? It's like that. We are literally spinning up browsers and using the information from that object to recreate the run inside all of these browsers in a virtual environment, in a way that's effectively deterministic. We have a joke that it's just-enough determinism, or, I can't believe it's not determinism. It's deterministic enough that for the end user, it replicates the experience of running the browser as it did at that point in time. And we have a protocol viewer that you can turn on in your console that shows all of the calls being made to the protocol API while you're viewing a replay.
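A toy model of the record-and-replay idea: capture nondeterministic inputs on the way in, then feed the identical values back on replay so the re-run is "deterministic enough". This is a drastic simplification for illustration, not the protocol's actual design; makeRecorder is a made-up helper:

```javascript
// Record nondeterministic inputs (here, just a clock) onto a tape,
// then replay the tape so a second run sees the exact same values.
function makeRecorder(realNow = Date.now) {
  const tape = [];
  return {
    now() {               // recording side: capture each real value
      const t = realNow();
      tape.push(t);
      return t;
    },
    replayer() {          // replay side: hand the same values back in order
      let i = 0;
      return { now: () => tape[i++] };
    },
  };
}
```

Scale this idea up to every system interaction a browser makes (network, paints, timers, input events) and you get the flavor of why partial recording plus reconstruction beats recording everything.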
So we'll say: for this point in time, get the logs; at this point in time, get the paints. And if you were to go back and add a console log, we're literally sending that message back. We're saying, update the object as if there were a console log here, and then rerun it. So we're literally re-executing the code within that virtual environment in order to add those console logs retroactively. There's a lot of wild DevOps actually happening, because those are all virtual machines being spun up, and browsers being spun up, in those kinds of environments. And recently we made a change to the architecture that allows a single replay to be split across multiple controllers, which let us record longer replays. Initially we tried to keep recordings under two minutes, because beyond that the performance overhead became too much. But now that we can split a recording across multiple controllers in our setup, we can have recordings up to ten minutes, and we're hoping to get to the point where we have an always-on mode for your Node execution. So you could just continually record your Node process, and when you encounter an error, go in, pull that point in time, and start to debug from there.

Yeah, so it's because there's a lot more happening than just the JavaScript, right? We're recording all those paints. We're also recording the interactions between the React profiler and the React DevTools. When we added React DevTools support, there's an experimental React DevTools as well that kind of broke things for a while, and we had to figure that out. When you think about everything that's happening in the browser, it's not just the JavaScript, it's everything: the paints, we're recording all of those DOM paints. If it were just the JavaScript, well, I think there are time travel debuggers for just JavaScript.
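The retroactive console log described above works because the replayer can deterministically re-execute the recorded run. Here is a hedged sketch of that idea, with all names invented: the "program" is a pure function of its recorded inputs, so replaying it with a logpoint injected observes values that were never logged during the original run.

```typescript
// Illustrative sketch: why deterministic re-execution allows retroactive logs.
// `Logpoint`, `replayRun`, and the scope shape are invented for this example.

type Logpoint = {
  line: number; // which recorded "line" to log at
  expression: (scope: Record<string, number>) => string; // evaluated in scope
};

// The recorded inputs are enough to reproduce the run exactly.
const recordedInputs = [3, 7, 12];

function replayRun(inputs: number[], logpoint?: Logpoint): string[] {
  const logs: string[] = [];
  let total = 0;
  inputs.forEach((n, i) => {
    total += n; // "line 1" of the recorded program
    if (logpoint?.line === 1) {
      // Evaluate the retroactive log against the scope at this point in time.
      logs.push(logpoint.expression({ n, i, total }));
    }
  });
  return logs;
}

// The original run had no logpoints, so nothing was printed. Re-running the
// same inputs with a logpoint on "line 1" recovers the intermediate values.
const logs = replayRun(recordedInputs, {
  line: 1,
  expression: ({ i, total }) => `iteration ${i}: total=${total}`,
});
console.log(logs);
// ["iteration 0: total=3", "iteration 1: total=10", "iteration 2: total=22"]
```

In a real browser recording the inputs include far more than an array of numbers (timers, network responses, user events), which is exactly why, as the talk notes, being "deterministic enough" across all of that is the hard part.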
There's one called Wallaby, I think is the name, and that is perfectly deterministic. But because there's so much more happening in the browser than just the JavaScript, they found it pretty much breaks down if you try to be perfectly deterministic. The team behind this started off at Mozilla; they were on the Firefox DevTools team. Jason Laster and Brian are the co-founders. So they started there with Firefox DevTools and were able to bring that work over to Replay and continue it, which is also why a lot of it is open source.

Yeah. So that's where the protocol examples repo comes in: those are some of the things we on the team have thought of that you can do with it. The protocol API is publicly documented, so anybody can use it; we're hoping people will take it and run with different ideas. So, Replay does let you record your test execution. It works a little better with Cypress, because of the way Cypress works with the iframe: you're actually also recording the test code execution in the browser, whereas with Playwright that's happening underneath, before it actually gets to the browser. So with Cypress we're able to show you all the test code that executed as well, to help you debug. Has anyone used the Chrome DevTools Recorder yet? With it you can record a user flow and then convert that to Cypress or Playwright test code, so you could similarly extract that kind of information from a replay. We haven't done that yet, but it's something we're hoping people will jump on, or give us feedback on what would be helpful for that type of implementation. So: anything that runs in the browser, it can record. Regardless of the framework, it's framework-agnostic; it just captures everything. The thing that will make a difference, though, is source maps. Different websites will publish source maps or not, and if it's minified code, it's a lot harder to debug.
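As a side note on why source maps matter here, this sketch shows what a source map gives a debugger. Real source maps encode their mappings as base64-VLQ per the Source Map v3 format; in this simplified, illustrative version the mapping table is assumed to be already decoded, and the file names are made up.

```typescript
// Simplified sketch of a source-map lookup (mapping table already decoded).
// Real source maps store these mappings as base64-VLQ strings.

type Mapping = {
  generated: { line: number; column: number }; // position in the minified bundle
  original: { source: string; line: number; column: number }; // readable source
};

const mappings: Mapping[] = [
  { generated: { line: 1, column: 0 },  original: { source: "src/cart.ts", line: 10, column: 2 } },
  { generated: { line: 1, column: 42 }, original: { source: "src/cart.ts", line: 14, column: 4 } },
  { generated: { line: 1, column: 90 }, original: { source: "src/api.ts",  line: 3,  column: 0 } },
];

// Find the last mapping at or before the generated position: this is how a
// stack-trace column in minified code resolves to a readable file and line.
function originalPosition(line: number, column: number) {
  const candidates = mappings.filter(
    (m) => m.generated.line === line && m.generated.column <= column
  );
  return candidates.length ? candidates[candidates.length - 1].original : null;
}

console.log(originalPosition(1, 50));
// { source: "src/cart.ts", line: 14, column: 4 }
```

Without this table, every pause point in a minified bundle lands on "line 1, column something," which is why unpublished source maps make debugging so much harder.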
I don't know if anyone's ever looked at minified code before. Jason can somehow read it; I think he's just looked at a lot of code. But one of the things you can do is upload source maps to the Replay servers. So if they're not published publicly, we can reference them on the back end when you open a replay.

Yeah. Similarly, the Node recorder works via the CLI, and currently it does have to be a start-and-stop execution. So you can't record something that's always on, like a dev server, for example. But it does the same thing: it generates a replay with all the code. It doesn't have a viewer; there's nothing visual to look at. But it will record all the code execution, and you can then go in, see how many times lines ran, and add those console logs after the fact. That is experimental. Like I said, there are some quirks to it, but we have seen people use it for TypeScript specifically, because TypeScript is another one of those things that sometimes requires a lot of debugging. Okay, and then I think we have time for one more. I'll take you in the front.

Yeah. So demonstrating a fix is a big one. For example, some people will use replays to document pull requests: this is what it was before, this is what it was after. And then you can add comments, or extract information about what changed during the code execution, too. So if there's something that's specific to one machine versus another, you could compare those. Again, that's not necessarily built-in functionality, but you can extract that information, or go hunting and add comments. There's also a Discord, the Replay.io Discord, where you can chat with the team, share feedback, and ask questions as well. So thank you so much. I believe we have to get the next speaker ready to go, but if anyone has any questions, feel free to come chat with me. Awesome. Thank you.