Video details

Nx Cloud - Never Build the Same Code Twice Webinar | ng-conf & Nrwl | #ngconf


Jeff Cross & Victor Savkin

Learn how to get the most out of your code in this webinar all about "Nx Cloud - How to Never Build the Same Code Twice". Presented by ng-conf with Jeff Cross & Victor Savkin of Nrwl.
ng-conf is a three-day Angular conference focused on delivering the highest quality training in the Angular JavaScript framework. 1500+ developers from across the globe converge on Salt Lake City, UT every year to attend talks and workshops by the Angular team and community experts.


All right. Well, thank you, everyone. We'll keep the banter to a minimum so we can talk about more interesting things like monorepos and Nx and fast builds. Just a quick intro: Victor and myself are co-founders of Nrwl. We're the company behind Nx. We have a lot of background in Angular; we both worked on the Angular team at Google. We've been at Nrwl for just over three years. We've got a team of great Angular experts as well as React experts, people who just love dev tooling and love monorepos and love infrastructure and those kinds of things. And now we help clients solve these big challenges that we're solving with Nx and with frameworks like Angular, and we're having a lot of fun doing it.

So today we're going to talk about Nx kind of end to end. We'll take a fairly shallow look at what Nx is and the monorepo setup and how it works, and then look a little more deeply at the smarts Nx has about building applications, which is the place where we've had a lot of innovation lately and which solves some real pain points for a lot of teams.

There are four parts. We're going to talk about monorepos in general: what is a monorepo and what does it mean? Then the typical downsides of code collocation, which isn't the same thing as what we mean when we talk about monorepos — we'll cover some of the typical challenges there. Then we'll talk about what Nx is and how it addresses those challenges. And finally, how Nx makes builds fast — what's the magic?

So let's start. By the way, if you see that pig icon in the corner, it doesn't mean anything special other than a note to me that I need to progress the slide manually — and because pigs are great, as Victor established earlier. So, monorepos: why? What's the point of monorepos?
Well, there are three things I'll show for why monorepos are great. One is atomic changes — having atomic changes and a unified history. Look at this example of an app where we have a home page app that depends on a common UI library, and these are split into two git repositories. Then we spot a bug in the common UI repository. A typical process might look like this: as a developer on the common UI repository, I write a regression test, I write a bug fix, and then I open a commit, or a pull request, or a change list, depending on what you call it. I verify locally that the regression test is satisfied by my fix. I push it to CI and the tests pass, so I merge it and tag a release. And then I go home for the weekend — I go on vacation, because I'm happy that I fixed something and I deserve it.

Next Monday, the home page app team comes in and sees I've tagged a new release and said that I fixed the bug they reported. They write their own regression test, integrate the new change, push it to CI — and see that their tests fail. So what's the problem with this? The problem is that I, the developer on the common UI team, have left, and the change didn't actually fix the bug. Now the home page app team has to go back and figure out why it didn't fix it, get someone else to work on it, or wait for me to get back from vacation so I can fix it again — when I've lost all the context. I've been drinking a lot of mai tais and I've forgotten everything I did before I went on vacation. It's now a lot more expensive for me to fix that change than it would have been if I'd just gotten it right in the first place.

So let's look at the same example, but instead of having two separate git repositories, we have a single repository.
By having the common UI and the home page in the same repository, we remove that step of publishing and versioning something and then integrating it later to see if it worked — we're actually integrating it at the same time. So we write a regression test for the common UI library and a regression test for the home page app where the bug was initially reported. And immediately we see that if we apply the same fix we tried before: oh, this doesn't actually fix it. I need to dig some more and change the common UI a little bit more to actually fix the regression. I fix it, update it, the tests pass, and I merge it — without any of the integration delay where I'd only find out later that it actually failed. So that's a quick example of why atomic changes are important.

There's also shared code. Let's say you have this validator that checks that a user name is valid. If you want to share this code in a polyrepo — a multi-git-repo setup — you'd have to set up a new repository, set up the tooling, the build and test and CI for that repository, and then you'd have to set up publishing. And along with publishing comes versioning, and what your guarantees are about changes and breaking changes. Those are all things you have to figure out every time you set up a new repository. But if you have everything in the same monorepo, all you have to do is create a library, and that's it. You've got all the build tooling set up already, you've got CI set up, and you don't need to publish because everything depends on the source of everything else — so no versions are necessary. Any time you make a breaking change, you just fix whatever depends on that change. You don't actually break anything; you change your API and change whatever depends on it.
So that's why monorepos make shared code essentially free, where it was complex before. And third, we have a single set of dependencies. There's this thing called the diamond dependency problem. When you have an application at the top that depends on two libraries directly, and they both depend on the same version of a third-party library, everything works fine. But if they depend on different versions of that third-party library, you end up having to ship two versions of that library. And this becomes a real problem with JavaScript applications — or really anything that does instanceof checks at runtime, like Angular. If you have different versions of Angular inside your repository and they get bundled together, one version won't recognize objects from the other when it does instanceof checks, in addition to the performance cost of having multiple Angulars included in your bundle. I didn't do a great job of explaining that, but there are good articles on the diamond dependency problem you can look at to understand it better, and why having a single version of everything is better.

So let's look at some of the typical downsides when you have code living in the same repository. One is that you run unnecessary tests: if you have many things in the same repository and you're testing everything every time, that gets expensive. Another is having no boundaries between code — we'll look at what that means in a second.
On running unnecessary tests: say I have this products home page that depends on a shared product UI library, and those are the only two things in this workspace. If I touch the shared product UI, I don't need to test everything else in here. But with a naive setup, that's what's going to happen — you say, how can I know what needs to be tested? I'll test everything every time. Which is bad: it takes a lot of time, and build time has a big impact on team productivity.

Next, there are unclear or nonexistent code boundaries. If you have some function or utility that you've written just for one part of your code base, and you don't really intend to support it for other people, there's nothing stopping people from just going in and importing it — they have access to all your source, so they can import it and use it. Then if you ever want to change it, you have to work with them: either say, OK, I'm just going to delete it and you're broken, or update it, or create a new version of it because you don't want to change the old one. And then you've got more code and it's harder to maintain, et cetera. So that's one of the typical downsides of code collocation.

And then you have incomplete, inconsistent tooling. If there's no coordination or standardization on how your scripts are run in the repository, you just end up with a bunch of different scripts, and you have to figure out which team is building what, and in what ways. If you ever want to depend on something else in your repository that you need to run, you have to figure out all the nuance — all the servers they have to set up, the proxies, and whatever else needs to happen to build it.
So there can be a lot to figure out. Those are some of the downsides of naive code collocation, but let's take a look at how Nx deals with these things. If you're not aware, Nx is an open-source toolkit for full-stack development. It's built on top of the Angular CLI, so it's very similar to the CLI, and it's fully compatible with it. So it has code generation, and we add support for additional tools like Storybook, React, Prettier, or Cypress — modern tools that most teams are gravitating towards for testing. It's also extensible: you can generate workspace schematics so that your team has consistent tooling and easy ways of doing common things without everyone doing them differently. And we have a unified experience for frontend and backend: we have just as much support for Node servers — NestJS, Next.js, Express — as we do for frontends.

One of the key things about Nx is that it understands your project graph. It actually understands how the libraries and applications inside the workspace relate to each other, and that's what it uses for most of the advanced tooling it provides. As I mentioned, it's open source and it's free — there's no restricted enterprise version or anything. We build it as we see new needs with the clients we're working with and in the community, and it's been a great thing for us as a company: we do a lot of consulting and training around it, so it's worked out well without ever having to charge for it.

What it is basically breaks down to code partitioning and intelligent tooling to develop at scale. We've got an example here — and this is to show that it works just as well with React as with Angular; a lot of workspaces have Angular and React side by side.
So apps and libs are the main building blocks of Nx. Let's get a little more concrete and see what it looks like in practice. Just like creating an Angular CLI repository, you'd start by creating an Nx workspace — the Nx equivalent of what you'd call an Angular CLI workspace, I guess. You start by running one command; you don't have to install anything before this, you just run npx create-nx-workspace. It prompts you to give it an org name, and then you can use a preset like Angular or React, and it will create an Angular app for you. You name it, pick your stylesheet format, and there it goes. Basically the same process as ng new — and of course, npm install is the slowest part of anything (sorry, I forgot to fix the timing on this). So that's the quickest way to get started: just run that one script, nothing to install beforehand, because you're using npx script execution.

Next is creating reusable libraries, and libraries are the main way of partitioning things. An app is something that gets deployed somewhere; a library is something that can be depended on, or can depend on other libraries, and it lets you partition your applications in much more logical ways. Even if it's not something shared between multiple applications, libraries are a great way to partition things within your application — and this is also one of the keys to realizing the build-time performance benefits later. So here we've created a library, my-lib, and it has the Jest configuration. And once we have libraries, we can add components to them using the same schematics you'd be familiar with from the Angular CLI — we're using the Angular component schematic here.
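The steps just described look roughly like this on the command line. This is a sketch; the workspace and library names are just examples for this demo, and the exact prompts and flags vary by Nx version:

```shell
# Create a new Nx workspace without installing anything first
# (npx downloads and runs the generator on the fly).
npx create-nx-workspace myorg --preset=angular

cd myorg

# Generate a reusable library; it comes with Jest test configuration.
nx generate @nrwl/angular:library my-lib

# Add a component to that library, using the familiar Angular schematic.
nx generate @nrwl/angular:component my-button --project=my-lib
```

The same commands also work through `ng` in an Nx workspace, since Nx sits on top of the Angular CLI.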
And of course there's Nx Console, if you don't want the terminal — it works with Nx workspaces or with Angular CLI workspaces. It's a Visual Studio Code extension, and it adds a UI, and not just a UI but a lot of IDE benefits for working with the Angular CLI or Nx: you don't have to remember flags, you can quickly run commands, you can explore your projects. So that's how easy it is to add components to a library.

There are all these other things Nx does that we're not going to get into today — I'd encourage you to check out nx.dev, that's the home of all things Nx. But we want to focus now on how Nx makes builds fast, and I'll let Victor take it over from here.

[Victor] OK — bear with me, presenting out of the browser is a disaster. All right, let's blaze through the presentation part, and then I want to show something. I'll show some demo projects and talk through some stuff a bit more in depth.

Basically, normally you would have one application that you build as a single unit. But when you use Nx, you don't just have one application — you have one application composed out of a bunch of libraries. In this case, we have an application built out of four libraries. If you use Nx well and you use it for a while, you tend to split your application into way more libraries than four — but for this example, four is good enough. And once you do that, you realize there are certain things you can pull out.
You can build and test those things separately, in isolation. Common examples are things like validators, basic rules, or utilities that you can share across teams, across lines of business, across applications. Another very common thing that almost all large organizations do — and small ones too — is a design system. We have a design system internally, even though we are an organization of very few people: a set of components that can be built independently, but also used to compose other components. The design system use case in particular is interesting because those things can be very complex and hard to manage and build. That's why, for example, we added support for things like Storybook — Storybook is a fantastic tool that makes design system work much better than trying to do it with the CLI out of the box. Next slide.

Now, imagine we have this application with its libraries, and a developer — maybe Jeff — changed something. (I'll try to speak slower.) Say Jeff changed the new-ticket feature: a PR changed something in there. Normally, if you don't use Nx, or if you use Nx incorrectly, you would rebuild and retest everything. Redoing everything is a very simple mental model that in principle works, but it has a downside: it can be very slow. In this case, if we rebuild everything and retest everything, we're going to spend a lot of time validating changes. And obviously the point of a monorepo is that you tend to have either a large application composed of different modules, or even multiple applications — so at some point this stops scaling: you get long local builds and long CI runs. Next.
So the first thing we landed — the original key feature of Nx — is the affected commands. Because we know the project graph of your workspace, we know how different parts of your workspace are wired up, how they depend on each other. And we derive this information automatically; you don't have to tell us anything, we derive it just by looking at the source code. If we derive it incompletely, you can amend a few things by hand — maybe 97 percent of the edges in the graph we figure out ourselves, and the last three percent is what you have to configure manually. So the maintenance of this graph is actually relatively low, given how much work it does.

Once you have the graph, it's not hard to see that if you change the new-ticket feature, the only parts that require validation are the new-ticket feature itself, whatever depends on it, and the app itself. We don't have to check the live-chat lib or the knowledge base, because they cannot be broken by this change. How do we know what changed? We look at the code change itself — basically we look at the git diff, we analyze it, and then we do only a fraction of the work. So instead of testing every single project and rebuilding every project, we test and build a subset of the graph — quite a few projects fewer. Good for performance.

How much you save here really depends on the type of changes you make, and on how many apps you build, so it varies. But for, let's say, mid-sized repos, most PRs tend to affect a small part of the graph — below 20 percent. And the larger the repo, the smaller those numbers get, because you don't tend to change the core of everything very often.
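The affected workflow Victor describes maps to commands like these — a sketch using the flag names from this era of Nx (newer versions consolidate these under `nx affected --target=…`):

```shell
# Visualize which projects the current diff affects,
# comparing against the master branch.
nx affected:dep-graph --base=master

# Only test and build the projects the change can actually break.
nx affected:test --base=master --head=HEAD
nx affected:build --base=master --head=HEAD

# The same commands can run tasks in parallel on one machine.
nx affected:test --base=master --parallel --maxParallel=4
```

On CI, `--base` is typically the target branch of the PR, so the diff analyzed is exactly what the PR would merge.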
So that's one part of making builds and tests performant. The other part is parallelization. Nx allows you to run your tasks in parallel on the same machine, the same node — that works very easily, you just pass a flag and it runs in parallel. It also allows you to build in a distributed fashion on many nodes, which I will talk about in a second.

As a result, what you end up with is that the slowest part of your build puts a floor on your total CI time. If you have an app that takes 10 minutes to build, you cannot make your CI time less than 10 minutes — it cannot be faster than that, because it's an indivisible piece of work that you have to run in one go. You're stuck at that 10-minute limit.

The interesting question is: how can we make that 10-minute indivisible piece of work smaller? If it's a single operation — one webpack build that builds everything from source — we can't make it smaller; 10 minutes is the limit. That may not be such a problem on CI, but it's more annoying when you do it locally, which I'll show in a second. If you have to wait many minutes to serve your application, that sucks — that's not a good experience. And if you're building a massive app, it's not uncommon to wait five minutes for your serve to come up. Five minutes is a lot of minutes. If you kill it by accident, you get frustrated — you have this fear of accidentally closing the terminal, because you'd waste five minutes waiting for it to reappear.

The solution to this is to allow for incremental builds. The idea behind incremental builds is that when I want to serve my application, or build my application,
I don't have to redo everything from scratch every single time — I can build different parts of the app at different times. When I serve my app, I want to redo only the part I need to redo, and reuse the rest. In this case, imagine that the validators and the design system libraries can be built and packaged in advance, such that when I open my laptop, do something with my new-ticket feature, and serve my app, I don't have to redo the work for the validators and the design system. That's the idea behind incremental builds.

What you're basically getting is that for my iteration loop, it doesn't matter how big my app is — well, it matters a little, but not nearly as much. What matters is how big my change is. If I pull master and serve my app, and I have no changes compared to master, my serve should be close to instant — maybe it takes a few seconds to do something. If I change one particular library, we rebuild only that library, so my change shows up relatively fast — maybe I wait a few seconds. But if I change everything, then yes, that may take a long time. That is the idea behind incrementality: we want the operations we invoke to be proportional to the size of the change. Tools like webpack were built to treat the whole app as a single unit, which made this hard to do — now we can split it up, and I'll show you after the presentation is over.

So with incrementality, what you get is essentially that some part of the computation that would have happened before doesn't have to happen, because we can retrieve it — it's stored somewhere. In this case the total time can be even smaller; if you don't make any change, it can be close to zero.
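The opt-in for this caching behavior lives in the workspace's `nx.json`. A minimal sketch — the exact shape varies by Nx version, and the operation names here are just the common defaults:

```json
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "@nrwl/workspace/tasks-runners/default",
      "options": {
        "cacheableOperations": ["build", "test", "lint", "e2e"]
      }
    }
  }
}
```

Any target listed under `cacheableOperations` becomes eligible for retrieval from the computation cache instead of being recomputed.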
And for incrementality to work really well, what you need is the ability to share your computation with your teammates. If everything happens only on your machine — you run stuff on your machine, a co-worker runs stuff on her machine — and you don't share between machines, caching can still be very valuable, because every build is incremental as far as your own machine is concerned. But if you share it, suddenly most of the builds you do only have to perform a small part of the computation.

The tool we built to enable this — which I'll show in a second — is Nx Cloud, which is essentially a way for you to share your computations with your teammates. With that, the Nx Cloud section of the cache is basically a cache section that came from elsewhere — from other machines, from something someone else ran. It's still a cache; it's just not your machine that performed the initial computation. It was someone else: maybe your CI, maybe your co-worker. And with that, you can reduce the build time to almost nothing. Most operations you run — unless you actually make a really wide-ranging, giant change — only have to redo a small part of the computation they would have had to do before.

If you look at these bars — of course it's pseudo-data, we don't know exactly what those parts correspond to — it's not unthinkable to imagine going from a situation where you wait something like 10 hours to validate a PR in a large workspace, if you do it naively.
If you use affected, you can reduce that to maybe an hour or something like that. And if you push it down further with the shared cache, you can reduce the average time of your PR to just a few minutes. Next slide.

This is something I'll show in a second when I actually demo it: your PR time can be reduced by 10x, with some caveats. If you have, say, a 20-minute npm install, then obviously Nx Cloud isn't going to help with that — you'd need to address those other parts differently if you really want to squeeze out every minute. But saving 10 minutes, or seven minutes, per PR is very realistic, without having to reinvent anything, just by reusing computation. Next slide.

And in this case, developers and CI can share the same cache, so they can collaborate — it's not just one developer not having to rebuild something another developer built. Every CI agent is like a separate developer, right? Every CI agent is a separate context, so being able to share stuff between different CI agents can be a big deal too.

So — can I share my screen? I want to walk you through a small demo project so we can see how easy it is to make this work, and talk about some stuff a bit more in depth. OK, let me share my screen. Don't be alarmed. Can you see my screen? So I have a small Nx workspace with one app called shop. And — oh, great. Let me try again; suddenly my computer stopped cooperating. Give me one second. What I'll show is basically the basic example of what it means to share computation, and then an advanced example of what happens when you have a lot of projects. OK, maybe this is good enough.
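Before the demo, here's a toy sketch of the cache-key idea Victor demonstrates next: hash the relevant source contents, the task being run, and the environment into one key. This is an illustration only — the stand-in values and the exact hashing scheme are not Nx's actual implementation:

```shell
# Toy illustration of a computation-cache key: a hash over
# (relevant source contents, task being run, environment).
# All values below are stand-ins; Nx's real inputs differ.
sources="export const x = 1;"      # contents of the files relevant to the task
task="build:shop"                  # what we are trying to do
environment="node-v12 linux"       # OS version, Node version, etc.

# Combine the three pieces into a single unique key.
key=$(printf '%s|%s|%s' "$sources" "$task" "$environment" | sha256sum | cut -d' ' -f1)
echo "$key"
```

If any of the three inputs changes, the key changes and the task is recomputed; if all three match a previous run, the cached result can be replayed instead.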
OK, let's try again. Can you see it? So this simple Nx workspace has one application here — a plain application, nothing interesting about it. I can do something like nx build shop — and ng build would be almost the same, except for one thing I'll show in a second. But I'm going to run nx build shop. The first time, I'm not building in prod mode because that would take forever, so I'm doing this the fast way, and we'll have to wait. This is actually a great example for the discussion — OK, so it took 20 seconds to build, and this is a small demo app.

Now, if I run that again, the result is instant, and I see this note saying it has been retrieved from cache. Any operation in Nx works like that. If you're using a newer version of Nx — 8.12 — you can opt into this; if you're using an older one, there is a very easy way to make it work. Every operation you run — build, test, whatever — can be retrieved from cache.

The way Nx does it: it looks at the source code — not of your whole workspace, but of the part of the workspace relevant to the operation. Because it knows how your workspace is wired up, it can deduce which part of the workspace it needs to look at for the operation. It looks at what you're trying to do — for example, nx build shop is different from nx test shop. And it looks at the environment — for example your OS version, your Node version, things like that. It takes those three pieces and creates a unique hash value for the computation, and then it checks locally: have I computed the same thing before? If the answer is yes —
— instead of computing it, which we know takes 20 seconds in this case, it can show you the result instantly. And it doesn't just print it. If I remove the dist folder and run it again, as you can see, the dist folder gets populated correctly as well — it places the files where they should be placed, and it replays the terminal output for you. The result is basically the same, except we didn't compute it; we retrieved it from cache and replayed it correctly.

So this is the computation cache in Nx in general. By default it works locally, on your machine — if I run something twice, the second time is free, and it all happens on your machine. One thing to note here: a computation cache is different from an artifact repository. There are no versions here; we don't push anything to some artifact registry and say "this is version 1.0 of this app." There are no versions, full stop. A better way to think about how it works is a git SHA: a SHA uniquely corresponds to some state of your repository. This is somewhat similar.

And it works with any kind of operation. I can do the same for testing — tests tend to be quite fast for this app, but in some cases they can take a lot of time, especially if you have heavy integration tests, and then the second run is much faster.

The catch with just doing this is: if Jeff wants to build the same shop app that I've already built, he cannot benefit from my work — he has to build it himself. We basically have to compute the same thing twice. So in this simple setup, what we get is: never build or test
the same code twice on the same machine. On the same laptop, I never have to redo the same work again, which is good. But I would like to share this work across the team, at work, or, if it's an open source project that uses Nx, across the whole community. To do that, what we need to do is connect these people to a distributed cache. Can you see my browser? OK, cool.

I can do it like this: I just need to copy this command, checking whether I'm using npm or yarn, and run it. Let's go over here and run the command. What it's actually going to do is install the nx-cloud package and update one file to basically say: if you couldn't find anything in the local cache, keep looking in the remote one. Nothing else is required. That's it; you run one command and you're connected to the whole thing.

So what's the difference between what I had before and what I have right now? Let me delete my local cache so I start from scratch, as if I had just pulled the repo. If I build my app, it's going to take some time because the local cache is gone; it will take 20 seconds. And that's the problem with a local cache. This is basically me being a different person, or on a different machine, or on CI: it has to redo it all, which sucks. And again we have to wait. This is actually, in terms of how long it takes, a good advertisement for computation caching; you don't even need to say anything, just wait this long. This is going to take forever. OK, if I do it again, it hits the local cache, as before. But now imagine I'm Jeff. A handsome version of Jeff. More handsome.
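For reference, the one file that command edits is nx.json: it swaps the default task runner for the nx-cloud one, which falls back to the remote cache when the local cache misses. A sketch of roughly what that entry looked like in Nx around that time; exact option names can differ between versions, and the token value is elided:

```json
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "@nrwl/nx-cloud",
      "options": {
        "cacheableOperations": ["build", "test", "lint", "e2e"],
        "accessToken": "..."
      }
    }
  }
}
```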
So I'm Jeff. I don't have a local cache anymore; I just pulled the same repo. How can I benefit from having Victor as my teammate? Well, one way to benefit is that he already computed a bunch of stuff. So when I try to compute it, me being Jeff in this case, I get the results instantly. The output gets replayed, all the files end up in the right place as if I had run it locally, but they came from the remote cache, and you can see the note saying we got it from Nx Cloud. So this is not the local cache; we got it from a remote location. If I run it again, the second time it doesn't have to go anywhere, because now I have it locally. The idea behind the distributed cache is that if you can't find it locally, you look for it elsewhere. The two of us share the same location where we can get the cached computation artifacts from. It's not that different from how a shared artifact registry works, except that we're caching computation results.

OK. So what I have here is a very simple project with a single app, and you could say: if I have a single app, how useful is that? Am I really going to do the same thing twice? But the answer is that you actually do; surprisingly many times, the same thing gets done over and over and over again. An example would be CI. When you send a PR to CI, say Jenkins, Jenkins will run your PR builds and tests. And even if you use affected, let's say the change only affects 20 percent of your workspace, you then have to run this PR again because someone asked for a code change. A small change.
Or maybe you have a policy that you have to be up to date with master, so you have to rebase and rebuild it again. So you'll have to rebuild the same app again. Often the same PR gets built seven times, sometimes 20 times, before it lands in master. And then it merges into master, and presumably whatever merges to master was green as a PR, so it's either the same or very similar. Things like that mean that a lot of the CI runs you do repeat the same computation over and over again, often dozens of times. So even if your computation caching is not very granular, and we've got just one app in this case, where a small change forces rebuilding the whole app, even that alone can reduce your CI time by, say, a factor of three, because most of the time when you're rebuilding this app you can just say: I've done this exact build before, I don't have to run it again.

And the replay is not something that could be replaced with just affected. From the code-change analysis point of view, every time you update your PR, the shop app has changed; it's changed every single time, no difference. What is different is that the second time around, the PR's computation doesn't have to be redone; it gets reused. And there's a nice side effect. Say you send your PR and affected runs a hundred lint targets, and one of them fails. Normally you go back and think: now I have to copy and paste the targets that failed to run them locally and troubleshoot. I don't like copying and pasting; it's one of those small annoyances that makes my life a little bit worse. With caching you can skip all that and just run nx affected:test again.
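That "just rerun everything" pattern works because replaying a hit is close to free, so only the misses, the targets that actually failed or changed, cost anything. A minimal sketch of the loop, with invented names, not Nx code:

```python
def run_tasks(tasks: dict, cache: dict, execute) -> dict:
    """tasks maps a task name to its computation hash. Hits are replayed
    from the cache; only misses call execute (and then populate it)."""
    results = {}
    for task, key in tasks.items():
        if key in cache:
            results[task] = cache[key]      # replayed, effectively free
        else:
            results[task] = execute(task)   # the work you actually owe
            cache[key] = results[task]
    return results

cache = {"lint:app-a@abc123": "passed"}     # CI already did this one
ran = []
run_tasks(
    {"lint:app-a": "lint:app-a@abc123", "lint:app-b": "lint:app-b@def456"},
    cache,
    lambda task: ran.append(task) or "passed",
)
assert ran == ["lint:app-b"]  # only the uncached target executed
```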
In this case it's not much of an example, of course, but the idea is: I just tested my thing, and if I run it again, most of my tests are instant, and only the ones that failed get re-run. I don't have to pick out the projects I need to attend to: whatever was successful on CI will be instant on my machine, and whatever wasn't successful I'd have to run anyway. So no copying and pasting. You can think about your repo this way in general: I can rerun everything, and everything becomes very cheap, because most of what I'm trying to do has been done already.

Now let me show you something that is much more interesting. That was a very basic repo. It basically shows that if you have a single app in your workspace, one application, say an Angular CLI workspace, and you move it to Nx with that single app, without doing any refactoring, just by connecting the cloud, you can get some benefits. But the really cool part, the game changer, is for larger projects, where previously you sort of had no choice.

I have a repo here called "incremental large repo." What I have in this repo is two applications; call them app-zero and app-one. They are the same, except they are wired up differently; in a minute we'll see why one of them builds super fast and the other takes a minute or more. Each of those apps is composed of many, many libs, and each lib has a lot of components. So these are somewhat large apps. If I try to run one of them, say nx serve app-one, what's going to happen is essentially ng serve; nx serve and ng serve are the same, except Nx adds caching. Everything else works exactly the same, identical. So when I serve it, it's going to take some time.
And as you can see, it'll probably take a while. Let's wait; I just want to show you the difference, so you see why incrementality is so key for larger apps. And this isn't even a large app; it's just a moderately sized one.

I'll do a little plug for nx.dev here, because we're going over a lot of the cool experiences and what the output is, but if you want to understand how these things work in more detail, we have good guides on nx.dev. If you look in the CI section of the docs, it talks about incrementality and gives more detail. Yeah, exactly, good point: check out nx.dev. We revamped a lot of the guides and tutorials and updated the videos, so there's a lot of fresh content out there that reflects the current state of things.

And while we wait, you can see how bad this is: the typical webpack experience. You can go to the washroom and come back and it will still be compiling. This is not a very good experience. It takes forever and maxes out the CPU; it usually takes about a minute. But now let's say I serve app-zero. How long will it take? Oh, shoot. Demo error, because we switched contexts; I won't explain, just give me one second. OK, let's go. Same app: if I serve app-one, we'd sit here till tomorrow. But if I serve app-zero, which is wired up to make this better, it's going to build the same app, with all those dependencies, smartly. And it took a few seconds.

The important bit here is that it was so fast because I already had a local cache. What do you mean, local cache? Well, let's say I'm Jeff and I do the same thing. The results will still be pretty good: it will take a few seconds, because this is a large app with lots of large pieces it has to fetch. And now it's running. And the second time around, again, it's basically instant.
So basically we reduced the time from pulling the repo to actually having my app served from one minute to about four seconds. And the idea here is that the size of my application basically doesn't matter. However expensive your ng serve would be, what matters is that this initial serve is roughly constant, just a few seconds. And then if I make a change, if I change a library locally, the cost of that change, the time I have to wait, is proportional to the size of the change, not the size of the application. That's not exactly true, because there is still some linking work, but it's far more true than it used to be. It allows you to serve substantially larger applications. If you have an ng serve right now that takes ten minutes or five minutes, I can't guarantee that spending an hour converting it will get it down to nothing, but by spending some time converting it to something like this, which is not that difficult, you can reduce it from, say, five minutes to five seconds.

But at least you answered one of the questions with that demo, which was: does Nx caching work with ng serve? That's a good point, let me actually clarify this. The serve setup for this repo, which is a public repo I can share with you, is done differently, because out of the box it doesn't work this way. I defined my own serve target; Nx doesn't care what "serve" actually does. Normally it just runs ng serve, but I'm doing something else here: I build my dependencies first and then serve from the output folder. Doing it that way is an easy way to make serving cache friendly, and if you want even more performance than what I'm showing, you can tune it further.
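The "cost proportional to the change" claim comes straight from the project graph: when one lib changes, only that lib and the projects downstream of it need rebuilding, and every other lib replays from cache. A sketch of that traversal over an invented graph (the project names are made up for the example):

```python
def needs_rebuild(dependents: dict, changed: set) -> set:
    """dependents maps each project to the projects that depend on it.
    Everything reachable downstream from a changed project must rebuild;
    the rest can be restored from cache."""
    affected, stack = set(changed), list(changed)
    while stack:
        project = stack.pop()
        for downstream in dependents.get(project, []):
            if downstream not in affected:
                affected.add(downstream)
                stack.append(downstream)
    return affected

# lib-a feeds lib-b; lib-b and lib-c both feed app-zero
graph = {"lib-a": ["lib-b"], "lib-b": ["app-zero"], "lib-c": ["app-zero"]}
assert needs_rebuild(graph, {"lib-c"}) == {"lib-c", "app-zero"}
assert needs_rebuild(graph, {"lib-a"}) == {"lib-a", "lib-b", "app-zero"}
```

With hundreds of small libs, a typical edit touches one leaf, so the rebuilt set stays tiny no matter how big the whole app is.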
The point being: if you have a small app, if your ng serve takes 20 or 30 seconds, maybe don't bother. You're in a fine spot; not a problem. It's more when you're getting into many minutes that you say: maybe I have to abandon the default way of doing this, because vanilla webpack doesn't scale that well for large apps. It's usable, but it requires you to abandon some of the more common ways of working and tweak things a little. This repo I have, it's called nx-incremental-large-repo, is the key part: it shows how to do it. Let me post a link in the chat. I encourage you to look at it if your builds are not fast, if you have an app where you feel the dev experience is not that good because it's so slow. It's actually not super hard to make it reasonably performant.

All right, a couple more questions. Let me recap some of the ones that Victor and I answered over text first. There was a question about whether Nx supports micro frontends, which we have a great blog article for, so I answered with the article. It focuses on React, but there's nothing really React-specific in it; we were getting asked about this by more and more people in the community at the time. Can I add one point to this? It's not just that Nx supports micro frontends. You can define a micro frontend in a lot of ways, say as an independently deployable unit, and monorepos actually make using micro frontends a lot easier. Micro frontends do have downsides, and this article touches on that: a lot of those downsides either go away or become a lot less severe if you have a more sophisticated build toolchain. Yeah.
We've got clients who are using it in production and it's working out. There are challenges to it, but the challenges aren't related to monorepos. They're related to how you share things, your deployments, how you know what needs to be redeployed, how you constrain your team to do more incremental deployments.

Another person asked if we have integration with .NET Core, which is being worked on; we're just at the proof of concept phase right now with our team. That's an important point, because there are different levels of experience you can have with Nx. The basic experience, where my caching works, affected works, the basic stuff works, already works with any technology. All the work Nx does there is process oriented: it invokes processes, and a process can be anything. .NET Core, Java, Go, it doesn't matter. The next level of experience is: do I have a good collection of schematics I can use to generate artifacts? Right now we don't for .NET, so you have to either author the files yourself or create your own schematics; that part isn't there. So if you have a few apps that you want to just manage yourself, that will work already. You don't have to wait for anything. But if you're thinking "I'm going to create a hundred .NET Core microservices and I need all the helpful schematics and such," then you'll have to put some work into making the experience good. Exactly.

Another person asked about integration with Azure DevOps, and I just posted a link in the chat to an example with that integration. It shows off parallelization and affected builds, but it doesn't yet have incremental builds or caching in it.
That's something we could update soon to connect with Nx Cloud, just to show some of the more recent improvements. Yeah, I agree.

Someone asked about Nx support for Bazel. The short story: we have a storied history with Bazel. Victor and I used Bazel quite a bit when we worked at Google, and liked it, and originally we were building Nx as something built on Bazel. We ended up backing off that, mostly because Bazel was pretty rusty at the time, and it has come some way since then. But Nx now supports a lot of what Bazel does, just in a different way: builds, caching, parallelization, those kinds of things. And it's done in a way where Nx already understands how your projects relate, so it can do a lot of things automatically that you don't really have to configure like you would with Bazel. So for teams who need more granular control, and need support for languages that Bazel supports, it might make sense. That's why there were people working on a Bazel integration and we were supporting them, but I think that effort may have slowed a bit.

And the reason it slowed is basically this: Bazel has some great properties, and we essentially took its caching and task scheduling ideas and tried to implement the 80 percent that is good enough for pretty much everyone. It may not be a hundred percent efficient in some situations, but it's close enough, and it doesn't require the configuring. The thing is, as with any technology developed internally at a large company for ten years, it was tailored to the internal customer set, to what Google would need after ten years. It's really hard to make that palatable in a broader world where things work differently. If you look at the Closure Compiler story, it was kind of similar: it still works perfectly well internally at Google for a lot of projects.
And one can argue it optimizes your bundle better than the alternatives. But it's so challenging in the sense that it's kind of hard to use; the tradeoff doesn't work out. To get that five percent improvement, you have to really struggle with the compiler. Bazel, I feel, is somewhat similar. If you are able to make it work in your organization and it's already helping you, stick with it, but the act of making it work is itself a challenge. We had different ways we could integrate with Bazel. One of them was to use Bazel under the hood to build the binaries, with Nx driving it. But I honestly feel the number of people for whom this is beneficial is very small. If you are one of those people, talk to us; we have a package and we can explore it together. Something about your organization has to be unique for it to be worth it. For example, you already use Bazel, or you have some Bazel expertise; then sure, you can use it. If you don't have Bazel expertise, it might be a lot more challenging than you imagine to make it work. And from what I understand, the Angular team is pulling the Angular CLI Bazel integration out of the box; they're removing it currently, there's a PR open, because the market for it is small.

So yeah, like Victor said, we're big fans, and a lot of what's in Nx is inspired by Bazel. We've just done our own twist that gives you much of the value. It's really the people who need that last 20 percent of control. I would put it this way: I like Haskell, but I wouldn't want to use Haskell to write applications. So think of Nx as TypeScript and Bazel as Haskell.
Maybe conceptually Haskell is better in some ways, but it's a challenge. That's not a failure; if you can use it, good for you.

OK, we're a little bit over time, so I'll just recap a couple of the other questions on here. Someone asked if there's an on-prem version of Nx Cloud. We're working on a private cloud version of Nx Cloud; just shoot me a note at jeff@nrwl.io if you want to get notified as that becomes available. We're working on summarizing that and putting together some info to send out to some of our customers. There'll be something soon; we've gotten a lot of demand for that.

Another person asked about React and Angular in the same monorepo. Yeah, lots of people are doing that. It works really well between them. Can I add a quick point there, ten seconds? It works with the Angular CLI, and Nx also comes with its own CLI, which functions basically the same way. Which often matters: you have a monorepo workspace with React, Angular, and a bunch of Node apps, and some React folks may not be happy invoking ng, just for emotional reasons; something about it just feels wrong. So they can use a different flavor. One character is different, the rest is the same. Otherwise everyone would have to see ng every time they try to build the React app. And you'd have a workspace.json instead of an angular.json, too. Exactly, so you avoid having angular.json in your workspace. Just to say: if you have something like a 50/50 split, consider that option.

And someone asked about Gatsby apps, and that's something our team is working on first class support for.
We have a package already that works; we just don't talk about it publicly too much yet, because it's not battle tested. Well, we talk about it a bit, and it actually works for a lot of folks. In general, we have support for React itself, and we have really good support for Next.js, much, much improved now; I really enjoy using it. Gatsby is another piece that's coming; we just don't talk about it much yet. And React Native is going to come maybe toward the end of the year or so. We already have some spikes, we know that it works, so it's going to happen later.

And someone asked about Scully. The Scully team uses Nx and we are great friends with them, so I think it's a matter of time before it's official. I don't know that we need to add much for Scully, to be honest; they have schematics. Yeah, it uses the same schematics API and everything. Angular Universal is the same: you can use the Universal schematics in Nx, and I know people are doing that. I think it works out of the box.

Someone also asked about Ionic, and there's a community plugin for that which seems pretty good. We aren't currently providing first class support for it. Can I add one more point? Sorry, I know we're over time. We're trying to make Nx pluggable, like VS Code. Most people think of VS Code as something you can just put plugins into and it works. We have a special command, create-nx-plugin, that creates the layout for a plugin.
So if you have your own technology that you want to add support for, with schematics and builders and automated tests for those schematics and builders, we have really good support right now. We have a video that walks you through it, a funny video with humor and stuff, and a good guide. Once your plugin reaches high quality, it can be added to the community plugins page. So if you're interested in building a plugin for your favorite technology, talk to us. And if you go to nx.dev and click Plugins at the top, you can browse the list of plugins to see what's available. The plugin API is also being extended to allow even more cool stuff.

Someone asked if we could extend for a couple more questions. You're more than welcome to; if people are interested, go for it. OK, so let me see which ones we've already answered live. And Victor, maybe you have a hard stop, but I'm happy to keep going. Someone asked whether Nx produces telemetry and how it can be accessed. You mean how long every build and operation took, stuff like that? Can you be more specific about what type of telemetry? There are plenty of things you could mean by telemetry. Nx by itself doesn't produce anything, but if you add Nx Cloud, it does. Let me show you; just let me make sure I can share something here.

While you pull that up: Nx Cloud is something where there's now a tier, an ongoing free tier of Nx Cloud, where you get several hours per month, five hours per month, of free cache savings. Well, when you say now, it's actually tomorrow.
So right now, if you sign up, you get a five-hour coupon so you can try it out and see how it works. The way we charge for Nx Cloud is per hour saved. We take the original time for running a command, running a task, then we subtract the time for running it with the cache, and calculate how much time you saved. It's just a dollar per hour saved. So you get a five-hour free coupon now, but as of tomorrow we'll have an ongoing free tier. So if you have a small workspace that could benefit from some of that build savings, you can have it on an ongoing basis.

But in addition to the cache savings, we're going to be adding more to Nx Cloud. The distributed cache was just the first thing that made the most sense to invest in, but we're also building more robust analytics of workspace performance. Right now it's focused on the cache. Go ahead, Victor.

So basically we now collect more data; we collect quite a bit of interesting data, we just don't present it in any meaningful way yet. But if you open a workspace, you can see what it's useful for. An example in this case: for builds, it will tell you that it saved 18 hours of building. It still took 15 hours over this period to build different projects in this workspace, but it would have been about 60 hours if the caching wasn't on. And you can go in here, drill into the results you care about, look at each individual project, and see how every project contributes to the total time. It's not a lot of data Nx Cloud provides yet, but this is what we show right now. These numbers could have been a lot bigger.
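The billing arithmetic described above is simple enough to write down: saved time is the uncached duration minus the cached duration, summed over runs, billed at a dollar per hour. The numbers below are invented, just to show the shape of the calculation:

```python
def hours_saved(uncached_s: float, cached_s: float, runs: int) -> float:
    """Each cached run saves (uncached - cached) seconds; convert to hours."""
    return runs * (uncached_s - cached_s) / 3600

# e.g. the 20-second demo build replayed in about 1 second, 900 times a month
saved = hours_saved(20, 1, 900)
cost = 1.0 * saved            # a dollar per hour saved
assert round(saved, 2) == 4.75
```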
The reason the numbers aren't that epic is that this is our internal repo and we push to master frequently, so we don't follow the practices where you'd save the most, like long-lived PRs that get rebuilt over and over. Still, we see substantial savings, and that's not nothing. So that's the data we collect at the moment.

And like Victor said, we've been instrumenting more to collect more interesting data, and we've got plans for how we're going to show interesting things that help you understand where time is being spent, to a more granular degree, across different environments, and give you insights into how you could partition things better to take more advantage of the cache. Actually, can I show something else which I think is critical? We'll just go a few more minutes. Let me see if I can pull it up.

This is the thing that I'm particularly, I would say passionate about, though the word "passion" irritates me; something I care about deeply. OK. The problem I'm having when trying to help folks use the cloud is that if something doesn't work, if there's a low ratio of cache hits, it's kind of challenging to help them out and say, OK, here is how you can make your builds faster by reusing more computation. The idea is that we should be able to tell you something useful when you run your stuff. So instead of just saying "hey, we didn't find anything in the cache, the end, there is nothing you can do about it,"
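The kind of near-miss diagnosis this leads into, "everything was the same except one file," amounts to diffing the hash manifest of the requested computation against the nearest stored one. A toy sketch with invented names; in the real system only hashes of inputs are kept, never file contents:

```python
def explain_cache_miss(requested: dict, nearest: dict) -> list:
    """Name the inputs whose hashes differ between the computation you
    asked for and the closest match already in the cache."""
    keys = set(requested) | set(nearest)
    return sorted(k for k in keys if requested.get(k) != nearest.get(k))

mine = {"task": "build shop", "apps/shop/version.ts": "hash-aaa",
        "flag:sourceMap": "true"}
theirs = {"task": "build shop", "apps/shop/version.ts": "hash-bbb",
          "flag:sourceMap": "true"}
# Only one input differs, so the tool can point straight at it:
assert explain_cache_miss(mine, theirs) == ["apps/shop/version.ts"]
```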
what we can tell you instead is things like: we didn't find anything in the cache for what you're trying to do, but someone else did something very similar; everything was the same except one file was different. We don't actually store the contents of the files, just their hashes, so we can identify the difference and tell you: you're building your app, we didn't find anything in the cache for this app, but we found a cached result for the same build command where someone passed an extra flag, say sourceMap false or something like that.

So when you start figuring out how to optimize your flow, especially in CI, you can look at that kind of debug information to find the cases where, for some reason, different CI jobs do things that are very similar but not exactly the same. Like: oh, maybe you keep changing the same file; maybe you've implemented some versioning somewhere, and that's going to break the cache. The command will tell you: hey, by the way, that file is different, the rest is the same. So you can look at the file and see that maybe you're incrementing a version number in there and you shouldn't do it that way; maybe you should stamp the version into the output instead of into the source itself. Things like that. That's one of the things we've been working on the last couple of weeks, and it's already out. If you pass the flag when your build misses the cache, it will give you all this information. Basically, we're trying to help people not just use the cloud, but maximize their number of cache hits.

All right. Thank you so much, everyone, for listening and sticking around and asking so many great, interesting questions. And thank you to our host. Feel free to reach out to either of us on Twitter. Victor doesn't respond on Twitter, but I do.
You can find my profile pics on Twitter and Instagram. And you can also email me, jeff@nrwl.io, especially if you want to onboard onto Nx Cloud, or try it out, or have questions. Feel free to email me or Victor and we'll answer your questions and help you get started.