How long does it take to code a BI analytics platform? Using atoti and ipywidgets, we will see how we can create a small application that lets us spin up a BI platform from a single CSV with just a few clicks. Learn more about multidimensional analysis through interactive visualizations and storytelling with atoti dashboards!
Thanks, Martin. So now I'm actually very nervous, coming after Mabel, right? Pardon my coding — it may not be the best or cleanest code, and there's no type hinting. Pardon me. Okay, so as Martin introduced me, I'm a bit of a jack of all trades, master of none. My main objective here is just to get it to work — I feel like a student getting marked. So the topic I'm going to touch on today is automatic cube creation. But actually, the whole idea is to have a small program that allows end users to interactively create a Python BI analytics platform, so that they can create dashboards and perform some analytics. Let's skip the technical part first — let me show you what I'm trying to say. I have this notebook that I've already run through; ignore all the code. Here I have a button. What I'm going to do is click on it and select a data set — I've pre-downloaded some data sets from Kaggle, and I'm going to choose the avocado CSV. Once I select that, I get a list of columns from the data set. The next step is to choose a set of keys. So what are keys here? In atoti, keys are meant to identify the unique data rows in your data set. Why is that important? First of all, when your data set is not unique — when there are duplicates based on this set of keys — the last uploaded record will actually overwrite the previously uploaded one. So we only keep the latest unique records in the cube. The second reason we have the keys is that atoti actually partitions the data based on these keys, which speeds up your queries — query performance will be much faster. In case you don't know anything about the data, you can always choose none. What does this mean? It means that all the columns are used to identify the unique rows. So if in doubt, choose none.
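The key semantics described above — duplicate rows on the key columns get overwritten by the later record — can be sketched in plain pandas. This is a minimal illustration of the idea with made-up data, not atoti's actual implementation:

```python
import pandas as pd

# Toy avocado-style data with a duplicate on the key columns.
rows = pd.DataFrame(
    {
        "date": ["2018-01-07", "2018-01-07", "2018-01-14"],
        "type": ["organic", "organic", "organic"],
        "region": ["Albany", "Albany", "Albany"],
        "total_volume": [1000.0, 1250.0, 980.0],  # second row updates the first
    }
)

keys = ["date", "type", "region"]

# Keep only the latest record per key, mirroring how a later load
# overwrites an earlier one in the cube.
deduped = rows.drop_duplicates(subset=keys, keep="last").reset_index(drop=True)
print(deduped)
```

Choosing no keys is then equivalent to `subset=None`, i.e. a row is a duplicate only when every column matches.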
So in this case, I'm going to choose date, then type, year and region. Once I submit that, you can see that my program starts creating a session, and then it takes the data I've uploaded and creates some tables — atoti tables. Give it a little while. Here you can see a progress bar moving as my program creates them. Once they are loaded into the cube, I now have a BI analytics platform from one CSV. Imagine your user is not very tech-savvy, but they have a data set they want to analyze. If you build this small little program and give it to them, you can say: hey, now you can actually analyze it. But then they'd ask: what do I do next? So next, we go and create a new dashboard here. I'm going to zoom out a little bit — it's very big — let me just reset it a little. On the left-hand side we have a couple of drawers: the content editor — I'll come back to this later on — then the filter editor and widgets, as well as the style editor. And at the top here we have the ribbons. I think it's pretty intuitive because it's similar to how Excel works, right? You have the ribbons at the top and some editors on the left. So what I'm going to do now is select the years, and then say: I want to know the total volume of avocados sold across the years. Now I have a table. And if you're not happy with a table — I'm not happy with the table, I want to see a trend — let me switch over to a line chart. Easy peasy, right? Okay, so now let's say your user says this is not very interesting. I can split it again by the type, so now I have the trend of conventional avocados versus organic avocados. And of course, you can drag and drop even more of them. Let's say, for example, I want to compare across the years.
Say I want to compare the sales of 2017 against 2018, and then look at the total volume. Now you can see the difference across the two years. So the dashboard is yours to build — you can play around with the different kinds of visualizations. You can have a pivot table that allows you to drill down: for example, for each region I want to look at the sales by date — maybe the large bags, small bags and total bags. Let me just collapse this a little, and we can add a bit of storytelling — some interactive components for your end users. So let's say, for instance, I select region. I can multi-select — say I add in California and Chicago. If you don't want to show it as a multi-select, you can always change it to a single select, so that at any point in time they can only select one. So the question now to you is: how much time would it take you to develop this as an application yourself — for one data source, for one CSV file? Is it worth the effort to build a whole BI analytics platform? Not really, right? That's the cool thing about Python: there are so many libraries out there. You just need to have an idea, look for the right library, fit the pieces together like Lego, and you get something like this. So now I can go back to my notebook and click on upload again — another data set I've downloaded from Kaggle, the data science salaries one. Now I reset the whole program again. And this time, instead of choosing a key based on, say, the work year, I'll just choose none — I don't select any key. And you see here, I'm deleting the existing unnamed session to create the new one. What this means is that — because I'm a bit lazy, I didn't perfect it — I just want to explore the data, and I want to allow my user to explore the data any time they're ready.
So I destroy the previous session, re-instantiate, and create a new one. Now the users can go in and look at, say, the salary for each job title, for instance. Okay, but now you see the problem: my work year becomes a sum. I have a mean and a sum — I don't know if it's big enough for you to see — a mean and a sum for every column. So in atoti, when you have a data set, there are two types of columns: numerical and non-numerical. Normally, when we want to look at business metrics, we look at the figures, but along some hierarchies, right? What hierarchies? For example, company location. So in this case, year should be a hierarchy, but it became a metric. That's because I didn't select it as a key. Theoretically, I should have selected it as a key; then it would have been created as a hierarchy, and I'd be able to query along it. So far okay? Am I boring you? All right, so as you can see, we can easily piece everything together. Now let's take a quick look here. You can see that I had created and saved a dashboard earlier — actually, I could have saved it, but I didn't perform a save, so it shows up blank here. But let me go back to the technology behind this. Let me zoom out a little. In this particular notebook I used a couple of libraries, but most importantly I want to highlight two. For the interactive components, can you guess which library I used? No? I used ipywidgets. ipywidgets gives you this kind of progress bar, the multi-select, as well as the upload button, and so on. So check it out — ipywidgets has a whole list of interactive components that you can put into a Jupyter notebook to get this kind of interaction. Okay, so that's the first part of this program: allowing your users to select their own CSV and then create a BI analytics platform.
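The interactive pieces mentioned above can be reproduced with a few lines of ipywidgets. This is a minimal sketch — the widget choices and option values are my own, not the exact ones from the demo notebook:

```python
import ipywidgets as widgets

# Upload button for the user's CSV file.
upload = widgets.FileUpload(accept=".csv", multiple=False)

# Multi-select for the key columns (in the real app, populated from the CSV header).
keys = widgets.SelectMultiple(
    options=["date", "type", "year", "region"],
    description="Keys",
)

# Progress bar updated while the tables and cube are being created.
progress = widgets.IntProgress(value=0, min=0, max=100, description="Loading")
progress.value = 60  # e.g. after the table-creation step

# In a notebook you would display(upload, keys, progress) and wire up
# observers/callbacks to drive the cube creation from the selections.
```

The whole "small program" is essentially these widgets plus callbacks that hand the uploaded CSV and selected keys to the cube-creation function.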
So the second part — easy, right? Exactly, that's why I'm here today. Okay, so atoti is actually a free Python library, so you can download it and play around with it. Now let's take a look. I won't go into the details of how I integrated the ipywidgets; as long as you know how to retrieve the data source and pass it on to the next function, you can pick up this whole program — and anyway, it's available later on, I'll be sharing it with you. Now let's look at the cube creation. Let me zoom in a little. It's very straightforward. Step one, we create a session — an atoti session. In this session, what I've done is fix the port to 9090. You can change the port, or you don't even have to pass anything to the parameter: when you don't pass anything to the session, it will pick a random port number for you — which is bad, okay? If you're just playing around, trying to explore some data, it's fine. But if you're going to share your dashboard with someone else — say, the dashboard we created earlier on: see, it's broken now because I recreated the session, so the data has changed and the columns have changed. But suppose I want to share the dashboard I have here: I can just send this URL to someone else, provided they're on the same network. (This is localhost, so nobody can access my machine.) So I'd want to fix the port so that they don't have to rebuild their dashboards every time, right? Secondly, firewalls: if you're going to production with an actual project, you need to fix the port for the firewall. It's more controllable when you fix the port. Then step two. Earlier on, we selected the CSV and uploaded it into the notebook, right? In this use case, I've converted it into a pandas DataFrame. And of course, the keys here are the keys you selected earlier on, to identify your unique data rows.
So now, using session.read_pandas, I can create an atoti table and load the data into the table. Of course, in a proper project setting, we would first create the table and then load the data in. But in this case, because I want it to be dynamic — it should create based on whatever the user uploaded — read_pandas will automatically create a table structure based on the columns available in the data set. The next step is to take this table and create a cube. Has anyone played with a cube before, an OLAP cube? No? Okay, so basically, atoti creates an in-memory data cube that allows you to view the data across different dimensions. You can slice and dice and switch your perspective around — like how I drilled down just now: I can look at the year, I can look at the company, I can switch my view any time I want. That's the beauty of having a cube. Later on I'll tell you more about our cube and how it's different from the typical OLAP cube. And then finally here, I'm just using webbrowser to open a URL. I'm using the local URL here to keep things simple, but with session.link I can get the URL to the web application so that it launches. Then you can start building. So this is very easy — four main statements to create the cube — and voilà, you have your BI analytics platform. Okay, so now we're done with the basic crash course. Of course, if it were only this basic, my company wouldn't need me anymore, right? I'd have to say adios, goodbye. So I've created an advanced version just for this session. What can I advance on? Before I go into this advanced mode, let's have a quick recap of what we can actually do here. Okay, so basically the idea is that we create a session, okay?
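The four main statements can be sketched roughly like this. This is a hedged sketch based on the atoti API as described in the talk — the exact constructor and names vary between atoti versions (older releases use `tt.create_session(...)` instead of `tt.Session(...)`), and the file path and key columns are illustrative:

```python
import webbrowser

import atoti as tt
import pandas as pd

# Step 1: create a session on a fixed port so the dashboard URL stays stable.
# (user_content_storage, shown later in the talk, would also go here.)
session = tt.Session(port=9090)

# Step 2: the CSV chosen through the upload widget, read into pandas.
df = pd.read_csv("avocado.csv")  # illustrative path

# Step 3: create a table and load the data in one go; the keys are the
# columns the user picked to identify unique rows.
table = session.read_pandas(
    df, table_name="avocado", keys=["Date", "type", "year", "region"]
)

# Step 4: create the cube (hierarchies and measures are inferred
# automatically), then open the BI web application in the browser.
cube = session.create_cube(table)
webbrowser.open(session.url)
```

Anyone on the same network can then open that URL, which is why fixing the port matters.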
And then, using the session, we can do the data loading: we create an atoti table and load the data, or the other way around — we can use a connector to read your data source directly and create the table. So there are two ways to go about it. And we have a few data connectors, such as SQL, Spark, pandas, CSV and NumPy, for instance. And that's another beauty of Python: if I don't have a connector for your source, I'm sure you can load it into pandas, or into NumPy, or a Spark DataFrame — as long as you can get it into a format that a connector can load, you can use atoti. I've shown you the simplest basic setup, which is one single table. But if you imagine something like a database, you can have multiple tables, depending on how your data source is organized. You can join these tables together based on their common columns, and then you have a snowflake schema. By snowflake schema, we mean that we have a base table that contains the most granular-level data, and we use that base table to create a cube. What I've shown you just now is a single cube within a session — but within a session, you can have multiple cubes if you want. Again, that's a matter of how you plan your data model: you'd group data of the same structure, the same storyline, together. If your company wants to see P&L versus intraday liquidity or whatever in finance, you can have different cubes, and they're all accessible within one BI platform. So it's one session, one BI application. So far okay? Then let me quickly show you again — what's the difference? Same thing here: I scroll right to the bottom and do the upload. This time I'll use a financial data set, the VaR data set — value at risk. So first of all, you can see that I'm now exploring with a different ipywidget.
I use checkboxes for my keys, and you can see that all the columns are exposed here. And I have a dropdown list here — can anyone guess what this dropdown is for? Exactly — you listened to Mabel, right? Okay, let's take a quick look at my data. It's a financial use case, and in finance we want to know the daily profit and loss. So imagine I have one figure per day across 300-odd days. If I were to store each of those as an actual data row, I'd have to identify the keys — say instrument code, book ID, then the day — and then the value. So the row count is multiplied by 365, and your data set gets huge. But by having a list that stores the value for each day, I compress my original data set. Now, if I were to load this data set into pandas, what do you think the inferred data type would be? String — unless you cast it, right? It would be treated as a string, which is not what I want: I don't want the vector treated as a string. That's why over here I can choose my PnL vector as a double array. Then I select my keys for the data set, and again I submit it. There are some other slight differences, but let me just show you once this is created. You can see that I've flagged all the numerical columns, because later on we'll see that I've actually created some additional measures in this version — instead of just the default mean and sum that atoti creates. So, can anyone spot the difference? Nobody? This is the default landing page. The main difference is that now I have a demo folder and a data exploration dashboard, right? Let me go into presentation mode. Earlier on, when I restarted the session, it was a clean page — nothing persisted, even though I had created some dashboards and done some things. So by default, atoti will not persist anything.
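You can see the inference problem in plain pandas — a vector packed into a CSV cell comes back as a string unless you parse or cast it yourself. A toy illustration of the point above, with made-up instrument data:

```python
import io
import json

import pandas as pd

# A tiny CSV where the daily PnL vector is packed into one quoted cell.
csv_text = io.StringIO(
    "instrument_code,book_id,pnl_vector\n"
    'ABC,1,"[-12.5, 3.1, 7.8]"\n'
    'XYZ,2,"[4.0, -1.2, 0.5]"\n'
)

df = pd.read_csv(csv_text)
print(df["pnl_vector"].dtype)     # object — pandas sees plain strings
print(type(df["pnl_vector"][0]))  # <class 'str'>

# To treat it as a numeric array, you must convert explicitly, e.g.:
df["pnl_vector"] = df["pnl_vector"].apply(json.loads)
print(type(df["pnl_vector"][0]))  # <class 'list'> of floats
```

This is exactly why the little app exposes a dropdown for the data type: without it, the vector column would be loaded as a useless string.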
The data cube is created in memory, your tables are in memory, and the dashboards, widgets and filters you create are in memory too. So when you re-instantiate a session, everything is gone. But we can actually persist it if we want — and that is the first change in this advanced notebook. So let's quickly go back to the code. You can see that in my session function, other than the port number, I've also used a parameter called user_content_storage. In this content folder that I've defined — let me go back to my root folder, you can see it here — I have this content.mv.db file. This is actually an H2 database, and it stores the widgets you create, the filters you've saved, as well as the dashboards you've saved. So now, if I want to share with someone else, I can copy this URL, open an incognito window, paste it, and you should be able to view the exact same dashboard. If you put this on the cloud, you have a public URL or IP address, and you can share it with any of your collaborators. And the good thing is, because everything is created on the fly, whatever changes I make in the Jupyter notebook, the other side will see immediately. Okay, so far so good — that's the first change. Now that we've persisted things, the next thing I want to show you: suppose I'm a user who doesn't really code, and I want to compute something new — do I have to go back to IT? If I have to go back to IT, then I go through requirement gathering, then development, then SIT, UAT, production — and I get my final product one or three months later. So instead, there is this function called "new calculated measure". For example, I'm going to compute a PnL. You use square brackets — this is basically MDX-style syntax.
As I type, you can see it's also suggesting the available values. So here I'm going to take my PnL value and multiply it by another measure — in this case, quantity.SUM. I can add it to a view — and I've added it to the wrong view, because that was my selected widget. You can see I have my PnL here; sorry, it's a little bit small, let me just undo that small change. Okay, so now I have my PnL here, and I'm going to do a save. I save my calculated measure — this is where it gets committed into the H2 database I told you about earlier. Then I can select the correct widget, go to my file and my saved measures, select this one, and apply it. Let me go back to presentation mode. So now I have the quantity multiplied by my PnL value, giving this PnL here. So your users do have some control over measures they want to compute on the fly, without going back to IT. Any questions so far? Nobody's asleep yet, right? Okay, one question: are those sessions all shared by one login, or do different users have different settings? Basically, you view this as an application: many users access the application, so there's only one session. Of course, you can create multiple sessions within the Jupyter notebook, but each one will have its own web application. [Question about collaborating on the notebook itself.] Theoretically, that's a process outside of this — that's where your GitHub or Bitbucket version control comes in, with your commits and so on. And theoretically, you shouldn't be sharing the same notebook, because there's only one kernel per notebook, right? So if you restart the kernel, the session is gone.
So theoretically, you should have something like JupyterHub, where everybody can spin up their own instance: they copy a version of this notebook and adapt it to their own use. But as a project — if you're running it as a project, and Mabel is here — typically you wouldn't run a Jupyter notebook in production. Yeah. So theoretically, you could extract this out and put it into a Python script, but then you'd lose the ipywidgets interactive components. [Question: does it provide hosting?] No, it's just a Python library, so you can use it in a Python project or in a notebook. Okay. So fundamentally, atoti is a BI analytics platform — it provides you a holistic solution, so you should be able to create your own measures and KPIs based on your own formulas and computations. Here in this program, I only incorporated one very simple measure, called single value. And this is where I bring you back to my product. Going back to the documentation: under the reference, of course, you can install it and go through the tutorial on how to use it for your own project. For this particular one, look under the API references — say, the aggregation module — and you'll see min, max, max_member, mean, median, et cetera. A lot of functions are available here. Not to mention that you can aggregate your data within some scope, or perform functions like date difference, date shift, or look at the parent value, and so on. So if you have a formula, it's a matter of how you use the various functions, put them together and chain them up to build it. It's pretty much like Excel: you have your Excel functions and you formulate things bit by bit, chaining them together — except it's in a Jupyter notebook.
So in this use case, what I'm using is single value. Maybe it's not very intuitive, but basically the idea is: for the members of a level, if all the members have the same value, it returns that value; if they differ, it returns nothing. Say, for example, you go to a shop, and all the Hitachi TVs they sell are $500 — but there's one particular model that sells for $99. Then it won't be able to return a single value for Hitachi TVs. But say for Samsung, every TV is $999 — then single value returns 999. It's a very minor thing, but you'll find that in some cases we need exactly that — here, just to demonstrate that we can create some measures. So you can see that, depending on how dynamic you want your cube to be, you can add on to this setup and expand it. And to go back a little to the part about data types: atoti inherits the data types from the pandas DataFrame. We take the type you selected earlier on and cast the column; if you didn't select anything for the data type, we automatically take the type inferred by pandas. Okay. So I think that's about it for the atoti part. If you're still with me, I can explain the technology behind my product. It can be a plain Python project: you take the library, take your own data — multiple data sources, multiple tables — join them together, create a cube, then perform your computations, et cetera. Or you can make use of Jupyter notebook or JupyterLab, like what I'm doing now. atoti has some custom Jupyter features that help you explore your data — we'll see later on — to make prototyping much faster and the experience interactive. Okay, and underneath, it's actually Java.
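The single-value semantics can be sketched in a few lines of plain Python — my own toy re-implementation of the idea described above, not atoti's code:

```python
def single_value(values):
    """Return the common value if all members share it, otherwise None."""
    unique = set(values)
    return unique.pop() if len(unique) == 1 else None

# Hitachi TVs: one model breaks the common price, so no single value.
print(single_value([500, 500, 99]))   # None

# Samsung TVs: every model costs 999, so 999 is returned.
print(single_value([999, 999, 999]))  # 999
```

In the cube, the same rule is applied per level: an aggregate cell shows a number only when every member beneath it agrees on that number.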
Actually, I'm a Java developer — I only picked up Python about three years ago. Okay, so underneath it's Java, and on top of that we have the in-memory data cube with the BI analytics platform. The history of the whole product is that these used to be sold as two separate products; and now, because we have the Python wrapper, we decided to make the software freely available — the entire thing is free. All right, so feel free to use it. Of course, there's the EULA — check out the EULA. [laughs] Okay, everyone still with me? Can I continue a little bit more? Five more minutes, I think. Okay, so now let me tell you a little bit more about the data cube — OLAP, right? So typically in OLAP, let me put it this way: we have some dimensions and we have some measures. As I mentioned just now, we created only one atoti table, so I only have one dimension, named after the table, by the way — a silly name, sorry. So this is the parent dimension, from the table that I created. And for each non-numerical column, or key column, that we selected, a hierarchy is created, and beneath each of these hierarchies is a single level. That's why by default it's called multidimensional — exactly. So each of these is a hierarchy, and each table that you create becomes another dimension. A dimension groups the hierarchies, and a hierarchy is the axis you query your business metrics along, right? So, for example: which building, what time, and maybe who — these are the different dimensions you can look at. And then, say, quantity — all these are measures. All the numerical figures that you're interested in statistically are measures. For atoti, by default it creates them as mean and sum, and we can also have a .VALUE measure — this .VALUE is the single value aggregation I created earlier on.
Okay, so basically, that is the structure of a cube. Now, with this structure, let me quickly show you: this is a customized feature of atoti, where we have the atoti editor here, which allows you to interactively build the same thing we built earlier on. I can see my underlying cube, and I can drag and drop into the table, and have a collapsible pivot table here, right? So let's say I look at quantity.SUM for each of these, and maybe I want to look at the PnL vector value as well. Here you can see that my PnL vector has 372 values in its list. Okay, so let's work a little more with the vector — with the cube, rather. To work with the cube, I can call cube.hierarchies, cube.levels, or cube.measures — these are the three attributes of a cube. So, for instance, I take my PnL vector and I'm going to scale it by the quantity sum. It's as simple as taking the one measure and multiplying it by the other. Again, going back to what Mabel was saying: if I hit Tab here, you'll see auto-suggestions of the available values you can use. In this case, I'll use the PnL vector value measure. And just to move on — let me collapse this so we can see it properly. Okay, so now that I have my scaled PnL vector, I'm going to define an aggregation. There are two levels in the scope here: the instrument code and the book ID. For anything you query at or below these two levels, I take the value of my scaled PnL; for anything above them, I perform an aggregation — a summation. What do I mean by that? Okay, let's do another visualization. So this is my original vector, and this is my scaled one. Let's look at a simple number.
For example, this one was negative, and when I multiply by a quantity of negative one, it becomes positive, right? So at the instrument code level, which I stated in the scope, you can see the values are exactly the same — but now I also have a summation on top. This is where the aggregation function kicks in: I get a value at the top. But we don't really want to work with raw vectors, right? They're not really readable; we can't really use them. So what we're going to do now is create the value at risk — that's the main purpose here. For this value at risk, I'm going to use the array function quantile to take 0.95 — the 95th percentile — of my PnL vector. So now I can do a visualization; let me just chart this a little. Again, I have my instrument code, then my PnL vector, and then my VaR here — you can see it appears immediately. And if I go back to my dashboard, you'll find the VaR there as well: whatever you do on the notebook side is available on the dashboard side too, so your end users see it immediately. Okay, so now I actually have my value at risk. And let's say I add my book ID, because I want to view it per book. You can see here that when I sum up all the instruments under my book, the total should be 10,127.939 — however, the top value here shows 47,080.6. Why is that so? Because we are first summing up the vectors, and then taking the 95th percentile of the summed vector. This is something we call non-linear aggregation: you can decide what formula applies at what level, it actually changes based on your query, and everything is computed on the fly as you query.
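This non-linear behaviour — the quantile of a summed vector is not the sum of the per-instrument quantiles — is easy to see with NumPy. A toy illustration with made-up PnL vectors, not the talk's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Daily PnL vectors for two instruments in the same book.
pnl_a = rng.normal(0.0, 100.0, size=365)
pnl_b = rng.normal(0.0, 100.0, size=365)

# VaR-style measure: the 95th percentile of a PnL vector.
var_a = np.quantile(pnl_a, 0.95)
var_b = np.quantile(pnl_b, 0.95)

# Book level: aggregate the vectors first, THEN take the quantile.
var_book = np.quantile(pnl_a + pnl_b, 0.95)

print(var_a + var_b)  # naive sum of instrument-level VaRs
print(var_book)       # book-level VaR from the summed vector
# The two differ: quantile is a non-linear aggregation.
```

A pre-aggregated OLAP cube would have to pick one of these numbers in advance; computing on the fly lets the formula apply at whatever level you query.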
So that's something different from classic OLAP, because with OLAP you have to perform the pre-aggregation first and put everything into the OLAP cube; then when you query it, it's there. Here, we define the formula, and as you query, we compute it on the fly. And not to mention that we actually support incremental data loading, which means that when new data comes in, I can call the load function, load it into the table, and you'll immediately see it in the data cube without having to restart — whereas in the typical OLAP situation, you'd have to rebuild your cube. Okay, so finally, the last portion here, which may be interesting for you. You can see here we have the formula — let me just collapse this a little — where we take the quantile of the array at 0.95. So I'm going to create a parameter simulation on this 0.95. In this case, I create a simulation with a measure called confidence level, which I default to 0.95, and I call this base scenario "95%". So then, if I query it now, it shows that I have this measure with value 0.95. I can output this to a DataFrame, and I can query it along some other levels as well — say, book ID, and then I can also have my instrument code, right? So the idea is: you can do your measure computations and aggregations in a pivot table, query it out, and output it downstream. You can do further computation, merge it with other data, create another cube again — the imagination is yours; it's your data. Okay? And now, back to our initial formula: earlier on I used 0.95 in this definition, and now I'm going to overwrite it with my new simulation parameter.
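Put together, the measure chain from the last few steps looks roughly like this in atoti code. This is a sketch reconstructed from the talk, not verbatim code: the measure and level names are illustrative, and details such as the `tt.OriginScope` signature vary between atoti versions.

```python
import atoti as tt

# cube comes from session.create_cube(table); these are its three attributes.
m, l, h = cube.measures, cube.levels, cube.hierarchies

# Scale the PnL vector by the traded quantity.
m["Scaled PnL"] = m["pnl_vector.VALUE"] * m["quantity.SUM"]

# At or below instrument code / book ID, take the scaled vector as-is;
# above those levels, sum the vectors element-wise.
m["Position vector"] = tt.agg.sum(
    m["Scaled PnL"],
    scope=tt.OriginScope(l["instrument_code"], l["book_id"]),
)

# Non-linear aggregation: the 95th percentile of whatever vector the query sees.
m["VaR"] = tt.array.quantile(m["Position vector"], 0.95)

# Parameter simulation: turn the hard-coded 0.95 into a measure users can vary.
sim = cube.create_parameter_simulation(
    "Confidence simulation",
    measures={"Confidence level": 0.95},
    base_scenario_name="95%",
)
sim += ("90%", 0.90)  # extra scenarios alongside the base one
sim += ("98%", 0.98)
m["VaR"] = tt.array.quantile(m["Position vector"], m["Confidence level"])
```

Dropping the "Confidence simulation" hierarchy onto a widget then shows the 90%, 95% and 98% VaR side by side, which is exactly what the demo does next.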
So with the confidence level defaulted to 0.95, let's do the visualization again. You can see that I still have my VaR, and then I'm going to create two more scenarios, called 90% and 98%, with the values 0.90 and 0.98, for instance. Finally, I can visualize them side by side, in this manner, so I can compare the computation at 90% and 98% against my 95% base. If you look at the editor here, all I've added is the simulation hierarchy, "confidence simulation". Basically, the small program I created just makes it easy for users to quickly do an analysis around a single CSV data source — but the library itself is not limited to that. You can expand on it depending on what you want to do; you can do things like this example. So really, the key point is that with the vast number of libraries out there, you don't have to code everything yourself. It's just a matter of imagining what you want — the goal you want to achieve — finding the right libraries, putting them together, and you get a new product. Okay, so with that, I'll end my session. Any questions? [Question about data limits.] Well, the limit is your machine: the disk space and the RAM you have in your machine limit how much data you can load into the cube, et cetera. By default, I think we set a limit of about 25% of your hardware RAM, but you can adjust this using the Java options. In fact, if you put it on the cloud, you can scale it according to your needs as well. [Question about Power BI.] Yes, that's about it — I'm not really a Power BI user, but I know the sharing is a little bit more restricted, because, if I'm not wrong, you have to engage another Power BI product to be able to start sharing; you have to host it or something. I showed you how I share just now — as long as you host it somewhere other people can reach it, even on your intranet.
If someone else can access your machine, they can use the IP address to reach your dashboard. [On securing access:] well, that's where the paid version comes in. By default it's without security, right? If you want to implement security, like login access and control over who can access the various data sets, then you have to go for atoti+, which is the paid version. Then you will be able to connect it to your LDAP, or if you have OIDC, you can have an authentication mechanism implemented. Your users can then log in, and based on their roles or user groups, you can configure whether they can access certain files, certain folders, certain dashboards, or even down to the data layer. You can say that team A can only access country A, and team B country B; that kind of possibility is there once you have the authentication mechanism. But the free version by itself can do a lot of things. Everything that I've shown you here is available in the free version, actually, yes.

So you can find this use case in the GitHub gallery: go to GitHub, to the atoti notebooks repository, and all the use cases are there. In fact, if you expand the notebooks folder, you can see we have a lot of use cases. The airline industry one was contributed by a user, and the rest we created on and off. The auto-cube is the one that I just demonstrated to you; it's a step-by-step guide, so you will be able to follow through it yourself. And if you are more interested in this, you can even go to the atoti publication on Medium, where for each use case we usually try to create a corresponding article explaining how it works. So you should be able to learn and pick up the tool by yourself, and you can see the different ways to secure your atoti session.

[Audience question:] For instance, you said you wrote a Python wrapper around your Java library.
Did you use reflection? And how did you... [A:] I won't be able to tell you; I'm not from R&D. I would love to tell you, but sadly they think that I'm too talkative, so I'm in the evangelist job and not R&D. It's not open source, yes, exactly. But with the free version, at the moment at least, in my perspective you can do all kinds of aggregations already. The catch, of course, with the free version is that normally we say you can only have one builder and one reader, meaning that you can't really share a dashboard with a lot of people; we set some limitations there.

Initially, when we first started out, we targeted the data science domain, because a lot of the time we find that data scientists find this very useful for prototyping, exploring the data, building up the model, et cetera, and for running simulations as well. Every time you have a machine learning algorithm that outputs some values, the question is: how would the business find value in the prediction? So you put your prediction into atoti, where you have already configured the business KPIs. You create it as a scenario and show the business users what the original value is and what the predicted value is, and then when the actual value comes in, you can show that as well. So once we have set up the model itself, you just need to feed your machine learning predictions into the system. Likewise for the financial industry: you have your own risk engine, Monte Carlo simulations, et cetera. You can load them in, run them through the business KPIs, and see how they differ from one another. That's the kind of idea.

[Audience question about what happens when the data changes.] For example, the data has changed. The idea is that because atoti supports incremental data loading, you can add data on the go without having to restart, and your users will be able to see the new data immediately when they query.
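The prediction-as-scenario workflow described above can be sketched with pandas: compute the same business KPI per scenario and lay the scenarios side by side. This is only a conceptual illustration with invented column names and numbers; in atoti these would be cube scenarios feeding a pivot table rather than a DataFrame column:

```python
import pandas as pd

# One row per (product, scenario): the original figures, a model's
# predictions, and the actuals that arrived later.
data = pd.DataFrame(
    {
        "product": ["A", "A", "A", "B", "B", "B"],
        "scenario": ["original", "predicted", "actual"] * 2,
        "revenue": [100.0, 120.0, 115.0, 80.0, 70.0, 72.0],
    }
)

# Business KPI (total revenue) per scenario, shown side by side per product.
kpi = data.pivot_table(
    index="product", columns="scenario", values="revenue", aggfunc="sum"
)
kpi["prediction_error"] = kpi["predicted"] - kpi["actual"]

print(kpi)
```

Feeding in a new batch of predictions is then just appending rows with a new scenario label, which mirrors the incremental-loading point: new data becomes queryable without rebuilding anything.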
So if I go back to the dashboard itself here, OK, let me go into presentation mode. Notice there's a small icon here where you can turn on real-time mode, meaning that as data comes in, your query will refresh and you'll see the latest value on the widget. You can control this widget by widget, depending on your business use case, because not everybody requires real time, and it is more resource intensive. Any time you want the latest data, you can also just right-click and refresh the query.

Then it's a matter of how you organize your data. For example, in a bank you have your as-of dates: day one, day two, day three, day four. You just have to set the slicing so that each day you see a single date, and you can order them so that you only see the latest date, for instance. When you want to see previous data, you can always switch; you can play around with the quick filter, or here in the filter editor you have page filters, dashboard filters, and widget filters that you can apply to look at different data or date ranges.

I would suggest, for those of you who have more questions, just come forward and ask; we can discuss them directly. The rest of us who don't have questions should finish up the pizza, and it's quite late already. So, yeah, once again, thank you for your time. If you have more questions, just walk around, ask people, and help me finish the pizza.