Video details

Creating a Testing Workbench for Computer Vision Projects using Python

Python
09.18.2022
English

🖥 Presented by Women Who Code Python 👩‍💻 Speaker: Darshita Chaturvedi ✨ Topic: Creating a Testing Workbench for Computer Vision Projects Using Python
Computer Vision projects use libraries such as OpenCV, Matplotlib, etc. to display image or video streams to verify incremental work. However, these libraries are not well suited to displaying media because the output differs based on the end user's operating system, package distribution system, etc. By contrast, browsers provide far superior and more stable support for IO devices. Until now, it was not possible to utilize browsers to display media using Python (outside IPython-like notebooks).
In this talk, we will use an open-source full-stack web development framework for Python developers to achieve this objective. To demonstrate this, we will create a testing workbench for a Computer Vision (CV) project and run it on a browser.
This talk is suited for anyone who has beginner proficiency in Python. The examples used and the context of the discussion are around machine learning, but the knowledge gained and the open-source tools can be used in any Python project.
Resources:
Atri framework - https://github.com/Atri-Labs/atrilabs-engine
Testing workbench - https://github.com/Atri-Apps/cv_workbench
All Atri Apps - https://github.com/Atri-Apps
Atri utilities - https://github.com/Atri-Labs/atri_utils
Documentation - https://docs.atrilabs.com
About Speaker: Darshita Chaturvedi maintains the open-source project Atri engine, a full-stack web development framework for Python developers. She is also the Co-Founder & CEO of Atri Labs, the company behind this project. She has spoken at Python conferences such as PyCon Latam and JavaScript conferences such as React Native EU, React India, etc.
For our 💬 slack channel, 🎥 previous event recordings, 🗓 upcoming events, 💻 GitHub repo and more check us out on https://beacons.ai/wwcodepython
___ 💻 WWCode Digital Events: https://www.womenwhocode.com/events 🔎 Job Board: https://www.womenwhocode.com/jobs​​​​ 💌 Make a Donation: https://www.womenwhocode.com/donate

Transcript

Alright. And now I'll be handing over the presentation to our speaker: Creating a Testing Workbench for Computer Vision Projects Using Python, with Darshita Chaturvedi. Handing it over to you. Thank you, Eliza. I'll share my screen now. Let me know when it's visible. Can you guys see my screen? This looks good. Okay, great. Thank you. So thanks everyone for joining this talk. And special thanks to Eliza, Stephanie and the entire Women Who Code Python team for organizing this event. I am Darshita, and I'll be talking about creating a testing workbench for computer vision projects using Python. The examples and the context of the discussion that I'll use will be around machine learning and computer vision specifically. However, the open source tools that I'll reference can be used in any Python project. So, a little bit about me. I'm the maintainer of the open source project Atri engine, which is a full stack web development framework. You can think of it as a successor of Django in a way, but much more powerful. A few weeks ago, we released a beta version of this framework. I'm also the co-founder and CEO of Atri Labs, which is the company behind this project. In terms of Python specifically, I've been using Python in both academia and industry. After studying engineering during my undergrad, I worked in quant finance, where again I was using Python a lot. And then I moved to academic research at MIT. I later dropped out of my graduate program at MIT to work on my startup full time. I had never really thought that I would become a founder; it was never really the plan. But what motivated me to drop out of college and start a deep tech company is an interesting tale in its own right, which I've reserved for a different time. One personal project that I'm very proud of is that I'm editing and managing the publication of a set of novels that my grandfather wrote over the last five decades. That's pretty cool family history.
I'm so proud of him, so proud that I'm able to help him in any way I can. And lastly, in an alternative reality, I'm almost certainly an art historian. I've studied engineering, STEM, math, physics, all of those things all my life. But especially now as a founder, I feel like I'm endlessly fascinated by how artists over the centuries had their own unique creative vision. They had the courage to break traditions and build something that endured through centuries. And people like you and me can go to museums and look at something that was built in, like, 1716 and so on. So yeah, that's all about me. Let's dive into the talk. So this is a bit of a controversial statement, but let me clarify. In our web framework, we have a high level computer vision library, and it's provided as a set of utility functions. While building and testing this library, we use OpenCV a lot, and of course we love it, as I'm sure all of you here who have worked in this field do. But several problems arise when we forcefully use OpenCV for something that it was never intended to do, and consequently those features are limited and buggy. The most significant one that I'll focus on in this discussion is how we use imshow windows to display media, and this is largely the motivation for this talk. The three boxes that you see here list some of the core issues that are also highlighted in OpenCV's GitHub repository, and the categories of issues that we have faced as well. And I'll be referencing them to highlight how cross-platform testing becomes extremely difficult for computer vision projects when we use imshow windows. So let me take an example. My team and I were building an algorithm that detects the position of the mouse cursor on an image and zooms in accordingly. However, OpenCV was exhibiting irregular behavior with different input devices. So if we used a USB mouse, the cursor was detected as expected.
However, when using the built-in trackpad of our laptops, there was a significant delay in processing that information and, consequently, significant inaccuracy in the detection of the mouse cursor. Moreover, for certain media types, and especially for video, if you use imshow windows for testing, it quickly becomes unresponsive. Sometimes it's easy to fix it through inefficient workarounds such as increasing the wait time. But at the core the issue is that OpenCV gets stuck when CPU resources are tight. Besides, there are significant discrepancies around which piece of code works on which operating system. So today, for example, I was planning to use a demo which I had prepared 14 days ago and had used at PyCon Latam, and everything went well then. But when I was testing it this morning, it suddenly stopped working because there was some change in the package distribution system, and the new OpenCV version had a lot of bugs, so it was quite a stressful day. And to extend the example that I was offering earlier: when one of my team members extended the zoom effect algorithm to detect the keyboard cursor, the code was working perfectly on his Windows system, but none of the keyboard events were being detected on macOS. So cross-platform testing becomes really difficult, and you're always unsure if your code will work even on your team member's system, let alone your end users' systems. One common suggestion given to fix some of these issues is to create custom builds and compile OpenCV manually. But of course, this is not within everyone's capacity. So now let us assume that we were able to surpass all the prior technical challenges. Maybe we were lucky, or maybe we just persevered. What comes next is an organizational problem, which is: how do we collect feedback from all stakeholders, who are both technical and non-technical? We faced this problem most severely.
While working on an academic project with the U.S. Department of Defense, we started sharing output snapshots over email, and this quickly ballooned into what I like to call a snapshot jungle. There is no systematic way in which you're collecting feedback. It's very difficult to give feedback corresponding to each image and organize different people's opinions. It becomes a long, messy email thread. Moreover, there's no sufficient version control of the test output from the previous model. So if you don't know whether certain test images did really well with the previous model, you don't really know whether they're worth revisiting with the current model. But how did we actually address these problems? So we created a testing workbench that runs in the browser. I'll discuss why we think this is a good solution, but first let me show you the workbench and describe how any technical or non-technical person can use it for a computer vision project. So this is a multi-page testing workbench for a driving license OCR project. If we click on any image, then here we can see that basically what this workbench is doing is that it takes in a driving license as an image. Then our model extracts all the text fields, all the labels that we were interested in. Here we can see that it's extracting name, license number, date of birth, expiry date, and address. Our deep learning model also adds bounding boxes over the image, around the fields we want to extract, and the extracted text is displayed here. And now you can review the results image by image in a much more accessible manner. So you can see that in this case, for example, the license number is not exactly right. This license number starts with I1, and so on. This is mostly correct, but there are also these initials that indicate the labels that are being added to the extracted text. This model is trained on U.S.
driving licenses, and there are significant differences in the format of these licenses depending on which state issued them, when it was issued, et cetera. So, for example, if it's a California driver's license, the address is here, but if it's from another state, the address might be here. There are many ways in which you can get an incorrect output, many reasons why you can get an incorrect output. If we go to the login page, my credentials are already saved. So if I log in, I can see all the test images that my team has tried, and there are three categories of reactions corresponding to each test image: correct, warning, incorrect. We can also see the number of comments that our team has left for each image. So basically, we get a sense of the performance of our test images just by looking at the reactions, but we can also review them individually. We kind of did this earlier, but let's do it for maybe this one. Again, license five. Here we can see the feedback that our team members have provided. And if we dive in a bit more deeply, we can see that in the date of birth field there are two problems. One is that the model seems to be getting confused between the letter B and the numeral eight. So it should be D-O-B. And second, and more importantly, as highlighted in this comment, we should reevaluate the training data itself, because it seems like whoever labeled the data also included this text. The bounding box should be tighter, and it should only include the data that we want to extract, which basically starts from this point, 0831. Similarly, we can add a new test. So if I click on Upload Image and, let's say, I upload this license and run this test, then it's basically firing up the model that we have. And once the model is run, we get these results. We can maybe say that it's not satisfactory because, let's see, the license number is not exactly correctly extracted.
There is this extra information here which shouldn't really be part of our extracted value. So the license number is not correctly extracted. Once we save this, it becomes part of the list of tests that we have here. So in total, we have 14 tests here. This is basically what we mean by a workbench. This is, again, a very specific workbench for a computer vision project. Let us now get to why we think this is a good solution. So one of the significant technical challenges with OpenCV windows that we discussed earlier comes from operating systems. And Peter Wang, the CEO and co-founder of Anaconda, addressed this problem more broadly for any Python project. He claimed that browsers have won the OS wars, since they offer much more reliable support for running software. Specifically, for our problem of displaying media, we know that browsers provide far superior and more stable support for all input-output devices such as keyboard, mouse and camera. So you don't really see this kind of issue. You don't really see your web developer friends, for example, complaining about an issue like this, right? And this statement was basically the inspiration behind this project. But the next question is: how do we actually utilize the browser using Python? That is where the Atri framework comes into the picture. It is a full-stack web development framework, and we can use it to create multi-page applications, a computer vision workbench being one example, whose front end is built visually and whose back end is written in Python, as I mentioned up front. To put it in the right context, you can think of other comparable frameworks such as Next.js and, in the Python world, Django. This framework abstracts away many time-consuming and difficult tasks. For example, the request-response model is provided by default.
So in other frameworks, for example, back-end teams have to spend a lot of time creating, documenting and updating the REST API to share with their front-end team, which is significant overhead for both back-end and front-end teams. But when you use a framework like ours, and this is our framework's unique value proposition, it enforces an object model that ensures you get many other benefits, such as security. One common scenario that people in engineering need to worry about is: what if the back-end team mistakenly sends sensitive user details to the front end? In our framework there are prebuilt guardrails that prevent sending any information that is not part of the object model, and as a result there is a single source of truth. There are other benefits as well, such as the front end being really lightweight and superior in performance, and so on. The other thing to note is that it's free and open source, and if we check its GitHub repository we can get a bit more of an idea about exactly what we can achieve using this framework. I talked about front-end and back-end development, but another important thing to note is that there's a lot of deployment support as well. There are command line tools that help you do one-click deployment to your platform of choice, such as GitHub Pages, AWS, et cetera. Again, I used the computer vision workbench as an example, but it's a general purpose framework that can be used to build everything from ecommerce websites to internal applications. This is a really cool personal blog that I have been creating using the framework, and I have a really long name, so I also added a logo for myself. So this is the blog, with a minimal theme all around. When you click on it, you get this: it's dummy text for now, but you get a minimalistic design for a personal blog, everything built.
The front end is built through the visual editor, the back end is written in Python, and deployment is also done on GitHub Pages, as you can see by the link here, using the Atri CLI. You can also think of smaller internal applications like this one. This is something that researchers at Georgia Tech are using. Basically, you are reviewing a large CSV which has 3.2 million rows. So you can select specific rows, mark them, and then add comments and basically say that, you know what, this row definitely looks right, and someone else can maybe work on it, and once it's done, they can mark it as resolved. So it's a range of examples, but I hope it gives you some sense of what we are building. I think we have some time left, so I can also show one quick example of how to use this workbench. So every application that's built, especially all community applications that are built using the Atri framework, is also added to these repositories here at Atri-Apps. And if you want to, say, just use this personal blog, then you can go here and clone it. If you need to make any changes, you can make them, but otherwise it's good to go. All the setup-related steps are mentioned here already. So, coming back to our previous example of the OCR workbench, let's discuss just one customization that maybe we need at our end, which is that we want to add one more field here. It's currently detecting and displaying five fields. Maybe we want to extract one more field, which is the gender of this person, through the same OCR model that we have built. To do that, we'll have to make just slight changes in our front end and then feed that data to our front-end component through our Python back end. So let me show you how we can do that. So this is how the editor looks. There are different pages here. So this was the home page.
This is the view test page; the new test page, which gets invoked when someone clicks on this button; the new test result page, when someone clicks the Run Test button; and the login page. So if I want to extract the new field and display it in this OCR workbench, all I have to do is go here, select this field, copy it and paste it. I have to rename this. Let's rename this to gender. And this text box is called textbox 221. You can see that all of these text boxes have a specific name, and it's important to pay attention to this name because it serves as the alias between your front end and back end. The name that you give here serves as the variable name in the back end as well. So even if you don't rename it, it's helpful to know what it is so that you can make modifications. As I made the modification, I also published it. So now if I refresh here, it's part of the front end. But now I have to provide some data here, right? This front end is basically just displaying all these different text boxes with placeholder data of Janet. So to do that, what I have to do is go to my code editor. I hope you can see my Visual Studio Code screen. What is happening here is that everything that we do in the front end creates the front-end code automatically for us in our repository. But to make any change to the back end, we only need to be concerned about this folder, controllers. And within controllers, there is this folder called routes, and it shows all the pages that we have in our application. So if I want to make any change (in this specific example, I want to add a new field to the extracted text), I will go to main.py. Again, you don't really have to worry about any of these Python files, just go to main.py. If you look at what is happening in this file... okay, maybe let's start from the beginning. There are three functions that are provided by default: init_state, handle_page_request and handle_event.
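As a rough sketch, the controller pattern just described (three default functions, with the model firing only on the Run Test click) looks something like the following. Note that the function names come from the talk, but the signatures, the event dictionary, and the state handling here are hypothetical placeholders, not the real Atri API:

```python
# Hypothetical sketch of the controller pattern described in the talk.
# The real framework generates these files; the signatures and the event
# shape below are illustrative assumptions, not the actual Atri API.

def init_state(state):
    """Defaults the app should have before any event fires."""
    state["extracted_fields"] = {}
    return state

def handle_page_request(state, request):
    """Runs on every page (re)load; nothing extra needed in this sketch."""
    return state

def handle_event(state, event):
    """Only run the OCR model when the Run Test button is clicked."""
    if event["component"] == "upload_1" and event["type"] == "change":
        # Stash the uploaded files so the model can read them later.
        state["uploaded_files"] = event.get("files", [])
    elif event["component"] == "run_test_button" and event["type"] == "click":
        # Stand-in for the real run_driving_detection_model call.
        state["extracted_fields"] = {"name": "Janet", "dob": "08/31/1990"}
    return state
```

The point of the dispatch is that uploading an image and running the model are separate events, so a slow model never blocks a simple file upload.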
init_state is basically where anything that you want your application to have by default, and to manipulate through the back end, goes. handle_page_request is for when you want the page to reload. But handle_event is the one we are most concerned about right now, because we only want the model to run when the Run Test button is clicked. In order to do that, all we have to do is: here it's saying that if there is a change in the upload button, then take these files and do something. But this is the most interesting one: if the Run Test button is clicked, then you have to do something. And what you have to do is come here and run this model, which is called run_driving_detection_model, and extract all the output fields. There is a simple regex happening here: if the name field exists in this query, in this dictionary, then extract that information, otherwise display NA. We're basically checking whether the model detected this field for this image or not. So consequently, if we want to add a new field, which is gender, then all we have to do is copy this and paste it here. Never forget about indentation in Python. Let's change these fields. So if a field named sex is provided by our OCR model, then do this regex operation. If you do not find that field, then just display NA, and add it to the query as well, just as all the other fields have been added. So let's rename it again and save it. So now when our Run Test button is clicked, a new field will be extracted. But we also want this information to be displayed in the new test result page that opens. So again, if we go here, all we need to be concerned about is the main.py file, and we'll just look at how we are doing it for the existing fields; we'll just have to copy this, change it to a new variable, and do the same thing here.
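The per-field logic just described (look the field up in the model's output dictionary, optionally clean it with a regex, fall back to a placeholder when the model didn't detect it) can be sketched in plain Python. The field names, the "NA" placeholder, and the helper names here are illustrative assumptions, not the exact code from the workbench:

```python
import re

# Illustrative sketch of the field-extraction step described above.
# The OCR model is assumed to return a dict of detected label -> raw text;
# the field list and the "NA" fallback are placeholders, not the real code.

def extract_field(model_output, field_name, pattern=None):
    """Return the cleaned value for one field, or "NA" if not detected."""
    if field_name not in model_output:
        return "NA"
    value = model_output[field_name]
    if pattern:
        # Optional regex cleanup, e.g. keep only the date digits.
        match = re.search(pattern, value)
        value = match.group(0) if match else "NA"
    return value

def build_query(model_output):
    """Collect every field the result page displays; adding the new
    'sex' field is just one more entry in this list."""
    query = {}
    for field in ["name", "license_number", "dob", "expiry_date", "address", "sex"]:
        query[field] = extract_field(model_output, field)
    return query
```

For example, `build_query({"name": "Janet"})["sex"]` comes back as `"NA"`, which is the "model didn't detect this field" case the speaker mentions.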
So now basically what happened here is that we had the query, we extracted the sex field, and all we have to do now is provide that variable to the text box that we created in the visual editor. So we'll have to copy this, and that's it. Now we have passed that variable to the text box via its variable name. Any custom property that needs to be changed can always be changed by following the same pattern: at, dot, variable name, dot, custom, dot, whatever. If you want to change the text, you write text here. If you want to change, let's say, the color, then you follow the similar pattern. And there is really cool IntelliSense built in as well. So if I write at dot, then it's showing me all the front-end components that exist in my application. So I just need to know that I have to pass this information to, say, button 25, and then the same thing here: button 25 dot custom. Maybe I want to change the style, so I'll do styles dot, maybe change the background color, and define the color, whatever the color is. IntelliSense is also something that's really important for any library in general. You don't have to keep flipping through the documentation to actually do anything meaningful. So I saved it here. Now let's go back and refresh. Let's add a new test and see if it was able to detect the new field or not. It's running. It will take a while. Yeah. So now we have this field here, right? We were not getting this field earlier, but now this field has been added. And just to recap, what we did was we added a new text box, and we told our back end to send whatever the model is returning here. In this particular case, the model didn't actually detect the data accurately, because you can see that the bounding box was this wide and this long, so this field itself was part of its extracted text.
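The at-dot-component-dot-custom pattern described above can be imitated in a few lines of plain Python, which may make it clearer why the component name works as the alias between the visual editor and the back end. To be clear, this is a toy stand-in built for illustration; the real Atri objects are generated by the framework and surely work differently:

```python
# Toy re-creation of the at.<component>.custom.<prop> pattern described
# above. This is NOT the real Atri implementation; it only illustrates
# how an attribute path can act as the front-end/back-end alias.

class Props:
    """Bag of properties set via attribute access (custom, styles)."""
    def __init__(self):
        self._values = {}
    def __setattr__(self, name, value):
        if name == "_values":
            super().__setattr__(name, value)
        else:
            self._values[name] = value   # e.g. .text = "M"
    def __getattr__(self, name):
        return self._values.get(name)

class Component:
    def __init__(self):
        self.custom = Props()   # e.g. at.textbox_221.custom.text
        self.styles = Props()   # e.g. at.button_25.styles.background_color

class At:
    """Lazily creates a component per name, so at.<anything> works."""
    def __getattr__(self, name):
        comp = Component()
        self.__dict__[name] = comp   # cache: same name -> same component
        return comp

at = At()
at.textbox_221.custom.text = "M"                  # show the extracted gender
at.button_25.styles.background_color = "#ff0000"  # restyle a button the same way
```

Because every component hangs off one `at` object, an editor can enumerate its attributes, which is roughly what makes the IntelliSense experience the speaker demos possible.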
But we can say here that, you know what, this extraction wasn't really correct and we should keep working on our model, save this, and we're done. That's it. There are many other modifications that can be done depending on your use case. You can think about that and similarly do it through Python or through the front end, depending on what your exact use case is. The suggestions we had, for example, were adding new reaction categories, maybe adding email notifications so everyone is aware of what is happening, and maybe trying it with a different model altogether. So instead of a driving license OCR model, you use, let's say, a passport OCR model. That is all for the demo, but I hope you found this talk helpful. The key takeaway I hope you have from this talk is that it motivates you to consider using a testing workbench, but most importantly, that it motivates you to use Python to create powerful production-grade websites and applications. Skills that are typically associated with a web developer are now accessible to a Python developer as well. So that's a pretty cool thing, I guess, for the ecosystem in general. In terms of what's next: definitely, you can install it and get started. I mentioned it's an open source project. You have all the steps related to getting started in our README and in our documentation as well. You can showcase your application, and you can report to us any bugs or ask any questions that you have while using it. There are a lot of Python utility functions that we are building, some small and some large scale. An example is how you can use Python to remove the background from an image. This basically involves using deep learning, and it's a really cool utility function that has been contributed to our repository by an open source contributor. So that is one of the avenues that you can think of when it comes to open source contribution to a framework like this in general.
And if you think that it's helpful, then you can support us by giving us a star on GitHub. This is our GitHub repository. I will also share all the different links that I used: the link to our GitHub repository, the link to the testing workbench. You can clone it at your end, try it out, and see whether you'd like to modify it in a certain way or not. All the apps that we are adding are available here. The Python utilities that I mentioned are available at this link. And lastly, our documentation: it's an ever-growing compendium of all the how-to guides, tutorials and so on. And with that, I will end my talk. Thank you for listening. I'm available, and you can find me on social media at these handles. Thank you. Thank you so much, Darshita. That was an amazing presentation. All the information is like, oh my gosh, there's more to learn about. So that was really great. Also, if you have any questions, I believe you can post them in the Q&A tab at the bottom for Darshita. I can just start off with a general first question: what's one of the things you really find great about being a founder? And what's one of the things where you're still on a learning curve? Yeah, I mean, that's really interesting. I could fill up the 50 minutes, and maybe a separate discussion on that, as I'm sure you would imagine. But basically, I briefly talked about this in my introduction, that I never really planned to become a founder. There are many friends of mine who, from their college days, had it on their radar: maybe not immediately, but something that they would think about five years down the line, ten years down the line, and so on. I was always more academically inclined, so I never really thought that this would be a path for me. But I think the biggest learning that I have had as a founder is that being a founder, creating a startup, is much more difficult than whatever I thought it would be, but in a very different way.
A better way to say it is that you don't have a brand name associated with a company. Like, when I was working in quant finance, you always had a legitimacy associated with whatever you were doing, because you're part of a bigger organization and you have all these resources at your disposal, right? From HR to people in your team who can help you and guide you once you get stuck. But as a founder, all these varied tasks and all this growth have to be done by you. You're responsible for it, and you have to do it really quickly, because you can't spend five years, for example, learning how to become a better manager. The day you decide that you're going to be a founder, you have to start learning. You have to get good at becoming a good manager and handling your team well. So I think the biggest learning for me has been, first, that accelerated development across all areas. And then secondly, in quant finance I was doing mostly machine learning, data science kind of work. But as a founder, I have to do everything, because even if you're not doing the hands-on-keyboard stuff, you're responsible for everything. You kind of have to have an opinion. You have to learn about the design. You also have to learn about your HR policies. You have to learn about the culture, which is great, but it's also very nerve-wracking because it feels like there is too much to do and it's all happening too fast. Yeah. Keeps it interesting. It's a lot to learn. Yeah, it's never not done, that's for sure. It's just that sometimes it becomes overwhelming, I guess, because there's so much to do, which is exciting on one day but overwhelming on the other. Yeah, it comes and goes. So, yeah, it's great that you're passionate about it. Okay, let's switch back to the workshop topic. So for computer vision, in your opinion, what will it look like in the future versus now?
Do you see any areas of strength, or just maybe interesting areas that it could tap into? I'm just rambling. You mean like projects in computer vision? Yeah, for projects. One thing that I definitely see: five years ago, there was a lot of work required in just setting up, for example, TensorFlow on your system. Even installation used to be really difficult, and there were not that many resources. But now, for example, there are full-scale models available that you can use to get started. So, for example, if you want to do passport detection, then there is a library that will do everything for you, right? You just have to supply it an image, and it will do everything for you and give you an output. So it's become much more productionized now than it was, say, five years ago, when there were fewer people doing it and, as a result, fewer resources to help you if you faced an issue. In terms of what I see going forward, I'm not exactly in machine learning, okay? Like, I'm not building a startup in the machine learning space precisely, but I definitely read up a lot about what AI leaders are talking about, and the common theme that I've heard is that there's a lot more focus on data now than on models. So, like five years ago, we were spending a lot of time on creating these pipelines so that, as I mentioned, anyone can do passport OCR. But now the focus is more on how you can do more with less data. So a really exciting line of research is few-shot learning and one-shot learning, which is the idea that you don't need thousands of data points to train your model before you can get some value from machine learning. How you can make do with that level of output and that level of utility with, say, only ten images, and in the case of one-shot learning with one image. It's pretty cool.
It's interesting, because the other side of the conversation is like, oh, how often do you update your model? Is your data up to date? Obviously, when we're first creating the models, there are some biases. Right now, like, I know even with pictures, there's something called the deepfake. So you can make an imaginary person, and as a human, you can kind of see the uncanny valley, but to a computer, they're like, yeah, okay, it passes the test. It's always something new. Yeah. And within model development, there are a lot of avenues around bias and fairness that are active areas of research. Totally. Another workshop topic? Yeah, definitely. If you want to do another workshop topic on these same biases, you're totally welcome to come back for that conversation. But the last question, because I haven't seen anything on it: say I'm working on my own computer vision project, something in health care where you're scanning cells, or search and rescue with satellite data. If I want to start using more Python in my workflow, what would you recommend I look at to try to build up my internal resource library kit for computer vision specifically? Yeah, or for the core work with Python: I'm working on a computer vision project, but I don't know where to start looking for similar Python projects or tools. What would you recommend? Right, so I'm planning to write a series on how you can create a data science portfolio or a machine learning portfolio. And that's hopefully going to be a compendium of how you can pick projects and also how you can do more than just putting them up on your GitHub. Because GitHub is great, but again, it's one of those things that was meant to help you with version control around your code. It was not exactly meant to showcase how cool the code is. Yeah, it now is, right? That's great. It is now, but, like, originally... Yes. So that is one thing.
The other thing, on picking projects: there are tons of resources on computer vision specifically, because I feel these examples relate to anyone. I used to work a lot on the language side, and NLP is a bit less understood, even by the general public, than computer vision. For example, I can explain to my dad that if you walk through traffic, there is a model that will detect your face: a facial detection or facial recognition model, and so on. There are a lot of good libraries. In this project that I just talked about, the YOLOv5 model is great for this kind of work. And we are using PassportEye for detecting information from a passport image. If you are familiar with these libraries, their tutorials themselves give you an idea of the different projects you can pick where you can use them. So that's one way of thinking about how to pick a project. Okay, yeah. Whether you just want to build a portfolio as a student, or you're trying to pivot careers, or you're doing this at work and Python makes it look like it might be a bit easier, it helps just knowing where to look. Obviously everyone says GitHub and knowing the search keywords, but the ones you mentioned sound really helpful too. What's your background, by the way? What are the things you're working on right now? For me, it's a pretty diverse background: I've worked with drones, with bank data, survey data. Right now I'm doing ecommerce marketing analytics. I originally got into Python because I was doing too much Excel at work and thought, this is not efficient. I had also learned Java and C in school, so I understand doing it the long way, and Python felt like a really convenient shortcut. It's good to have that foundational knowledge. Okay. Yeah.
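As a rough illustration of the "supply an image, get an output" workflow the speaker describes, here is a hedged sketch of using a pretrained YOLOv5 model. The hub-loading lines are shown as comments because they require `torch`, network access, and a real image file ("street.jpg" is a placeholder); the helper `filter_detections` is a hypothetical pure-Python function for post-processing, not part of YOLOv5's API.

```python
# Sketch: running a pretrained YOLOv5 detector via torch.hub
# (requires torch and internet access, so shown as comments):
#
#   import torch
#   model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
#   detections = model("street.jpg").pandas().xyxy[0].to_dict("records")
#
# Each record then looks like {"name": ..., "confidence": ..., "xmin": ...}.
# A small helper to keep only the detections you care about:

def filter_detections(records, min_conf=0.5, wanted=("person", "car")):
    """Keep detections of interest above a confidence threshold."""
    return [r for r in records
            if r["name"] in wanted and r["confidence"] >= min_conf]

sample = [
    {"name": "person", "confidence": 0.91},
    {"name": "car", "confidence": 0.40},
    {"name": "dog", "confidence": 0.88},
]
print(filter_detections(sample))  # keeps only the confident "person"
```

The point is how little glue code a pretrained model needs: one call to load it, one call to run it on an image, and a few lines of filtering on the results.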
That's the kind of foundation people were thinking about and learning back then. But yeah, Python makes life a lot easier. Yeah. In terms of tech journey, yours is kind of similar to mine: I learned Java in school, C and MATLAB in engineering, and then used Python specifically at work. Many of my managers, for example, who had joined my firm after doing a PhD, were more MATLAB people, but everyone got converted to Python. If you've gone through that journey of knowing Java, Python is like, oh, this is so easy, such a relief. And it's good: you learn the foundations and the technical documentation, and you can grouse with other people who have been doing it much longer than you have. I used to be a director with Women Who Code before joining as a Leadership Fellow, so I really appreciate this community. If you're the only woman in class, it's a bit intimidating, so it helps having these groups to work together, ask questions, and have a lot of fun. Right? You're always learning: continuous learning, growth. Yeah. And I see we have a Q&A question, I think about ecommerce marketing analytics. I should save that for a different topic, but I appreciate the question. For tips to get started doing analytics in marketing: it's about finding a question that needs to be answered, then looking at the existing data you have and learning to make charts to understand the business questions. In addition to working with Excel, working with Python is great. Yeah, that's a very simplified version. Another person asks: will the recording be shared? Yes. We will be editing the video file after this finishes, and it will be posted on the Women Who Code YouTube channel. Stephanie, can you confirm it goes to the Women Who Code Python playlist? And there's another question.
Someone is asking: is Java worth learning for web development? Would you be able to answer that? Yes, partially. Java was really the darling maybe two or three years ago, but now there are lots of frameworks in other languages, and there's simply been much more development in other languages' ecosystems than around Java. So I wouldn't say it's not worth learning; it's just important to know that there are options other than Java you can consider if you want to be in web development. And another question is asking for the specific YouTube channel playlist. I'll definitely confirm, but it'll be on the Women Who Code channel, and I think they do have a Women Who Code Python playlist. Regardless, it will be posted, and you can also find it through our links at beacons.ai/wwcodepython. I think that's everything. It was a really great event. Thank you, Darshita. I really enjoyed listening to your talk and going through the walkthrough; it's convenient that the code is much bigger on the screen you're on, so you can actually read it instead of peering into a fishbowl. Thank you so much for hosting me; this was no problem at all. Thank you so much for joining today, and I hope everyone has a great weekend. Thanks for joining us. Bye-bye. Thank you. Bye.