ng-conf is a three-day Angular conference focused on delivering the highest quality training in the Angular JavaScript framework. 1500+ developers from across the globe converge on Salt Lake City, UT every year to attend talks and workshops by the Angular team and community experts.
Follow us on Twitter: https://twitter.com/ngconf Official Website: https://www.ng-conf.org/
I'm Laura, a front-end developer, and welcome to my talk about music programming using machine learning in Angular. A lot of buzzwords, right? But let's start with the term that is probably the least known among developers: music programming. 1843 — this is the year when the idea of music programming was introduced. As you can imagine, we did not have computers as we understand them today. But despite that, Ada Lovelace described in her notes that computing engines, which were mainly meant for math calculations, might be a perfect tool to create precise and scientific music. No surprise, then, that nowadays we have different ways to actually create music with code, and today I will cover music programming in front-end applications by getting some help from machine learning.

Magenta. Magenta is an open source research project exploring the role of machine learning as a tool in the creative process, and it covers not only music but art in general. A subset of that is Magenta.js, which provides a JavaScript API to use the models in browsers. I will show you a quick demo of how to use those models for music in an application.

Let's start with the result. This is the application where I'm using Magenta in three different ways. In the first one, I am just using it to play some predefined notes. I want to be clear that I'm not providing an audio track here; I'm literally providing notes which, thanks to the Web Audio API, the browser can turn into sound. But I'm not using any models yet. So let's look into the second approach, where I use an actual neural network model in order to continue the melody we just heard. Basically, here I am using a Magenta model, I'm providing the original melody as a baseline, and according to a few options, I'm getting back a result. Whenever I click the button, I request a new result, which is why I always get something new. The last approach uses another type of model which just generates a totally new melody from the model itself. So now we are relying entirely on the model, and again, every time I click, I get something new.

OK, let's look into the code now. Basically, here I have an Angular application where I installed Magenta from npm, and this is one of the components. To start, during initialization I need to create some sort of instrument which will play the melodies, whether we are using models or not. So, for example, here I am creating a player. Then later on, when I am playing the original melody, I'm just starting some predefined notes. The jump song is just predefined notes. Notes can be described in different formats, but this is one of them: this is how I describe the jump song, by setting the pitch, start time, and end time. This is how we can describe different notes, and at this point we are just playing those predefined notes.

But then let's look into the continuation. For this we actually need to use a model, and that's why during initialization we create a new MusicRNN and initialize it by providing a checkpoint; here, for example, I am providing the basic RNN one. Then later on, when I click the button, I first change the format of the notes — I convert them into the quantized format — and then I can simply request the result from the model itself.
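To make those steps concrete, here is a minimal TypeScript sketch of what such a component's logic might look like. This is not the exact code from the demo: the example notes, the checkpoint URL, and the parameter values are assumptions, based on the public @magenta/music API.

import * as mm from '@magenta/music';

// A player turns NoteSequences into sound in the browser (via the Web Audio API).
const player = new mm.Player();

// The predefined "jump song": each note is described by pitch, start time and end time.
// (These particular notes are invented for illustration.)
const jumpSong: mm.INoteSequence = {
  notes: [
    { pitch: 60, startTime: 0.0, endTime: 0.5 },
    { pitch: 64, startTime: 0.5, endTime: 1.0 },
    { pitch: 67, startTime: 1.0, endTime: 1.5 },
  ],
  totalTime: 1.5,
};

// First approach: no model at all, just play the predefined notes.
function playOriginal() {
  player.start(jumpSong);
}

// Second approach: continue the melody with a MusicRNN model,
// initialized from a hosted checkpoint (the basic RNN one here).
const musicRnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

async function continueMelody() {
  await musicRnn.initialize();
  // The model expects quantized input, so convert the melody first.
  const quantized = mm.sequences.quantizeNoteSequence(jumpSong, 4);
  // steps and temperature control the length and randomness of the continuation.
  const continuation = await musicRnn.continueSequence(quantized, 20, 1.1);
  player.start(continuation);
}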
There are also three important options with which we can control the result. One of them is steps, which controls the duration of the result. Then there is temperature: temperature is a randomization weight, which in short means how far the pitch and length of the resulting notes may drift from the original ones. And then steps per quarter, which basically just sets the tempo. So there are a few ways to control the result: those three options, changing the checkpoint, and changing the base melody, which here is the jump song.

The other model is called MusicVAE, a variational autoencoder. It is initialized in a pretty similar way, by providing a checkpoint, but it is used differently: here, for example, I only need to say that I want one sample and what the temperature should be, and that is it. This is how we get a totally new melody without providing any baseline.

And that was a brief introduction to music programming using machine learning in Angular. If you find it interesting, you can check out my other talks and articles. Now I want to leave you with some music, which was created by layering many generated melodies over drums and a bass line added by a human, to show how well Magenta can be incorporated. Thanks a lot for joining me. It was my pleasure to be here.
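For the third approach, generating a melody from scratch might look roughly like the sketch below. Again, this is an assumption rather than the demo's actual code; in particular, the MusicVAE checkpoint name is only one of the publicly hosted melody checkpoints and may differ from the one used in the talk.

import * as mm from '@magenta/music';

// MusicVAE generates new melodies without needing a baseline sequence.
const musicVae = new mm.MusicVAE(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2');

const player = new mm.Player();

async function generateMelody() {
  await musicVae.initialize();
  // Ask for one sample; temperature again controls how random the output is.
  const samples = await musicVae.sample(1, 1.0);
  player.start(samples[0]);
}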