
Artificial Dance Music

The development of artificial intelligence (AI) methods like machine learning (ML) and deep learning (DL) has created new opportunities and tools in many areas, such as self-driving cars, voice recognition and image classification. But can this new and improved AI be used to create popular electronic music? And how?
This project explores the possibilities of using AI to make Electronic Dance Music (EDM). Artificial Dance Music (ADM) is music created by AI and, in particular, by DL, which is trained on EDM to learn from it and then generate new music in a symbolic representation. In this study, a Turing test is used to evaluate the generated music, with participants listening to both algorithmically generated music and the original samples. A statistical evaluation of the Turing test results shows no significant difference between machine-generated and human-produced EDM. The goal is to investigate to what extent a machine can create original music and whether it can be used as a music-producing tool in the future.
The research focuses on the possibilities of computer creativity, what this means and how to evaluate it. Similar approaches have been explored before, but mostly in other genres such as jazz, classical and contemporary music. One reason for this is that a genre like Baroque classical music contains large collections of pieces written by the same composer, which makes it ideal for AI research. The hypothesis of this thesis is that an AI model can successfully create music from a small collection of songs if specific aspects are considered in the system implementation. The key contribution of this thesis is a method for using AI to create popular music, which offers a unique approach in the field of algorithmic music research.
How it's done
The music is created by using MIDI as data and a deep learning model created by Florian Colombo and Wulfram Gerstner called BachProp (you can read more about it here).
BachProp is a neural composer algorithm designed to create new music scores in any style without the need for manual preprocessing of the MIDI data. It has a MIDI normalization method that removes unnecessary information and a rhythm normalization feature that maps note durations and timings onto a system of note values.
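The rhythm normalization can be thought of as snapping each raw note duration to the nearest standard note value. The Python sketch below is only an illustration of that idea, assuming durations measured in beats (quarter note = 1.0); it is not BachProp's actual code.

# Illustration of rhythm normalization: snap raw note durations (in beats)
# to the nearest standard note value. Not BachProp's actual code.
NOTE_VALUES = {
    "sixteenth": 0.25,
    "eighth": 0.5,
    "dotted eighth": 0.75,
    "quarter": 1.0,
    "dotted quarter": 1.5,
    "half": 2.0,
    "whole": 4.0,
}

def normalize_duration(duration_in_beats):
    """Return the note value whose length is closest to the raw duration."""
    return min(NOTE_VALUES.items(),
               key=lambda item: abs(item[1] - duration_in_beats))

# A slightly sloppy 0.53-beat note is treated as an eighth note.
print(normalize_duration(0.53))  # ('eighth', 0.5)
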
In order to use BachProp, you need a Python environment with Keras and TensorFlow, and a collection of MIDI songs to use as training data. I recommend training on a cloud-hosted computer running Linux.
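Before starting a long training run, it can be worth confirming that the deep learning libraries are installed and that the cloud host actually exposes a GPU. The check below is a small helper of my own (assuming TensorFlow 2.x), not part of BachProp.

# Small environment sanity check (my own helper, assumes TensorFlow 2.x).
import tensorflow as tf
import keras

print("TensorFlow version:", tf.__version__)
print("Keras version:", keras.__version__)
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
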


Implementing the model

After spending countless hours getting the model to work in a Linux environment, I created an installation script that makes implementing the model faster every time I train on a new cloud host.
The script is uploaded to GitHub and can be found here.
To run the script, simply enter ./implement.sh with the right flags.
Example:
./start.sh -e 600 -t "0.4" -f "http://ftp.com/exp.rar" -l "ftp://myftpserevr.com" -u "ftpusername" -p "ftppassword" -d "/ftpfolder" -b "Experiment number"
Each flag is linked to a variable:

-e = epoch number. Range "100-1000". Default is 500.
Sets the number of epochs to train for.

-t = temperature. Range "0.1-1". Default is 0.5.
Sets the temperature for the MIDI generation. A high temperature means that the generated MIDI files will be more experimental, but in this context it generally means more noise and less house music. A low temperature (under 0.5) will make the model generate output that is closer to the original, but can lead to overfitting (see the sketch after this list).

-f = file
The URL of the collection of MIDI data. In this case it is set to use a rar-compressed file, but it can easily be changed to zip.

-l = ftpurl
Sets the URL of the FTP server to upload the generated MIDI files to.

-u = ftp username
Username for the FTP host used to upload the files.

-p = ftp password
Password for the FTP host used to upload the files.

-d = ftp folder
Location of the folder to upload the files to.

-b = experiment number
The experiment number, which will be the name of the .rar file uploaded when the training is finished.
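To make the -t flag concrete: temperature rescales the model's output probabilities before each note is sampled, so low values stay close to the most likely notes while high values flatten the distribution towards noisier, more experimental choices. The snippet below is a generic Python illustration of temperature sampling, not code taken from BachProp or the installation script.

# Generic illustration of temperature sampling, not BachProp's actual code.
import numpy as np

def sample_with_temperature(probabilities, temperature=0.5):
    """Re-weight a probability distribution over notes and sample one index."""
    probs = np.asarray(probabilities, dtype=np.float64)
    logits = np.log(probs + 1e-9) / temperature
    rescaled = np.exp(logits) / np.sum(np.exp(logits))
    return np.random.choice(len(rescaled), p=rescaled)

# Suppose the model assigns these probabilities to four candidate notes.
note_probs = [0.6, 0.25, 0.1, 0.05]
print(sample_with_temperature(note_probs, temperature=0.1))  # almost always note 0
print(sample_with_temperature(note_probs, temperature=1.0))  # more experimental picks
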
Transcribing MIDI
For the training data, I transcribed 21 songs by Deadmau5, which can be found here.
I only transcribed the "drop" and reduced each song to one MIDI file consisting of an arrangement of three parts:
Lead Synthesizer
MIDI octave range: C6 – C8
Arp Synthesizer
MIDI octave range: C2 – C5
Bass Synthesizer
MIDI octave range: C0 – C1
I also left the drums out of the transcription, because this makes the AI focus more on the harmony.
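To show how the three parts relate to pitch, the sketch below sorts the notes of a transcribed MIDI file into bass, arp and lead tracks based on the octave ranges above, using the pretty_midi library and the convention that C4 is MIDI note 60. The file names are hypothetical, and this is not the exact workflow used in the DAW.

# Hypothetical sketch: split a transcription into bass/arp/lead tracks by pitch
# range, using the pretty_midi library. Not the exact DAW workflow used here.
import pretty_midi

def c(octave):
    """MIDI note number of C in the given octave (convention: C4 = 60)."""
    return 12 * (octave + 1)

RANGES = {
    "Bass Synthesizer": (c(0), c(1)),   # C0 - C1
    "Arp Synthesizer":  (c(2), c(5)),   # C2 - C5
    "Lead Synthesizer": (c(6), c(8)),   # C6 - C8
}

def split_by_range(in_path, out_path):
    source = pretty_midi.PrettyMIDI(in_path)
    result = pretty_midi.PrettyMIDI()
    for name, (low, high) in RANGES.items():
        part = pretty_midi.Instrument(program=81, name=name)  # GM synth lead patch
        for instrument in source.instruments:
            part.notes += [n for n in instrument.notes if low <= n.pitch <= high]
        result.instruments.append(part)
    result.write(out_path)

split_by_range("deadmau5_drop.mid", "deadmau5_drop_split.mid")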

Generating the music for the Turing test
To generate the music, the transcribed MIDI samples from Deadmau5 were used as training data for the AI model via the implementation script. After the training was done, the AI-generated MIDI was imported into a DAW along with the original MIDI from Deadmau5, and 10 AI samples matching the quality of the original samples were selected. Each sample was then processed through the same DAW with the same settings, so that the AI samples and the originals sound exactly the same.

Generating the music for the ARY vs AI documentary
In autumn 2021, the methods from the ADM project were used to create AI music on behalf of the Research Council of Norway. The task was to create AI music based on the music of the Norwegian pop artist ARY, to be featured in a documentary. Methods similar to the Deadmau5 approach were used to create the music, but in this context, lyrics were also generated by using a transfer learning approach on the GPT-2 model.
The Mini-documentary can be seen here.
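For the lyrics, transfer learning on GPT-2 essentially means continuing the training of the pretrained model on a small corpus of lyrics and then sampling new text from it. The sketch below is a generic example using the Hugging Face transformers library, with a hypothetical lyrics.txt corpus; it is not the exact pipeline used for the ARY project.

# Generic sketch of transfer learning on GPT-2 for lyrics with Hugging Face
# transformers. "lyrics.txt" is a hypothetical plain-text corpus; this is not
# the exact pipeline used for the ARY documentary.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Wrap the lyrics corpus as a causal language-modelling dataset.
dataset = TextDataset(tokenizer=tokenizer, file_path="lyrics.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-lyrics", num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# Sample new lyrics from a short prompt.
prompt = tokenizer.encode("I keep on dancing", return_tensors="pt")
output = model.generate(prompt, max_length=60, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))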
