ML-Jam: Performing Structured Improvisations with Pre-trained Models
This paper, published at the International Conference on Computational Creativity (ICCC) 2019, explores using pre-trained musical generative models in a collaborative, real-time improvisation setting. You can read more details about it in this blog post, play with it in this web app, or browse the code here.
Demos
Demo video playing with the web app:
Demo video jamming over Herbie Hancock's Chameleon:
I created this website as an experiment to learn p5.js. It creates programmatic "music" based on the interaction of the fish you create. You create fish by clicking anywhere on the screen. The x-axis of the position where you click determines the pitch (taken from the D minor pentatonic scale), while the duration of the click determines the size and the speed of the fish. Whenever the fish bump into each other they "sing" and move away.
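The x-to-pitch mapping described above can be sketched as follows. This is a hedged illustration of quantizing a horizontal click position to a note of the D minor pentatonic scale; the function name and the two-octave range are my own assumptions, not the site's actual code (which is in p5.js).

```python
# D minor pentatonic as semitone offsets from the root D: D, F, G, A, C.
D_MINOR_PENTATONIC = [0, 3, 5, 7, 10]
ROOT_MIDI = 62  # D4

def pitch_for_x(x, width, octaves=2):
    """Quantize a horizontal position (0..width) to a pentatonic MIDI note.

    Illustrative sketch: divide the canvas into equal bands, one per
    scale degree across `octaves` octaves, and return the band's note.
    """
    steps = len(D_MINOR_PENTATONIC) * octaves
    i = min(int(x / width * steps), steps - 1)
    octave, degree = divmod(i, len(D_MINOR_PENTATONIC))
    return ROOT_MIDI + 12 * octave + D_MINOR_PENTATONIC[degree]
```

A click at the far left would yield the root D, and clicks further right step up through the scale.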
Dopamine: A framework for flexible value-based reinforcement learning research
Dopamine is a framework for flexible, value-based reinforcement learning research. It was originally written in TensorFlow, but all agents have since been reimplemented in JAX. You can read more about it on our GitHub page, in our white paper, and in the original Google AI blog post. We also have a website where you can easily compare the performance of all the Dopamine agents, which I find really useful, as well as a set of Colaboratory notebooks that really help in understanding the framework.
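To give a sense of what "value-based" means here: these agents learn an action-value function Q(s, a) and act greedily with respect to it. The sketch below is generic tabular Q-learning for illustration only; it is not Dopamine's actual API, and all names in it are my own.

```python
from collections import defaultdict

def q_learning_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One temporal-difference step toward r + gamma * max_a' Q(s', a').

    q is a table mapping (state, action) pairs to estimated values;
    alpha is the learning rate and gamma the discount factor.
    """
    best_next = max(q[(s_next, a2)] for a2 in actions)
    td_target = r + gamma * best_next
    q[(s, a)] += alpha * (td_target - q[(s, a)])
    return q[(s, a)]

# One update from a reward of 1.0 moves the estimate a fraction alpha closer.
q = defaultdict(float)
q_learning_update(q, "s0", "left", 1.0, "s1", ["left", "right"])
```

Dopamine's agents (DQN, Rainbow, and others) apply this same value-learning idea with neural-network function approximators instead of a table.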
JiDiJi: An Experiment in Musical Representation
I made this website to convert between music and colours. Read below for details!
About
Music can be represented in various forms: as a series of sounds, as a score, as tabs (for guitar), as a series of chord names, as MIDI, as a NoteSequence protocol buffer, and more. I recently stumbled upon the Solresol language, as well as this talk by Adam Neely, and realized that you can also represent music as colours.
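The core idea can be sketched as a lookup: Solresol associates its seven solfège syllables with the seven rainbow colours. The sketch below assumes a C major reference scale and a do=red … si=violet assignment; both the mapping details and the function names are illustrative, not necessarily what the site does.

```python
# Solresol-style colour assignment: one rainbow colour per syllable.
SOLRESOL_COLOURS = {
    "do": "red", "re": "orange", "mi": "yellow", "fa": "green",
    "sol": "blue", "la": "indigo", "si": "violet",
}

SYLLABLES = ["do", "re", "mi", "fa", "sol", "la", "si"]
# Semitone offsets of the C major scale degrees from C.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def colour_for_midi(midi_note):
    """Return the colour for a MIDI note that lies on the C major scale."""
    pitch_class = midi_note % 12
    if pitch_class not in MAJOR_SCALE:
        return None  # chromatic notes fall outside the seven-syllable mapping
    return SOLRESOL_COLOURS[SYLLABLES[MAJOR_SCALE.index(pitch_class)]]
```

Going the other way (colour to note) is just the inverse lookup, which is what makes a two-way music/colour converter possible.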