Losses, Dissonances, and Distortions

Exploiting the creative possibilities of the numerical signals obtained during the training of a machine learning model.

I will be presenting this paper at the 5th Machine Learning for Creativity and Design Workshop at NeurIPS 2021.

The code is available here.

You can see an “explainody” video here:

Introduction

In recent years, there has been growing interest in using machine learning models for creative purposes. In most cases this involves large generative models which, as their name implies, can generate high-quality and realistic outputs in music, images, text, and other domains. The standard approach for artistic creation with these models is to take a pre-trained model (or set of models) and use it to produce output. The artist directs the model’s generation by “navigating” the latent space, fine-tuning the trained parameters, or using the model’s output to steer another generative process (e.g. two examples).

Episode 5: Repeats & Loops

The code for this episode is available here.

Loops are such an essential part of programming that I knew I’d have to make an episode on them at some point. A natural musical analogue is musical repeats, so the whole episode came fairly naturally!
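As a toy illustration of the analogy (my own sketch, not the episode’s code), a musical repeat is just an ordinary loop over a phrase:

```python
# A musical repeat expressed as an ordinary for-loop (illustrative only).
phrase = ["C4", "E4", "G4", "E4"]  # a short motif

# "Play the phrase twice" is exactly a loop with two iterations:
performance = []
for _ in range(2):
    performance.extend(phrase)

print(performance)
```

The repeat sign in a score and the `range(2)` above express the same idea: don’t write the phrase out twice, write it once and say how many times to play it.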

I thought it’d be fun to have some beats to accompany the piano, so I used SuperCollider for that. That proved to be the most challenging part of the episode, as getting the timing right was really hard. A big part of the difficulty is that there’s a mechanical latency induced by the piano, since the hammers have to physically strike the strings! In the end I’m pleased enough with the output, although I think I could have done better…
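The timing fix amounts to scheduling the piano’s events slightly early so the hammer strikes land on the beat. Here’s a minimal sketch of that idea, assuming a fixed, pre-measured latency (the constant and event format are hypothetical, not from the actual setup):

```python
# Sketch: compensate for a fixed mechanical latency by shifting events early.
PIANO_LATENCY = 0.35  # seconds from "note sent" to "hammer strikes" (assumed value)

def schedule(events, latency=PIANO_LATENCY):
    """Shift each (time, note) event earlier so it *sounds* on time.
    Events too close to time zero are clamped rather than scheduled in the past."""
    return [(max(0.0, t - latency), note) for t, note in events]

beats = [(0.0, "kick"), (0.5, "snare"), (1.0, "kick")]
print(schedule(beats))
```

In practice the latency would have to be measured (and may not even be constant), which is presumably what made getting it right so hard.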

Tips for Reviewing Research Papers

The NeurIPS 2021 review period is about to begin, and there will likely be lots of complaining about the quality of reviews when they come out (I’m often guilty of this type of complaint).

I decided to write a post describing how I approach paper-reviewing, in the hope that it can be useful to others (especially those who are new to reviewing) for writing high-quality reviews.

I’m mostly an RL researcher, so a lot of the tips below come from my experience reading RL papers. I think many of the ideas apply more generally, but I acknowledge some may be RL-specific.

Episode 4: Live Coding & Jazz

The code for this episode is available here.

I had a different idea for the fourth episode, but then I saw John McLaughlin’s tweet about International Jazz day, and decided to do something for that instead.

Obviously the musical section would be about jazz, but it wasn’t yet clear which aspect of it I’d cover. I spoke to a few people, and it seemed like a good idea to talk about improvisation and how jazz musicians do it; in particular, I’m hoping this helps people who don’t “get” jazz understand what we’re doing when we play it, and that we’re not just playing random notes! :)

Episode 3: Leitmotifs & Variables

The code for this episode is available here.

I had it in my head that the third episode would talk about variables in the Computer Science section. Originally I thought the musical topic would be chords, but that didn’t quite fit with variables. Then I thought about key signatures, since these are kind of like variables in the sense that you can shift any song into different pitches just by changing the key signature; but again, I wasn’t very content with the connection. While walking Lucy (my dog) one day it hit me that leitmotifs are actually quite similar to variables in the sense that you can reference them at any point, and they hold a particular “value” when called.
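A tiny sketch of the analogy (my own illustration, not from the episode): a leitmotif behaves like a variable, defined once and then referenced wherever it’s needed, carrying its “value” with it:

```python
# A leitmotif as a "variable": define it once, reference it anywhere.
hero_theme = ["G4", "C5", "E5"]  # the motif holds a particular "value"

# Referencing the same variable at different points in the "score":
opening = hero_theme + ["rest"]
finale = ["rest"] + hero_theme

print(opening, finale)
```

Every reference to `hero_theme` yields the same notes, just as every statement of a leitmotif recalls the same musical idea.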

Episode 2: Bits & Semitones

The code for this episode is available here.

The idea for doing something with bits seemed kind of natural to me as a second episode. After covering what “computation” is, why not cover what computers actually “see” when they run computations?

Given that bits are what makes up everything inside a computer’s software, I wanted a musical topic present in every type of music (at least in Western music). Initially I was thinking of doing scales, but as I was developing this idea it dawned on me that there is a very close relationship between the semitones and tones of music (or half-steps and whole-steps) and the zeros and ones of the binary system.

Metrics and continuity in reinforcement learning

In this work we investigate the notion of “state similarity” in Markov decision processes. This concept is central to generalization in RL with function approximation.

Our paper was published at AAAI'21.

Charline Le Lan, Marc G. Bellemare, and Pablo Samuel Castro

The text below was adapted from Charline’s twitter thread

In RL, we often deal with systems with large state spaces. We can’t exactly represent the value of each of these states and need some type of generalization. One way to do that is to look at structured representations in which similar states are assigned similar predictions.
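As a toy illustration of that idea (a deliberately simplistic sketch, not the metric studied in the paper), one can generalize a value estimate to an unseen state by borrowing the value of its nearest known state under some state metric; here I just use Euclidean distance on made-up 2-D states:

```python
# Toy illustration: generalize value estimates to unseen states via a
# state metric. Plain Euclidean distance stands in for the structured
# metrics the paper investigates; states and values are made up.
import math

known_values = {(0.0, 0.0): 1.0, (1.0, 0.0): 0.5, (0.0, 1.0): 0.2}

def predict(state):
    """Assign the unseen state the value of its closest known state."""
    nearest = min(known_values, key=lambda s: math.dist(s, state))
    return known_values[nearest]

print(predict((0.1, 0.05)))  # nearest known state is (0, 0), so -> 1.0
```

The quality of this kind of generalization clearly hinges on the metric: states that are close under it should genuinely have similar values, which is exactly the property the paper examines.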

Episode 1: Musical Notes & Computation

The code for this episode is available here.

I originally thought this channel would be a kind of educational channel, where people could learn about both music and computer science in a fun and informal way. I tweeted asking for suggestions for what to cover first on the CS side, and Kory Mathewson’s response was my favourite.

On the music side, it was kind of a train-of-thought process: when I considered the first thing you might learn in music theory, musical notes themselves were what came to mind.