    Creativity
    Agence: a dynamic film exploring multi-agent systems and human agency

    Agence is a dynamic and interactive film authored by three parties: 1) the director, who establishes the narrative structure and environment; 2) intelligent agents, driven either by reinforcement learning or by scripted AI (hierarchical state machines); and 3) the viewer, who can interact with the system to affect the simulation. We trained RL agents in a multi-agent fashion to control some or all of the film's agents, depending on the viewer's choice. You can download the game at the Agence website.
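
    To make the learned/scripted split concrete, here is a minimal sketch (all names are hypothetical, not the game's actual code) of how one simulation tick can mix the two controller types behind a single interface, with the viewer choosing which agents the RL policies drive:

        # Hypothetical sketch of mixed RL / scripted agent control;
        # none of these classes come from Agence itself.

        class ScriptedAgent:
            """A tiny hierarchical state machine: high-level states
            that each map to a concrete low-level behavior."""
            def __init__(self):
                self.state = "idle"

            def act(self, observation):
                # High-level state transitions...
                if self.state == "idle" and observation["food_visible"]:
                    self.state = "seek_food"
                elif self.state == "seek_food" and observation["at_food"]:
                    self.state = "idle"
                # ...and each state selects a concrete action.
                return {"idle": "wander", "seek_food": "move_to_food"}[self.state]

        class RLAgent:
            """Stand-in for a policy trained with multi-agent RL."""
            def __init__(self, policy):
                self.policy = policy

            def act(self, observation):
                return self.policy(observation)

        def build_agents(num_agents, rl_controlled_ids, policy):
            """The viewer picks which agents the learned policy controls."""
            return [RLAgent(policy) if i in rl_controlled_ids else ScriptedAgent()
                    for i in range(num_agents)]

        def step_simulation(agents, observations):
            """One tick: every agent, learned or scripted, picks an action."""
            return [agent.act(obs) for agent, obs in zip(agents, observations)]

    Both controller types expose the same act interface, which is what lets a setup like this swap them per agent at the viewer's request.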

    December 1, 2020
    GANterpretations

    GANterpretations is an idea I published in this paper, which was accepted to the 4th Workshop on Machine Learning for Creativity and Design at NeurIPS 2020. The code is available here. At a high level, it uses the spectrogram of a piece of audio (from a video, for example) to “draw” a path in the latent space of a BigGAN. The following video walks through the process:
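
    As a minimal sketch of the general idea (assuming librosa for audio processing; the energy-scaled random walk below is just one plausible way to turn a spectrogram into latent motion, not the released implementation):

        # Sketch: pace a walk through BigGAN's 128-d latent space by
        # per-frame spectrogram energy, so louder moments move farther.
        import numpy as np
        import librosa

        def latent_path_from_audio(audio_path, latent_dim=128,
                                   step_scale=0.5, seed=0):
            """Return one latent vector per spectrogram frame."""
            y, sr = librosa.load(audio_path, sr=22050)
            # Magnitude spectrogram: one column per ~23 ms frame.
            S = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))
            energy = S.sum(axis=0)
            energy = energy / (energy.max() + 1e-8)  # normalize to [0, 1]

            rng = np.random.default_rng(seed)
            z = rng.standard_normal(latent_dim)
            path = [z.copy()]
            for e in energy:
                # Step in a random direction, scaled by loudness.
                direction = rng.standard_normal(latent_dim)
                direction /= np.linalg.norm(direction)
                z = z + step_scale * float(e) * direction
                path.append(z.copy())
            return np.stack(path)  # feed each row to BigGAN's generator

    Rendering each latent vector with the generator and stitching the frames at the spectrogram's frame rate yields video whose motion follows the audio.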

    November 8, 2020
    ML-Jam: Performing Structured Improvisations with Pre-trained Models

    This paper, published at the International Conference on Computational Creativity, 2019, explores using pre-trained musical generative models for collaborative improvisation. You can read more details in this blog post, play with it in this web app, and find the code here. A minimal sketch of the jam loop follows the demo list below.

    Demos
    • Demo video playing with the web app:
    • Demo video jamming over Herbie Hancock’s Chameleon:
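
    The call-and-response loop behind a jam session like this is easy to sketch. Below is a hypothetical version: the helper names (listen_one_bar, play_notes, generate_response) are stand-ins, not the actual ML-Jam API, and generate_response represents whatever pre-trained model supplies the responses.

        # Hypothetical call-and-response jam loop; none of these
        # helpers come from ML-Jam itself.

        def quantize(notes, steps_per_bar=16):
            """Snap note onsets (in fractions of a bar) to a grid."""
            return [(round(t * steps_per_bar) / steps_per_bar, pitch)
                    for t, pitch in notes]

        def jam(listen_one_bar, play_notes, generate_response, num_bars=8):
            """Alternate bars between the human and the model."""
            history = []
            for _ in range(num_bars):
                human_bar = quantize(listen_one_bar())  # capture the human's bar
                history.append(human_bar)
                # Condition the pre-trained model on everything played so far.
                model_bar = generate_response(history)
                play_notes(model_bar)  # the model answers in the next bar
                history.append(model_bar)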

    June 19, 2019