I was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. I stayed in Montreal for the next 10 years: I finished my bachelor's, worked at a flight simulator company, and eventually obtained my master's and PhD at McGill, focusing on Reinforcement Learning under the supervision of Doina Precup and Prakash Panangaden. After my PhD I did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. I have worked at Google since 2012 and am currently a Staff Research Software Developer at Google Brain in Montreal, focusing on fundamental Reinforcement Learning research and on Machine Learning and Creativity, and regularly advocating for increasing LatinX representation in the research community. I am also an adjunct professor at Université de Montréal. Aside from my interest in coding/AI/math, I am an active musician and love running (6 marathons so far, including Boston!).
I’ve been in bands since I was 12. In a parallel universe I’m a full-time musician :D. This page collects the albums I’ve released so far, in reverse chronological order. Enjoy!

gregor samsa - amorfo (2023)
My good friend Esteban Nichols and I recorded this jazz fusion album during 2022. It was challenging because Esteban (and Matías) recorded in Quito, Ecuador, while I recorded in Ottawa! I’m quite proud that this album was made entirely by Ecuadorians.
I learned on the radio that last November 29th marked the 50th anniversary of the classic arcade game Pong. This game is particularly meaningful for those of us who do RL research, as it is one of the games in the Arcade Learning Environment, one of the most popular benchmarks. Pong is probably the easiest game of the whole suite, so we often use it as a sanity check to make sure our agents are learning.
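To give a rough idea of what that sanity check looks like, here is a minimal sketch that runs a random agent on Pong through the Arcade Learning Environment. It assumes the gymnasium and ale-py packages are installed (pip install "gymnasium[atari]" ale-py); those package choices are mine for illustration, not the only way to use the ALE:

```python
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers the ALE/* environments (needed on recent gymnasium versions)

env = gym.make("ALE/Pong-v5")
obs, info = env.reset(seed=0)
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy; a learning agent would pick the action here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
env.close()
print(f"Episode return: {total_reward}")  # a random agent hovers around -21; a learning agent should climb toward +21
```

If the episode return starts moving up from -21 as training progresses, the agent is learning; if it stays pinned at the bottom, something in the pipeline is likely broken.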
As part of RIIAA in Quito, I gave an introduction to Transformers, the architecture behind advances like GPT-3, Music Transformer, Parti, and many others.

Recording
You can watch the recording here:

Materials
Here you can access the different materials I mentioned during the course:
- The slides I used in the course
- Write with Transformers from Hugging Face (GPT-2)
- Eleuther's GPT-J-6B, which is a much better model than Hugging Face's GPT-2
- The simple colab on bigrams
- The Flax colab on LSTMs
- Jay Alammar's excellent The Illustrated Transformer, on which I based my description of Transformers
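To give a flavor of the core idea from the talk, here is a minimal sketch of scaled dot-product self-attention, the operation at the heart of the Transformer. This is my own illustrative NumPy version (the function and variable names are hypothetical), not code from the slides or colabs:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ v                              # each position gets a weighted mix of values

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x, *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
print(out.shape)  # (5, 4): one attended vector per input position
```

The key point the talk builds on is that every position attends to every other position in one step, which is what lets Transformers model long-range structure in text and music.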