Projects

I am a huge proponent of project-based learning and am always working on something. I started working through projects of interest back in graduate school to solidify my knowledge and application of machine learning. These early projects were largely inspired by playing board games with friends. More recent projects, also inspired by my outside hobbies, have focused on applying computer vision models to judge powerlifting form.

2025

Perfect Form: To the Cloud

With a functioning local prototype of my powerlifting form analysis application, I was ready to migrate the application to the cloud. Could I transform my demo into a real hosted application leveraging cloud GPUs, with a UI capable of displaying all of the extracted metrics?

2023

Perfect Form: Computer Vision for Powerlifting

As an avid powerlifter looking for a problem through which to learn machine learning model deployment, I gravitated towards designing and implementing a microservice architecture that would leverage the latest computer vision models to analyze powerlifting form through inferred biomechanical metrics. How far could I push this idea to build a useful lifting tool for my training?

2021

Probing Bacterial Parasitism Using Multi-Agent Reinforcement Learning

Bacteria in their natural environment do not exist in isolation but alongside many other species. Additional species give rise to much richer dynamics, such as commensalism and parasitism, that cannot be observed with a single species. In this work, we used multi-agent reinforcement learning to find the optimal policies of these bacterial agents in different environments. In particular, we tried to quantify and understand the optimal level of antagonism for a given environment.

2020

Steep: Imitation Learning to Complete Ski Races

Steep is an open-world winter sports game that allows players to participate in downhill skiing races, among other activities. Because of the open-world design, I was interested in trying to create a neural network that could complete the different races. I trained a neural network using imitation learning, creating training data from my own gameplay and augmenting it to build a sizable training set. How close to human performance can a neural network trained with imitation learning get?

Steep Part 2: Imitation Learning to Complete Rocket Suit Races

After spending considerable effort developing a data collection and processing pipeline, the ski neural network was able to perform well on a simple course. Steep has a separate race mode using a rocket suit, which provides a set of guidelines that can be toggled on and off. However, the rocket suit races require more controls to navigate the course and are much less forgiving. Is the additional visual information sufficient to allow a network trained through imitation learning to navigate the course?

Lost Cities: Using Deep Q-Networks to Learn a Simple Card Game

Lost Cities is a simple two-player card game that is not unlike competitive solitaire. Players are forced to invest in playing cards while lacking crucial information about future cards that may be drawn. To determine optimal play, I developed and trained a deep Q-network that learned to maximize its score through self-play. How well can a simple neural network perform with so much information unknown?

Lost Cities Part 2: Learning from Deep Q-Networks

Now that we have developed a deep Q-network that achieved a mean score of 45 points in Lost Cities, what can we learn from it? We probe this network to understand the high-level strategy choices it makes, and we quantify the distributions of important metrics governing its strategy. To better evaluate its performance, we want to play against the network ourselves. Can we train an object detection architecture so that we can play against the deep Q-network using a game with physical cards?