Robotic Perception

I’m a first-year PhD student working on vision-based robotic control at Oregon State University. I am part of the Dynamic Robotics and Artificial Intelligence Laboratory, where I use deep-learning-based methods to give bipedal robots, such as Digit, the ability to perceive and interact with the world. My research goal is to extend visual loco-manipulation tasks to be language-conditioned through the use of multimodal foundation models.

I am advised by Alan Fern and frequently collaborate with Stefan Lee. I completed my undergraduate degree at the University of Maryland, where I worked with Rama Chellappa on face verification and wrote my honors thesis on reinforcement learning under James Reggia.

LLMs and Game AI

Outside my formal research, I am deeply interested in large language models and in using games as benchmarks for AI. Previously, I implemented PPO and DDQN from scratch to play multiple games in the Arcade Learning Environment. I have also fine-tuned an LLM to play Pokemon Showdown and compared it against the in-context learning method presented in PokeLLMon. Resources for this project were limited, so we used a 7B-parameter model to teach a 1B-parameter model to play the game; the distilled 1B model ultimately outperformed its teacher by 4% in win rate. A sketch of the distillation setup follows, and more details can be found here.
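
For the curious, here is a minimal sketch of what such a teacher-to-student distillation loop can look like: the larger teacher labels each battle state with an action, and the smaller student is fine-tuned to reproduce it. The model names, the battle_prompts list, and all hyperparameters below are illustrative placeholders, not the project's actual configuration.

```python
# A hedged sketch of hard-label distillation for an LLM game agent,
# assuming battle states are already serialized as text prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

teacher_name = "teacher-7b"  # placeholder for the 7B teacher checkpoint
student_name = "student-1b"  # placeholder for the 1B student checkpoint

teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name).to(device).eval()

student_tok = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForCausalLM.from_pretrained(student_name).to(device).train()

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# Placeholder states; in practice these would be serialized Showdown battles.
battle_prompts = ["<battle state 1>", "<battle state 2>"]

for prompt in battle_prompts:
    # 1) Teacher labels the state with an action (hard-label distillation).
    with torch.no_grad():
        inputs = teacher_tok(prompt, return_tensors="pt").to(device)
        out = teacher.generate(**inputs, max_new_tokens=16)
        action = teacher_tok.decode(
            out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )

    # 2) Student is fine-tuned to reproduce the teacher's action; the loss
    #    is masked so only the action tokens are supervised.
    full = student_tok(prompt + action, return_tensors="pt").to(device)
    labels = full["input_ids"].clone()
    prompt_len = student_tok(prompt, return_tensors="pt")["input_ids"].shape[1]
    labels[:, :prompt_len] = -100  # ignore prompt tokens in the loss

    loss = student(**full, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```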