Boston University, Department of Computer Science, Boston, MA
This summer, I interned remotely at the Learning, Intelligence + Signal Processing (LISP) Lab. LISP is part of the Computer Science Department at Boston University and conducts research in machine learning, intelligent decision-making systems, and signal processing. LISP is led by Professor Sang Peter Chin, whose research focuses on differential geometric methods, topological methods, and game theory, as well as on developing sparse deep networks. I worked mainly with a post-doc in the lab, Laura Greige, whose research is on strategic decision making, game theory, and reinforcement learning.
Specifically, I helped Laura extend her research on reinforcement learning in FlipIt, a game designed to model various scenarios in the world of cybersecurity. In the original game of FlipIt, two players, an attacker and a defender, compete to gain ownership of a single resource. However, an agent learns who owns the resource and when its opponent last moved only at the moment it makes a move itself. FlipIt is therefore a game in which only a partial state is ever known, as opposed to a game such as chess, where the full environment is visible to the agent at all times. Laura has been studying whether reinforcement learning can succeed in a game like this, where the information an agent receives is incomplete. She has also studied how reinforcement learning performs in extensions of FlipIt, for example, when more than two agents compete for the same resource. The specific model used in her research is a deep neural network combined with Q-learning (a DQN agent), trained to maximize the amount of time the defender owns the resource. This summer, I worked on extending FlipIt further to a version called team-based FlipIt, in which two DQN agents work together against an opponent to gain ownership of the resource. This project presented some challenges: choosing how to reward each individual agent (we decided to use Shapley values), and training agents on the same team so that they do not steal the resource from each other while trying to optimize the team's score.
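To illustrate the reward-allocation idea, here is a minimal sketch of an exact Shapley-value computation. This is not the lab's actual code; the coalition scores below are hypothetical illustrative numbers, where the "value" of a coalition stands in for the ownership time that subset of defenders could secure on its own.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings in which players join."""
    contrib = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            after = value(frozenset(coalition))
            contrib[p] += after - before
    n_orders = factorial(len(players))
    return {p: c / n_orders for p, c in contrib.items()}

# Hypothetical coalition scores for two defenders "A" and "B"
# (e.g., fraction-of-time-owned out of 100); illustrative only.
team_score = {
    frozenset(): 0.0,
    frozenset({"A"}): 40.0,
    frozenset({"B"}): 30.0,
    frozenset({"A", "B"}): 90.0,
}

rewards = shapley_values(["A", "B"], lambda s: team_score[s])
# The two individual rewards always sum to the full team's score,
# which is what makes this a natural way to split a team reward.
```

With two agents the computation is tiny, but the same function extends to larger teams (at factorial cost), and the efficiency property, that individual rewards sum to the team score, is what made Shapley values attractive for crediting each agent.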
Overall, I had a great experience this summer. I developed my programming skills and knowledge of Python, and I learned a great deal about machine learning, and reinforcement learning in particular. I also started my internship with very little knowledge of game theory and enjoyed learning about it; I would love to pursue future courses that involve game theory. Additionally, I gained insight into what it means to work in computer science research, which I can take into account when deciding my future career path or studies. I would like to thank the Wong Family for their generous support. I am so grateful to have had this opportunity.