The application of machine learning and reinforcement learning (RL) has changed and expanded the frontier of many disciplines, and the field of robotics and controls is no exception. Where tools like PID controllers and state feedback were once the standard approach to problems like the cart-pole or inverted pendulum, reinforcement learning, particularly deep Q-learning (DQN), has been used to solve many of these problems effectively. One class of system to which reinforcement learning has seen little application is slow-fast dynamic systems. This project focused on implementing, observing, and evaluating DQN approaches to controlling slow-fast dynamic systems, specifically seeking to identify which parameters allow the controller to control the system most effectively and to outperform classical techniques. The results suggest that as the fast and slow dynamics decouple, DQN controllers increasingly outperform dynamics-based controllers. Further, the DQN architecture can be optimized to learn control laws more efficiently by emphasizing the number of nodes per layer rather than the number of layers in the network.
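The DQN approach referenced above rests on two core ingredients: an epsilon-greedy policy over the network's Q-values, and a Bellman (temporal-difference) target used to regress those Q-values. The following is a minimal sketch of those two pieces, assuming a discrete action space as in the cart-pole problem; the function names and the choice of NumPy are illustrative, not taken from the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon):
    """Pick a uniformly random action with probability epsilon,
    otherwise the greedy (argmax-Q) action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def td_target(reward, q_next, gamma=0.99, done=False):
    """Bellman backup r + gamma * max_a' Q(s', a') used as the
    regression target for the Q-network; terminal states bootstrap
    from the reward alone."""
    if done:
        return float(reward)
    return float(reward) + gamma * float(np.max(q_next))
```

In a training loop, `epsilon_greedy` would act on the current state's Q-values while `td_target` builds the label for the sampled transition; the network width (nodes per layer) highlighted in the results would govern the model producing `q_values` and `q_next`.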