OMSCS Retrospective
At the end of 2021, I finished earning my master’s degree in computer science through Georgia Tech’s OMSCS program. This post is a look back on that experience. Previously, I wrote about my motivation for enrolling in OMSCS.
In terms of time, it took me 4.5 years to complete the program. I was working full time during this period, so I only took one course per semester (except for Spring 2020, when I doubled up). I also didn’t take any classes during the summer semesters. Fitting school around my work schedule was doable. My normal routine was school work three weeknights and one weekend day. I probably averaged ~10 hours per week on coursework and studying, though the workload varied depending on the class. I was able to earn a 4.0, and I never felt like I was doing an unreasonable amount of work. The course workloads aggregated on OMSCentral seem relatively accurate, but personally I think I spent less time than what’s listed there.
The program did sometimes put a strain on my social life, but from spring 2020 onward we were under COVID-19 restrictions anyway. It was often a drag to finish an entire day of work, only to then have to study for a test or implement a programming assignment. I know people complete their degree while caring for dependents, and it’s hard to imagine how they make it work. Now that I’m done, I’m glad to have some more free time back in my life.
In direct financial terms, the entire degree cost almost $8k, some of which was covered by my employer. Whether the degree has paid (or will pay) for itself is unclear. In my next job search, I don’t intend to target only jobs that require an M.S. I believe I’m a stronger engineer for having completed the program, but any future success in my career probably won’t be directly attributable to it. My OMSCS specialization was Machine Learning, but I don’t intend to pivot my career to an ML focus. “Artificial Intelligence” has an aura of extreme hype, and I think my ML specialization has helped me regard AI with a more informed and critical perspective.
I think the experience served the goals I’d set for it. Namely, it gave me a structured way to learn more about sub-fields of CS that I wasn’t normally exposed to in my daily work as a web software engineer. Probably most importantly, it strengthened my learning ability, which is of course hugely applicable. Fighting impostor syndrome is an ongoing battle, but I think being able to understand the course material also helped in that regard.
My advice to OMSCSers…
- Create a schedule and stick to it. I didn’t start doing this until about halfway through – until then I didn’t have a clear boundary between personal time and “school” time, so I was under a constant, low-level stress that I should be doing school work.
- Watch all the lecture material. If you don’t understand something, rewatch it.
- Do all the homework and follow the schedule prescribed by each course.
- Attend office hours or watch recordings afterward. There is a lot of elucidating conversation there, and TAs will often go into depth on issues that are immediately relevant to exams and homework.
- Participate in Piazza and unofficial channels like Slack. Interacting with other students helps solidify understanding, and it’s one of the benefits of a program like OMSCS over self-study.
- As in all of life, don’t be afraid to ask questions.
The rest of this post is a list of the courses I took and some brief notes about each one…
Fall 2017: Intro to High-Performance Computing
I took this course first because it was supposedly really challenging and really good. I wanted to see what I was getting myself into. It was indeed pretty hard! The programming assignments were in C and C++, with which I was rusty. I’d also been out of college for five years. The course material was about distributing massive workloads on supercomputers using tools like MPI. The recorded lectures were engaging and entertaining and the lab assignments were nontrivial. In retrospect, it was a rewarding and fun class despite being challenging. I’m glad I took it first. I probably spent the most time on this course.
Spring 2018: Machine Learning for Trading
This class was a broad introduction to statistical methods like regression, Q-Learning, and KNN, and to financial concepts like market mechanics, valuing companies, and technical analysis. The course was much easier than HPC. The lectures were entertaining. It turned out to be a good primer for other concepts that are covered throughout the Machine Learning specialization, and I’m glad I took it before ML. And practically, it was a nice introduction to working with standard Python tools like NumPy, Pandas, etc. This course was also where I first learned about options.
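To give a flavor of the kind of Pandas work the course introduces, here is a minimal sketch of computing rolling statistics over a price series. This is illustrative only: the prices are made up, and the exercise is not taken from the actual course assignments.

```python
import pandas as pd

# Synthetic daily prices, purely for illustration.
dates = pd.date_range("2018-01-01", periods=10)
prices = pd.Series([100, 101, 103, 102, 105, 107, 106, 108, 110, 109],
                   index=dates, name="price")

# Two staples of this kind of analysis:
rolling_mean = prices.rolling(window=3).mean()  # simple moving average
daily_returns = prices.pct_change()             # day-over-day returns

print(rolling_mean.iloc[-1])  # mean of the last three prices: 109.0
```

Rolling means and daily returns like these are the building blocks for the technical-analysis indicators the course covers.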
Fall 2018: Computer Vision
This class covered a lot of material, including: linear image processing, Hough transforms, feature detection, optical flow, camera calibration, and tracking. I was surprised how powerful classical computer vision algorithms could be. CV was my first introduction to the Kalman filter, which would crop up in other courses as well. My final project for the course was on augmented reality: projecting an object into a 3D video. It was cool to see how this technology works under the hood. CV was one of my favorite classes.
Spring 2019: Machine Learning
ML is another class with high-quality lecture production. It’s a great overview of supervised learning, unsupervised learning, and reinforcement learning. The assignments are writing-heavy. I really liked that aspect, because writing about the output and behavior of various ML algorithms helped me develop intuition for how these tools work. I had previously taken Andrew Ng’s Machine Learning course, so I felt well prepared for this class.
Fall 2019: Artificial Intelligence for Robotics
The lectures for this course are taught by Sebastian Thrun, the founder of Google’s self-driving car team. Dr. Thrun held office hours for the course as well, which was cool. The material covers basic robotics algorithms, with a focus on robotic vehicles: Kalman and particle filters, search algorithms like A*, PID controllers, and SLAM. The assignments for this course were fun because you get to drive a little robotic actor through scenes. It was after this class that I started getting anxious to wrap up the program.
Spring 2020: Simulation and Bayesian Statistics
Spring 2020 was the only term where I took two courses simultaneously. I imagined they would have some overlapping ideas, since they were both stats classes, and neither seemed especially hard. I managed to get an A in both courses, but it was definitely a lot of effort to coordinate the workloads and fit them into my schedule.
Simulation and Modeling for Engineering and Science was an interesting course. It was all about simulation systems: hand simulations, Monte Carlo methods, the Arena simulation language, random variate generation, and input and output analysis. The lectures were entertaining, and there was a lot of material. This course was a little harder than I expected, probably because I didn’t have a strong stats background.
Bayesian Statistics had some overlapping ideas about probability distributions and Monte Carlo methods. It was a deep dive into Bayes’ theorem and Bayesian analysis. The material covered Bayes’ formula, Bayesian networks, OpenBUGS, Bayesian inference, Bayesian computation, MCMC methodology, and more. This course helped me think in Bayesian terms, which is sometimes counterintuitive.
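The counterintuitive side of Bayesian reasoning shows up in the classic base-rate example, sketched below. The numbers are hypothetical and not from the course; they just illustrate Bayes’ formula.

```python
# Hypothetical numbers: a disease affects 1% of a population, and a
# test is 95% sensitive with a 5% false-positive rate.
p_disease = 0.01
p_pos_given_disease = 0.95   # sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

# Total probability of a positive test, then Bayes' formula.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # prints 0.161
```

Despite the 95% accurate test, a positive result implies only about a 16% chance of disease, because healthy people vastly outnumber sick ones. This is the sort of result that feels wrong until you internalize the base rate.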
Fall 2020: Data and Visual Analytics
DVA is a broad introduction to data visualization. This was the first course I took where there was a group project. The course touched on data collection, data cleaning, SQLite, data integration, data analytics, Hadoop, Spark, D3, classification, and ensemble methods. I wasn’t crazy about this course. There didn’t seem to be much cohesion between the various ideas, and the coverage of each topic was superficial. I did learn how to use D3, though, which was useful.
Spring 2021: Deep Learning
This course was quite interesting. Most of the hype-generating news in the Machine Learning world is related to Deep Learning, so it was fascinating to learn how these powerful models actually work. The course covered neural networks and gradient descent, optimization of deep networks, convolutional neural nets, pooling layers, PyTorch, bias and fairness, language models, embeddings, transformers, attention, and generative models. This was another one of my favorite classes.
Fall 2021: Graduate Algorithms
GA has earned a reputation for being difficult, and it is a core requirement. As many students do, I took it as my final course. It was challenging, but I put in plenty of effort and had no issues. It covers dynamic programming, graph algorithms, and NP-completeness, all in depth. The material is well organized. The grading is based heavily on three exams spaced throughout the course, but the homeworks do a good job of preparing you for them. It felt rewarding to complete this class, and I was glad to brush up on concepts I hadn’t studied since undergrad.