– Tim Rocktäschel, Jakob Foerster and Greg Farquhar, 29/08/2018
Every year we are contacted by students who wish to work on short-term machine learning research projects with us. By now, we have supervised a good number of them, and we noticed that the advice we gave followed a few recurring principles. In this post, we share what we believe is good advice for a master’s thesis project or a summer research internship in machine learning. This post is by no means comprehensive; instead, it emphasizes the pitfalls that we have seen over and over again. For instance, we will not talk about how to pick a good project or how to approach a machine learning research project in general. Some of our advice applies to machine learning research in general, and to deep learning and/or reinforcement learning projects in particular. However, some of it only matters under the time constraints of a three-month project and is considerably less important when you have just started the journey of a three- to five-year Ph.D. degree.
Machine learning models, and deep and reinforcement learning models in particular, are notoriously hard to debug. To give you a sense of the myriad ways of making mistakes, have a look at Andrej Karpathy’s Twitter thread. All of us, even the more senior researchers, make such mistakes all the time. What makes them so hard to detect is that even buggy models often still learn and produce meaningful outputs. Bugs might introduce only subtle changes to your model, and most of them show up only at runtime. With this in mind, the worst thing you can do is to assume your code does not contain any mistakes. What often distinguishes a productive from an unproductive machine learning researcher is their attitude towards their own code. If your default assumption is that there is likely something wrong with your code, you will search for bugs more carefully. Step through your code line by line, carefully inspecting intermediate outputs. Visualize them if possible. Do tensors have the right shapes? Have they been properly initialized, cloned or detached? Monitor gradients during training and look out for NaNs. It can help to write unit tests and to make your experiments reproducible by setting the seeds of your random number generators. For more tips on neural network debugging, have a look at Section 11.5 of Goodfellow et al.’s Deep Learning book.
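As a minimal sketch of what such checks can look like in PyTorch, the snippet below seeds the random number generators, asserts an intermediate shape, and scans gradients for NaNs; the model and shapes are placeholders for your own:

```python
import torch

# Make runs reproducible by seeding the random number generator.
torch.manual_seed(42)

model = torch.nn.Linear(10, 2)  # placeholder for your actual model
x = torch.randn(32, 10)

# Check intermediate shapes explicitly rather than assuming they are right.
out = model(x)
assert out.shape == (32, 2), f"unexpected output shape: {out.shape}"

# Monitor gradients and look out for NaNs after the backward pass.
loss = out.pow(2).mean()
loss.backward()
for name, param in model.named_parameters():
    if param.grad is not None and torch.isnan(param.grad).any():
        print(f"NaN gradient in {name}")
```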
While one aim of your project might be to achieve improvements on some evaluation metric, you should, more importantly, develop a good understanding of how and why your model works. Especially early in a project, final evaluation metrics contain little information that is useful for iterating on and developing your algorithm or model. Instead, ask deeper questions and develop informative diagnostics. If you have introduced a gating or attention mechanism, does your model in fact make use of it? Which of the model innovations that you propose actually contribute to the overall performance gain? Did you carry out an ablation study? How many training examples/epochs did it take your model to achieve reasonable performance, and does that differ from the baseline you are using? Are there systematic differences between the test instances on which your model does well and those on which it does terribly? How robust are your results with respect to changes in hyper-parameters? Can important features be predicted from the model’s hidden state? Keep in mind that your research and your project report are not really about informing the research community of some (marginal) improvement over the previous state of the art, but about contributing to our understanding of the subject. Others in the field will want to know what works and what does not, and which of your findings could be applied to their problems.
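To make the attention question concrete, one simple diagnostic (a sketch over made-up tensors, not a prescribed recipe) is to compare the entropy of the attention weights with that of a uniform distribution; weights that stay near uniform suggest the mechanism is not really being used:

```python
import torch

# Hypothetical attention weights from a forward pass: (batch, queries, keys),
# each row summing to one. In practice you would log these from your model.
attn = torch.softmax(torch.randn(32, 8, 8), dim=-1)

# Entropy close to the uniform maximum suggests the model spreads its
# attention evenly, i.e. it may not be exploiting the mechanism at all.
entropy = -(attn * attn.clamp_min(1e-9).log()).sum(dim=-1).mean()
max_entropy = torch.log(torch.tensor(attn.shape[-1], dtype=torch.float))
print(f"mean attention entropy: {entropy.item():.3f} "
      f"(uniform: {max_entropy.item():.3f})")
```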
With current deep learning libraries it is easy to make a model more complex by adding more components, layers and optimization tricks. However, when you make a change to the code or the model, you should have at least an intuition for why it should help. Likewise, when you run an experiment, you should have a clear expectation of its outcome: what do you expect the plotted results to look like, and what will they tell you? This is even more important when your model is not doing what it is supposed to do. In that case, you are more likely seeing the symptoms of a bug, and extending your model will not help you find it; it might even make the problem harder to isolate. Before making your model more complex, get to the bottom of what might be wrong with it. Moreover, keep in mind that in your report you will have to justify what you did. An assessor of your report is interested in understanding your thought process. If you cannot formulate a research hypothesis and explain to yourself why what you are doing should work, then chances are good that no one else can either.
We have often seen highly motivated students jump straight into hard problems and try complex solutions right away. When something then goes wrong, it is hard to analyze why. Instead, ask yourself: what is the minimal thing that should work? Can the model learn to memorize a small data set? What does it learn when using only a few parameters? Does the code work when training on a single training instance instead of a batch? What is the simplest form of generalization that we expect to see? What is a simple baseline that we expect to fail? What is a minimal extension of the baseline that should make it work?
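As an illustration of the memorization check (a hedged sketch with a made-up model and random data): if a small model cannot drive the loss on one fixed batch to near zero, something in the pipeline is likely broken:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))  # one tiny, fixed batch
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Train repeatedly on the same batch; the model should be able to memorize it.
for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")  # should be close to zero
```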
Experiments can take a long time. Deep learning and, in particular, reinforcement learning can be extremely time-consuming when amassing a statistically significant number of random seeds. It is therefore critical not to fall into a slow iteration cycle too early in a short-term project. Debug your models using simple environments and implement a proof of concept of your idea that can be run on your personal computer. Sometimes a simple matrix game or a grid-world experiment can provide useful validation of an idea. Sometimes you can also use the exact value functions of small MDPs to test algorithmic ideas without having to mess around with gradient estimation, actor-critic training and so on. When moving to larger-scale experiments, streamline your process for launching experiments and checking their results. Check those results before experiments have run their full course to see whether performance is flatlining. Investing in this infrastructure can be time-consuming in the beginning, but it will pay off towards the end of your project. When analyzing results, be hungry for useful information.
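For example, the exact optimal values of a small MDP can be computed with a few lines of value iteration and used as ground truth for an approximate algorithm; the MDP below is randomly generated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.standard_normal((n_states, n_actions))                    # R[s, a]

# Value iteration: V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s, a, s') V(s') ]
V = np.zeros(n_states)
for _ in range(10_000):
    V_new = (R + gamma * P @ V).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("exact optimal state values:", V)
```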
We usually hand out projects months before the official start date. One reason for this is that three months is a really short time for i) learning about the background and related work, ii) carrying out the implementation and experiments, and iii) writing a good report. Another reason is that we generally propose research projects that, if successful, could lead to a publication at a machine learning venue. While we know that students have a lot going on during term, we encourage you to at least start reading the relevant literature ahead of time. Ideally, by the time you start working full-time on the project, you should know what to do, how it relates to existing approaches, and have some idea of how to do it. This might also be a good time to get familiar with your machine learning framework of choice (we recommend PyTorch!).
You really should use version control for your research code and project report. There is nothing worse than losing all your hard work days before the deadline. If you do not have one already, open a GitHub account. As a student you get free private repositories. If you do not know about version control, learn it now and thank yourself later.
In academia, it is unlikely that you will have access to more than a handful of GPUs during your project. Particularly in deep reinforcement learning, however, it is important not to draw premature conclusions from a single experiment or only a few. Ideally, you want to repeat experiments multiple times and, as mentioned above, get a sense of their robustness to different starting conditions and hyper-parameters.
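A lightweight pattern for this (a sketch in which `run_experiment` is a hypothetical stand-in for your full training run) is to loop over seeds and report a mean and standard deviation rather than a single number:

```python
import numpy as np

def run_experiment(seed: int) -> float:
    """Hypothetical stand-in for a full training run; returns a final score."""
    rng = np.random.default_rng(seed)
    return float(rng.normal(loc=0.8, scale=0.05))

# Repeat over several seeds before drawing any conclusions.
scores = np.array([run_experiment(seed) for seed in range(10)])
print(f"score: {scores.mean():.3f} +/- {scores.std(ddof=1):.3f} "
      f"over {len(scores)} seeds")
```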
If you are doing a master’s project, your work will be assessed based on your written report, not on the outstanding work that you did but did not have enough time to write about clearly. Start writing early and do not underestimate the effort of disseminating your research. State your aims, hypotheses and contributions clearly, and allow the reader to follow your thought process. Explain your design choices and discuss your findings clearly. Ideally, you should write your report continuously over the course of the project. That way, you force yourself to think your next steps through, and you are less likely to forget important information when the deadline gets close.
Your supervisors are busy people, but they are there to help you. Instead of running into problems and then being stuck until the next scheduled meeting, reach out to your supervisors when you need them. Be proactive about arranging meetings, and prepare the results, code, or write-up that you want to discuss in advance. Make good use of your supervisors! Lastly, do not panic! We have all been through this, and we know that it can be a daunting experience, particularly if your job prospects or the success of your Ph.D. applications depend on this research project. We really want you to succeed.