In 2019, Ilya Sutskever (former Chief Scientist at OpenAI and one of the leading lights of the Google Brain era of AI research) sent John Carmack (computer scientist extraordinaire and co-creator of Doom) a list of ~40 papers to read to learn about AI.
That list was supposedly lost to the sands of time (Facebook's email servers apparently wiped it, oops). But someone else apparently had access to the list and saved a partial version of it!
I have a bit more free time on my hands these days, and I'm already reading a bunch of ML papers, so what's ~30 more to add to the pile?
This document will serve as a central hub for my notes on each paper. Feel free to follow along — the full paper set is here. The Table of Contents below will update as the relevant posts go live!
(Note that paper reviews may be published out of order)
Paper 1: Complextropy
Paper 2: RNNs
Paper 3: LSTMs
Paper 4: RNN Regularization
Paper 5: Transformers
Paper 6: NN Regularization
Papers 7-8: Pointer Networks, Conv Nets
Paper 9: Set seq2seq
Paper 10: Model Parallelism
Paper 11: ResNets
Paper 12: Dilated Convolutions
Paper 13: MPNNs
[Paper 14 is the same as paper 5]
Paper 15: Attention
Paper 16: Identity Mapping
Papers 17, 18, 19: Relational Networks, VLAEs,
[more to come]