Everyone is talking about GPT-3, so we talked about GPT-3. This seems to be the paper that started it all: "Language Models are Few-Shot Learners."
The paper is long at 75 pages, but with 31 authors that works out to ~2.42 pages per author, which should make it manageable. We will probably only have time to discuss Sections 2 and 3 in detail.