Now feels like a good time to lay out where we are with AI, and what might come next. I want to focus purely on the capabilities of AI models, and specifically the Large Language Models that power chatbots like ChatGPT and Gemini. These models keep getting “smarter” over time, and it seems worthwhile to consider why, as that will help us understand what comes next. Doing so requires diving into how models are trained. I am going to try to do this in a non-technical way, which means I will gloss over a lot of important nuances; I hope my more technical readers will forgive me.
Ethan Mollick