I’m betting you’ve heard about the next generation of artificial intelligence, the one that’s just around the corner. It’s going to be pervasive, all-competent, maybe super-intelligent. We’ll rely on it to drive cars, write novels, diagnose diseases, and make scientific breakthroughs. It will do all these things better, faster, and more safely than we bumbling humans ever could. The thing is, we’ve been promised this for years. If this next level of AI is coming, it seems to be taking its time. Might it be that AI is taking a while because it’s simply harder than we thought?
My guest today is Dr. Melanie Mitchell. She is the Davis Professor of Complexity at the Santa Fe Institute and the author of a number of books, including her latest, ‘Artificial Intelligence: A Guide for Thinking Humans.’
In this conversation we zoom in on Melanie’s widely discussed recent essay, ‘Why AI is harder than we think.’ We talk about the repeating cycle of hype and disenchantment in AI, and how it stretches back to the field’s earliest years. We walk through the four fallacies Melanie identifies that lead us to believe super-smart AI is closer than it actually is. We talk about self-driving cars, brittleness, adversarial perturbations, Moravec’s paradox, analogy, brains in vats, and embodied cognition, among other topics. And we discuss an all-important concept, one we can’t easily define but can all agree AI sorely lacks: common sense.
Across her scholarly publications and public-facing essays, Melanie has recently emerged as one of our most cogent and thoughtful guides to AI research. I’ve been following her work for a while now and was really stoked to get to chat with her. Her essay is insightful, lucid, and just plain fun—if you enjoy this conversation, I definitely suggest you check it out for yourselves.
Alright folks, on to my conversation with Dr. Melanie Mitchell. And for those in the US, happy Thanksgiving!
The paper we discuss is available here. A transcript of this episode will be available soon!