A curious interview popped up in my feed. This is Daniel Dennett, a professor at Tufts University and the kinder, gentler horseman. Here he is talking about consciousness and its implications for artificial intelligence, hitting on many of the themes he has covered in his lectures in recent years: Gaudi and the termite colony, the soul being made of lots of tiny robots.
Later he goes on to give a very pragmatic, boots-on-the-ground insider's look at the future of artificial intelligence, raising issues most people wouldn't think of: as we climb the ladder of "intelligence" in our machines, what kind of licensing and insurance will you be required to have in order to operate one, and what kind of regulation will govern how it is advertised?
One of the things that turned me off of academic philosophy was the writing of Richard Feynman (at some point I will have to re-read his popular books to get the exact citation, because I now realize it has deeply informed the way I think about the world), where he basically said philosophers had no idea what they were talking about. They just spent their time writing commentaries on commentaries, running around chasing their tails and never producing anything.
So when I read Dennett, I don't feel like I'm reading philosophy; I feel like I'm reading science. I think when philosophy is done well, "philosopher" simply starts to mean "articulate, plainspoken scientist."