Chess, Computers, and the Human Element
August 18, 2007
Ten years after the highly publicized chess match between Garry Kasparov and the Deep Blue supercomputer, Daniel Dennett of Tufts University reflects on the significance and legacy of Deep Blue’s win in “Higher Games,” a recommended read in MIT’s latest Technology Review.
Dennett recalls that at the time, many wondered if Deep Blue’s accomplishment (or rather, the accomplishment of its designers) signaled the end of humans as the dominant thinkers on this planet. Yet ten years after Deep Blue–and despite Moore’s Law holding steady all this time–computers are still the ones being programmed by humans, and not the other way around. What seems obvious to me is that computers rely entirely on us to give them the wherewithal to accomplish any task, however large or small. This is one of the many shortcomings of artificial intelligences, a few of which Dennett acknowledges:
Computers–at least currently existing computers–can’t be bored or embarrassed, or anxious about losing the respect of the other players, and these are aspects of life that human competitors always have to contend with, and sometimes even exploit, in their games. . . . The imperviousness of computers to this sort of gamesmanship means that if you beat them at all, you have to beat them fair and square–and isn’t that just what Kasparov and Kramnik were unable to do?
What I found interesting about this “fair and square” comment is that it treats chess as a series of statistically weighted moves and counter-moves on a board, which is not what most people find interesting about the game. “Gamesmanship,” as Dennett calls it, is crucial to chess matches or any other competitive activity, at least when humans are involved. In fact, playing a nonhuman opponent forced Kasparov to act like a computer, merely choosing among strategies that fell within the fair rules of play. Make a computer that can play chess as a human plays, though, and you will have accomplished something that no one has ever done: turning Pinocchio into a real boy.
In light of these considerations, I take issue with Dennett’s assertion in this article that humans are “protein machines” in the same way that present-day computers are silicon machines. According to him, we should embrace this idea simply because no other metaphor explains the process of human thinking quite as well. The gap here in Dennett’s logic is a mile wide: a good metaphor may help us understand reality, but we would be remiss to confuse a puppet with a person. We do not merely run through algorithms when making decisions; we emote, intuit, and even pray. We have a first-person experience of the world that cannot be formalized into finite mathematics and logic gates without losing the most important element of all, namely, subjectivity itself.
I think that the story of Kasparov vs. Deep Blue can serve as a mirror for our own assumptions: If we believe humans are only biological mechanisms, then Deep Blue bested the best chess-playing machine in the human race, fair and square. But if we expected a supercomputer to compete as a fellow human chess master would–with all of the gamesmanship that goes beyond the rules–then we will be disappointed in Deep Blue and its programmers. Those like Dennett who believe computers can surpass us in chess (or any other human pursuit) miss why we play in the first place.