Why Computers Won’t Make Themselves Smarter

March 30, 2021

(The New Yorker) – The idea of an intelligence explosion was revived in 1993 by the author and computer scientist Vernor Vinge, who called it “the singularity,” and the idea has since achieved some popularity among technologists and philosophers. Books such as Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies,” Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence,” and Stuart Russell’s “Human Compatible: Artificial Intelligence and the Problem of Control” all describe scenarios of “recursive self-improvement,” in which an artificial-intelligence program repeatedly designs an improved version of itself. I believe that Good’s and Anselm’s arguments have something in common, which is that, in both cases, a lot of the work is being done by the initial definitions. These definitions seem superficially reasonable, which is why they are generally accepted at face value, but they deserve closer examination.