This is from The Atlantic – kindly sent my way by Richard Symonds.
In my view it is because, as an unknown sage once remarked, “it ain’t what we don’t know that causes trouble, it’s what we know that just ain’t so.” I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors.
Another of probably several Turing-related posts in the run-up to the summer commemorations of Turing’s birth and death (23 June 1912 – 7 June 1954) – the following from Science Magazine, 13 April 2012: Vol. 336, no. 6078. I paste in a couple of paragraphs from each paper as a preview. Here is a Wired article based upon the articles below.
1. Beyond Turing’s Machines - Andrew Hodges
In marking Alan Turing’s centenary, it’s worth asking what was his most fundamental achievement and what he left for future science to take up when he took his own life in 1954. His success in World War II, as the chief scientific figure in the British cryptographic effort, with hands-on responsibility for the Atlantic naval conflict, had a great and immediate impact. But in its ever-growing influence since that time, the principle of the universal machine, which Turing published in 1937 (1), beats even this.
When, in 1945, he used his wartime technological knowledge to design a first digital computer, it was to make a practical version of that universal machine (2). All computing has followed his lead. Defining a universal machine rests on one idea, essential to Turing’s mathematical proof in 1936, but quite counter-intuitive, and bearing no resemblance to the large practical calculators of the 1930s. It put logic, not arithmetic, in the driving seat. This central observation is that instructions are themselves a form of data. This vital idea was exploited by Turing immediately in his detailed plan of 1945. The computer he planned would allow instructions to operate on instructions to produce new instructions. The logic of software takes charge of computing. As Turing explained, all known processes could now be encoded, and all could be run on a single machine. The process of encodement could itself be automated and made user-friendly, using any logical language you liked. This approach went far beyond the vision of others at the time.
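Hodges’ central point – that instructions are themselves a form of data, so instructions can operate on instructions to produce new instructions – can be made concrete with a toy sketch. The following Python fragment (my own illustration, not from the article; the three-opcode instruction set is invented for the example) runs a program stored in the same memory as its data, and one instruction rewrites another before it executes:

```python
# Toy sketch of the stored-program idea: code and data share one memory,
# so running code can rewrite code. Hypothetical 3-cell instruction format:
#   opcode 0 = halt;  1 = add, mem[a] += mem[b];  2 = copy, mem[a] = mem[b].

def run(memory):
    """Interpret instructions stored in `memory` itself, three cells each."""
    pc = 0
    while True:
        op, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
        if op == 0:
            return memory          # halt
        elif op == 1:
            memory[a] += memory[b]  # add
        elif op == 2:
            memory[a] = memory[b]   # copy (can target instruction cells too)
        pc += 3

mem = [1, 10, 11,   # add:  mem[10] += mem[11]
       2, 6, 12,    # copy: mem[6] = mem[12] -- rewrites the NEXT opcode to halt
       1, 10, 11,   # this add never runs: the copy above turned it into a halt
       0,           # mem[9]: padding
       5, 7, 0]     # data: mem[10] = 5, mem[11] = 7, mem[12] = 0 (halt opcode)

result = run(mem)
print(result[10])  # 12: 5 + 7, computed by the first instruction
```

The second instruction here produces a new instruction – exactly the property Hodges singles out as the counter-intuitive core of the universal machine, and the reason a single machine can run any encoded process.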
2. Dusting Off the Turing Test - Robert M. French
Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch each other. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything? Of course you did. But could a computer without a body and without human experiences ever answer that question or a million others like it? And even if recent revolutionary advances in collecting, storing, retrieving, and analyzing data lead to such a computer, would this machine qualify as “intelligent”?
Just over 60 years ago, Alan Turing published a paper on a simple, operational test for machine intelligence that became one of the most highly cited papers ever written (1). Turing, whose 100th birthday is celebrated this year, made seminal contributions to the mathematics of automated computing, helped the Allies win World War II by breaking top-secret German codes, and built a forerunner of the modern computer (2). His test, today called the Turing test, was the first operational definition of machine intelligence. It posits putting a computer and a human in separate rooms and connecting them by teletype to an external interrogator, who is free to ask any imaginable questions of either entity. The computer aims to fool the interrogator into believing it is the human; the human must convince the interrogator that he/she is the human. If the interrogator cannot determine which is the real human, the computer will be judged to be intelligent.
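Because the test French describes is operational, its structure can be sketched as code. The toy simulation below is purely illustrative (the function and the dummy “parties” are my own, not Turing’s or French’s): two hidden responders sit behind anonymous labels, the interrogator sees only the transcripts, and the machine “passes” if it is named as the human.

```python
import random

def turing_test(interrogator_guess, human_reply, machine_reply, questions):
    """Toy structural sketch of the imitation game (illustrative only).

    The two responders are hidden behind labels "A" and "B"; the
    interrogator sees only the transcripts and names the label it
    believes belongs to the human.
    """
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:           # hide who sits on which channel
        labels = {"A": machine_reply, "B": human_reply}
    transcript = {label: [fn(q) for q in questions]
                  for label, fn in labels.items()}
    guess = interrogator_guess(transcript)     # "A" or "B"
    return labels[guess] is machine_reply      # True: the judge was fooled

# Hypothetical parties, just to exercise the protocol:
human = lambda q: "well, it depends..."
machine = lambda q: "RESPONSE 0x2F"
# A judge who notices the stilted replies always catches this machine:
sharp_judge = lambda t: next(label for label, msgs in t.items()
                             if "depends" in msgs[0])
print(turing_test(sharp_judge, human, machine, ["Can you feel pain?"]))
# -> False: this machine never fools a judge who reads the transcripts
```

French’s finger-folding opener is precisely the kind of question this loop permits: the interrogator is free to ask anything, which is what makes the test so hard to pass without a body and human experience.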
And here is how Turing himself opened his 1950 paper, “Computing Machinery and Intelligence”: I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think.” The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
On a related theme, here is the opening of David Chalmers’ “The Singularity: A Philosophical Analysis”:
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the ‘singularity’.
By way of a taster, here is an excerpt, in his usual inimitably bold style, from Dan Dennett’s contribution entitled “The Mystery of David Chalmers.”
‘The Singularity’ is a remarkable text, in ways that many readers may not appreciate. It is written in an admirably forthright and clear style, and is beautifully organized, gradually introducing its readers to the issues, sorting them carefully, dealing with them all fairly and with impressive scholarship, and presenting the whole as an exercise of sweet reasonableness, which in fact it is. But it is also a mystery story of sorts, a cunningly devised intellectual trap, a baffling puzzle that yields its solution — if that is what it is (and that is part of the mystery) — only at the very end. It is like a ‘well made play’ in which every word by every character counts, retrospectively, for something. Agatha Christie never concocted a tighter funnel of implications and suggestions. Bravo, Dave.
With all the hipster hype accorded to Steve Jobs on his passing, two names are being overlooked – Dennis Ritchie and John McCarthy. Here is their obituary from The Economist.