
How to Build a Digital Brain

Jeff Hawkins interview.

Here’s what we do inside Grok: we build this 60,000-neuron neural network that emulates a very small part of one layer of the neocortex. It’s about a thousandth the size of a mouse brain and a millionth the size of a human brain. So: not super-intelligent, but we’re using the principle by which the brain does all the inference and motor behavior. I’m very confident that this sequence memory we use is the core of how all intelligence works. The brain takes in streaming data that are noisy and constantly changing, and it has to figure out what the patterns are and make predictions from them.
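To make the idea concrete, here is a deliberately tiny sketch of online sequence prediction. This is not Numenta's actual Grok/HTM code; it just counts symbol-to-symbol transitions in a stream and predicts the most frequent successor. The class name and the example stream are illustrative.

```python
# Toy sketch of streaming sequence prediction (NOT Numenta's Grok/HTM):
# learn transition counts online, then predict the likeliest next symbol.
from collections import defaultdict

class ToySequenceMemory:
    def __init__(self):
        # transitions[a][b] counts how often symbol b followed symbol a
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        """Consume one symbol from the noisy stream, updating counts online."""
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        """Predict the most frequent successor of the last symbol seen."""
        followers = self.transitions.get(self.prev)
        if not followers:
            return None
        return max(followers, key=followers.get)

memory = ToySequenceMemory()
for s in "abcabcabXabc":   # a repeating pattern with one noisy symbol
    memory.observe(s)
print(memory.predict())    # prints 'a': the learned pattern says 'a' follows 'c'
```

Even with a noisy symbol in the stream, the majority pattern dominates the counts, which is the (vastly simplified) spirit of learning predictions from noisy streaming data.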

Here is a more detailed interview.


Philosophy will be the key that unlocks artificial intelligence

David Deutsch on AI

In my view it is because, as an unknown sage once remarked, “it ain’t what we don’t know that causes trouble, it’s what we know that just ain’t so.” I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors.


Turing Test

Another of probably several Turing-related posts in the run-up to the summer commemorations of Turing’s birth and death (23 June 1912 – 7 June 1954). The following is from Science Magazine, 13 April 2012, Vol. 336, no. 6078; I paste in a couple of paragraphs from each paper as a preview. Here is a Wired article based on the articles below.

1. Beyond Turing’s Machines – Andrew Hodges

In marking Alan Turing’s centenary, it’s worth asking what was his most fundamental achievement and what he left for future science to take up when he took his own life in 1954. His success in World War II, as the chief scientific figure in the British cryptographic effort, with hands-on responsibility for the Atlantic naval conflict, had a great and immediate impact. But in its ever-growing influence since that time, the principle of the universal machine, which Turing published in 1937 (1), beats even this.

When, in 1945, he used his wartime technological knowledge to design a first digital computer, it was to make a practical version of that universal machine (2). All computing has followed his lead. Defining a universal machine rests on one idea, essential to Turing’s mathematical proof in 1936, but quite counter-intuitive, and bearing no resemblance to the large practical calculators of the 1930s. It put logic, not arithmetic, in the driving seat. This central observation is that instructions are themselves a form of data. This vital idea was exploited by Turing immediately in his detailed plan of 1945. The computer he planned would allow instructions to operate on instructions to produce new instructions. The logic of software takes charge of computing. As Turing explained, all known processes could now be encoded, and all could be run on a single machine. The process of encodement could itself be automated and made user-friendly, using any logical language you liked. This approach went far beyond the vision of others at the time.
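Hodges’ point, that instructions are themselves a form of data, is easy to demonstrate. Below is a minimal, hedged sketch of a universal-machine-style simulator: the instruction table is an ordinary data structure, so one `run` function executes any machine you hand it. The table format and the example machine are my own illustration, not Turing’s notation.

```python
# Minimal sketch of the stored-program idea: an instruction table is just
# data, so a single simulator can run any machine described by such a table.
def run(table, tape, state="start"):
    """table maps (state, symbol) -> (new_state, symbol_to_write, move)."""
    pos, cells = 0, dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(pos, "_")          # '_' is the blank symbol
        state, write, move = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A tiny machine expressed purely as data: invert a binary string.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run(invert, "10110"))  # prints 01001_ (the trailing blank written at halt)
```

Because `invert` is plain data, a program could just as well generate or rewrite such tables before feeding them to `run`, which is exactly the "instructions operating on instructions" that Turing exploited.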

2. Dusting Off the Turing Test – Robert M. French

Hold up both hands and spread your fingers apart. Now put your palms together and fold your two middle fingers down till the knuckles on both fingers touch each other. While holding this position, one after the other, open and close each pair of opposing fingers by an inch or so. Notice anything? Of course you did. But could a computer without a body and without human experiences ever answer that question or a million others like it? And even if recent revolutionary advances in collecting, storing, retrieving, and analyzing data lead to such a computer, would this machine qualify as “intelligent”?

Just over 60 years ago, Alan Turing published a paper on a simple, operational test for machine intelligence that became one of the most highly cited papers ever written (1). Turing, whose 100th birthday is celebrated this year, made seminal contributions to the mathematics of automated computing, helped the Allies win World War II by breaking top-secret German codes, and built a forerunner of the modern computer (2). His test, today called the Turing test, was the first operational definition of machine intelligence. It posits putting a computer and a human in separate rooms and connecting them by teletype to an external interrogator, who is free to ask any imaginable questions of either entity. The computer aims to fool the interrogator into believing it is the human; the human must convince the interrogator that he/she is the human. If the interrogator cannot determine which is the real human, the computer will be judged to be intelligent.
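As a rough illustration of the test’s structure (not a working Turing test), the sketch below hides two respondents behind the same interface and asks an interrogator to name the human. All function names are hypothetical, and the fixed question list simplifies Turing’s adaptive back-and-forth.

```python
# Hedged sketch of the imitation game's structure; names are illustrative.
import random

def machine_reply(question):
    # Placeholder for any chatbot you want to test.
    return "That is an interesting question."

def human_reply(question):
    # A real person answers at the keyboard.
    return input(f"(human) {question} > ")

def imitation_game(questions, interrogate):
    """Run one round: hide which respondent is which, then ask for a verdict."""
    replies = [machine_reply, human_reply]
    random.shuffle(replies)                      # interrogator can't see identities
    transcript = {label: [(q, fn(q)) for q in questions]
                  for label, fn in zip("AB", replies)}
    guess = interrogate(transcript)              # interrogator returns "A" or "B"
    truth = "A" if replies[0] is human_reply else "B"
    return guess == truth                        # True if the human was identified
```

The interesting part is everything this sketch leaves out: French’s finger exercise suggests the kinds of embodied questions an interrogator could ask that no transcript-only machine may be able to answer.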