Many people can lay claim to the title "father of the computer": John Mauchly, Atanasoff and Berry, even Charles Babbage, who died in 1871. But British mathematician Alan Turing's claim is as good as any. Turing's university studies under Max Newman led him to the "halting problem": the question of whether one can tell in advance whether a given algorithm will finish its work in a finite number of steps. For instance, an algorithm could be written to test every possible factorization of a number to determine whether it is prime, which would take a large but finite number of steps. An algorithm attempting to find an odd perfect number (a number, like 6 = 1 + 2 + 3, that equals the sum of its divisors other than itself), however, might run forever, or it might not; mathematicians haven't been able to find one, and can demonstrate that none smaller than 10^300 exists (a number many orders of magnitude larger than the number of atoms in the universe), but they haven't been able to prove that there are none to be found. Solving the halting problem for the odd-perfect-number-finding algorithm (which can reportedly be written in just 43 characters of C code; a readable sketch appears after the quotation below) would prove the existence or non-existence of such a number, and a general solution would let mathematicians dispatch an enormous number of currently intractable problems. But through a jujitsu similar to that used by Gödel in his incompleteness theorem, Turing demonstrated that it couldn't be done. There is no general solution to the halting problem; we can wind up a mathematical clock without knowing whether it will wind down.

In the course of that proof, Turing developed the concept of the Turing machine, a sort of idealized computer (back when "computer" described a person with an adding machine) that remains the basic building block of theoretical computer science. But a hugely important proof, even one linked to Gödel's (beloved and often misunderstood) theorem, doesn't always get one namedropped in Neuromancer and used as a character by Neal Stephenson. Alan Turing's fame in popular culture comes not from his mathematical work but from a bit of philosophy.

Turing had been a chess enthusiast since his schoolboy days, and while working with a motley band of linguists, mathematicians, and crossword enthusiasts on the British codebreaking effort at Bletchley Park during World War II, he was exposed to electronic computers for the first time. He took to them immediately, and

[i]n 1945 Turing joined the National Physical Laboratory (NPL) in London, his brief to design and develop an electronic stored-program digital computer for scientific work. (Artificial Intelligence was not far from Turing’s thoughts: he described himself as ‘building a brain’ and remarked in a letter that he was ‘more interested in the possibility of producing models of the action of the brain than in the practical applications to computing’.) John Womersley, Turing’s immediate superior at NPL, christened Turing’s proposed machine the Automatic Computing Engine, or ACE, in homage to Babbage’s Difference Engine and Analytical Engine.
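(A brief aside for the curious: here is a readable, if far longer than 43-character, C sketch of the odd-perfect-number search mentioned above. It's an illustration of the idea rather than a serious search program; a real hunt would need arbitrary-precision arithmetic, since we already know nothing below 10^300 qualifies.)

```c
#include <stdio.h>

/* Step through the odd numbers forever, halting only if some n equals
 * the sum of its proper divisors. Whether this loop ever terminates is
 * exactly the open question described above. */
int main(void) {
    for (unsigned long long n = 3; ; n += 2) {
        unsigned long long sum = 1;        /* 1 divides everything       */
        for (unsigned long long d = 3; d * d <= n; d += 2) {
            if (n % d == 0) {
                sum += d + n / d;          /* divisors come in pairs     */
                if (d * d == n)
                    sum -= d;              /* don't double-count sqrt(n) */
            }
        }
        if (sum == n) {                    /* n is an odd perfect number */
            printf("%llu\n", n);
            return 0;
        }
    }
}
```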

Turing was not just a theorist; he was a hands-on technician with a theoretical bent. It's hardly surprising that his thoughts, like those of his American counterpart Claude Shannon (the Bell Labs researcher who invented information theory), would turn to computer chess. People had dreamed of chess-playing machines since the days of Kempelen's Turk, if not earlier. But the Turk was a fraud, a clockwork frame with a human hidden inside; Turing was the first to do it for real, writing a "paper machine" (an algorithm for playing chess, executed by hand) in 1945. Running the computations himself, Turing needed half an hour to make each move, and the machine played a weak game, but it was nonetheless chess. (Shannon, an intrepid gamesman, would greatly improve on Turing's approach in 1949 by showing how to organize the search as a minimax decision tree.) Around 1950, Turing worked his paper machine up into a complete chess program, though he never managed to get it running on an actual computer.
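(Shannon's idea is easier to see on a toy game than on chess. The sketch below, a hypothetical illustration rather than anything Shannon or Turing wrote, applies minimax to a simple take-away game: the recursion assumes each player picks the move whose subtree scores best for them, which is the whole trick.)

```c
#include <stdio.h>

/* Minimax on a toy game: players alternately remove 1-3 stones from a
 * pile, and whoever takes the last stone wins. Scores are from the
 * first (maximizing) player's point of view: +1 win, -1 loss. */
int minimax(int stones, int maximizing) {
    if (stones == 0)
        /* The previous player just took the last stone and won. */
        return maximizing ? -1 : +1;

    int best = maximizing ? -2 : +2;       /* worse than any real score */
    for (int take = 1; take <= 3 && take <= stones; take++) {
        int score = minimax(stones - take, !maximizing);
        if (maximizing ? score > best : score < best)
            best = score;                  /* max for us, min for them  */
    }
    return best;
}

int main(void) {
    for (int pile = 1; pile <= 12; pile++)
        printf("pile of %2d: first player %s with perfect play\n",
               pile, minimax(pile, 1) > 0 ? "wins" : "loses");
    return 0;
}
```

Chess trees are far too deep to search to the end like this, which is why Shannon proposed cutting the recursion off at a fixed depth and scoring the resulting positions with a heuristic evaluation function based on material, mobility, and the like.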

If a computer could play chess, what couldn't it do? Turing's own invention, the Turing machine, laid out the computational limits of the new devices, but if a machine could play chess, could it think? The question would haunt researchers, including some very smart ones, for years, but in a 1950 paper Turing laid out an empirical test. Put a person in a room with a teletype machine. Via the machine, the person can send and receive messages and carry on a conversation with an unseen party, sometimes a human being and sometimes a computer. If the person in the room cannot reliably tell the computers from the people, Turing argued, there is no good reason to say the computer isn't thinking. The Turing Test has drawn its share of rebuttals, objections largely based on what "thinking" really signifies, but it provides a nice testing ground. Despite more than half a century of work, nobody has yet come close to writing a program that will pass the Turing test and claim the $100,000 grand prize sponsored by inventor Hugh Loebner.

People are predisposed to recognize patterns. In the mid-1960s, MIT's Joseph Weizenbaum hooked a primitive sentence parser up to a set of canned responses and created a program called ELIZA, designed to parrot statements back questioningly and ask about feelings in the manner of a Rogerian psychotherapist (or, in certain cases, a priest hearing confession). The technique was amazingly primitive, but it fooled a surprising number of people, if only briefly. More advanced efforts along the same lines, such as MGonz (a program that does an amazing job of mimicking a rude, drunk 17-year-old wasting time on the internet), are better at the illusion of sentience. But nobody has been able to create a real conversational partner: something capable of expressing an opinion about why the sky is blue, the population of the U.S., and whether Julie Newmar or Eartha Kitt was the better Catwoman (or at least mimicking the act of expressing an opinion, which, according to the philosophical objections to the Turing test, is not the same thing). The winners of Loebner's annual contest are enormously clever programs, but no one would mistake them for real people. In fact, transcripts that look too much like they have people behind them probably do: the so-called ChatNannies developed by Jim Wightman bear an uncanny similarity to previous Loebner Prize entrants, raising suspicions that Wightman's more impressive demonstrations were simply Maelzel's Turk redone, a mechanical interface with a human mind guiding it.
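(ELIZA's trick was little more than keyword matching against canned templates. The toy below uses hypothetical rules, nothing like Weizenbaum's actual script, which also reflected pronouns and ranked keywords, but it gives the flavor of how thin the machinery can be while still sounding attentive.)

```c
#include <stdio.h>
#include <string.h>

/* An ELIZA-style responder: scan the input for a keyword and emit a
 * canned, feeling-oriented reply; otherwise fall back on a neutral
 * prompt that invites the user to keep talking. */
static const char *rules[][2] = {
    {"mother",  "Tell me more about your family."},
    {"always",  "Can you think of a specific example?"},
    {"because", "Is that the real reason?"},
    {"feel",    "Why do you feel that way?"},
};

int main(void) {
    char line[256];
    printf("> ");
    while (fgets(line, sizeof line, stdin)) {
        const char *reply = "Please go on.";
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
            if (strstr(line, rules[i][0])) {
                reply = rules[i][1];
                break;
            }
        }
        printf("%s\n> ", reply);
    }
    return 0;
}
```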

People can usually recognize intelligence; we often recognize criminal stupidity only in retrospect. Alan Turing was gay. When he was convicted of having sexual relations with a man, he could not plead his war work in mitigation: his cracking of the German Enigma cipher machine had made him the single most important British civilian in the efforts against the German Navy, but Bletchley Park's very existence remained classified for decades after his arrest. He was dismissed from his government codebreaking work as a security risk and injected with estrogen as a form of chemical castration. The estrogen had terrible side effects, and fifty years ago this month, Alan Turing poisoned an apple and bit down. He was forty-one, and he apparently chose that method of suicide in an effort to let his mother believe his death was an accident. He was an intellectual omnivore who was friends with Wittgenstein, a man who danced from mathematics to philosophy to computer science before there was such a field, a scientist who in the prime of his life was studying quantum physics and the relationship of mathematics to biology. Alan Turing would have been ninety-two years old today. Someday, researchers may produce something at the level of the HAL 9000 (or, hopefully, something kindlier, perhaps C-3PO). But how much further would research have gotten with the assistance of Alan Turing and his fine mind, that unimaginably tuned thinking machine?