I’m a big believer in apologies. I try to apologize for something at least once a day. Everyone will express a false belief at some point in their lives. What separates rational people from irrational ones is that rational people acknowledge their mistakes and replace their old false beliefs with true ones, while irrational people make no such correction.
I lead in with that because I feel I owe a friend of mine something akin to an apology. My good friend Alex Turner and I were having a debate about artificial intelligence a few weeks ago. Alex told me he thought that computer scientists were only 100 years away from creating a mechanical brain comparable to a human brain. I said I was skeptical. So skeptical was I that I wrote a column about my skepticism for The Washington Evening Journal, which was published June 16 of this year.
In the article, I said Alex’s prediction of 100 years was off “by an order of magnitude” and that the feat he describes is closer to 1,000 years away. To be fair to Alex, I read an article just yesterday that made me more optimistic about the future and caused me to reconsider my initial chastisement of his prediction.
It was an article from the New York Times about IBM’s invention of a supercomputer that answers Jeopardy! questions. To my amazement, the computer (known as “Watson”) could even answer questions that involved tricky wordplay, such as “Classic candy bar that’s a female Supreme Court justice,” to which it responded “What is Baby Ruth Ginsburg?”
How is such a feat possible in a matter of a few seconds? According to the Times article:
New York Times: The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it.
But how does Watson generate an answer from statistical correlations between words, and why wasn’t this possible years ago?
New York Times: Most question-answering systems rely on a handful of algorithms, but Ferrucci (an IBM programmer) decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one.
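The ensemble idea Ferrucci describes can be sketched in a few lines. To be clear, this is a hypothetical illustration of the voting principle, not Watson’s actual code: each “algorithm” proposes candidate answers, and candidates are ranked by how many algorithms agree on them.

```python
from collections import Counter

def rank_candidates(candidate_lists):
    """Rank answers by how many algorithms proposed them (one vote each)."""
    votes = Counter()
    for candidates in candidate_lists:
        votes.update(set(candidates))  # an algorithm votes at most once per answer
    return votes.most_common()

# Hypothetical outputs from three different question-answering algorithms:
outputs = [
    ["Baby Ruth Ginsburg", "Ruth Bader Ginsburg"],
    ["Baby Ruth Ginsburg", "Baby Ruth"],
    ["Baby Ruth Ginsburg"],
]

best, score = rank_candidates(outputs)[0]
print(best, score)  # the answer the most algorithms converged on
```

The point of the design, as the article explains it, is robustness: no single algorithm has to be right, so long as the correct answer keeps surfacing across many independent approaches.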
I have to admit that computers are farther along than I thought. In my original column, I acknowledged that computers have become good at parsing sentences (identifying the noun and verb, for instance), but I was skeptical that they were advanced enough to detect the kinds of subtleties and hidden clues found in a Jeopardy! clue. I wrote:
Me: Can you imagine trying to program a computer to detect sarcasm? Backhanded compliments? Romantic advances? And we haven’t even left the field of communication! Humans can detect all of those things with ease. The super-computers of 2010 can do nothing of the kind.
I’d bet computers are still decades away from distinguishing between straightforward and sarcastic sentences, but they are much closer to achieving those kinds of goals than I previously thought.
Perhaps a 1,000 year wait for a human-like artificial brain is a bit too pessimistic after all.