Archive for June, 2010

New optimism about artificial intelligence

June 24, 2010

I’m a big believer in apologies. I try to apologize for something at least once a day. Everyone will express a false belief at some point in their lives. What separates rational from irrational people is that rational people acknowledge their mistakes and replace their old false beliefs with true ones, while irrational people make no such correction.

I lead in with that because I feel I owe a friend of mine something akin to an apology. My good friend Alex Turner and I were having a debate about artificial intelligence a few weeks ago. Alex told me he thought that computer scientists were only 100 years away from creating a mechanical brain comparable to a human brain. I said I was skeptical. So skeptical was I that I wrote a column about my skepticism for The Washington Evening Journal, which was published June 16 of this year.

In the article, I said Alex’s prediction of 100 years was off “by an order of magnitude” and that the feat he describes is closer to 1,000 years away. To be fair to Alex, I read an article just yesterday that made me more optimistic about the future and caused me to reconsider my initial chastisement of his prediction.

It was an article from the New York Times about IBM’s invention of a supercomputer that answers Jeopardy! clues. To my amazement, the computer (known as “Watson”) could even handle clues involving tricky wordplay, such as “Classic candy bar that’s a female Supreme Court justice,” to which it responds, “What is Baby Ruth Ginsburg?”

How is such a feat possible in a matter of a few seconds? According to the Times article:

New York Times: The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it.
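The technique the Times describes, learning which words are statistically associated with a subject, can be sketched with simple co-occurrence counts. This is a toy illustration of the general idea under my own assumptions, not IBM’s actual method:

```python
from collections import Counter

def word_associations(documents, topic):
    """Count how often each word appears in documents that
    also mention the topic word (a crude association measure)."""
    counts = Counter()
    for doc in documents:
        words = doc.lower().split()
        if topic in words:
            counts.update(w for w in words if w != topic)
    return counts

docs = [
    "the supreme court justice wrote the opinion",
    "the court ruled on the case",
    "a candy bar melted in the sun",
]
# Words most associated with "court" in this tiny corpus
print(word_associations(docs, "court").most_common(3))
```

With huge piles of real documents instead of three toy sentences, the same counting idea (refined with better statistics) tells you that “justice” goes with “court” and “candy” does not.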

But how does Watson generate an answer from statistical correlations between words, and why wasn’t this possible years ago?

New York Times: Most question-answering systems rely on a handful of algorithms, but Ferrucci (an IBM programmer) decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one.
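The ensemble idea in that quote, where many algorithms propose candidate answers and agreement raises plausibility, can be sketched as a weighted vote. The scorers and confidence numbers below are hypothetical; Watson’s real pipeline is far more elaborate:

```python
from collections import defaultdict

def rank_answers(algorithm_outputs):
    """Combine (answer, confidence) pairs from many algorithms.
    Answers proposed independently by several algorithms accumulate
    more total evidence and therefore rank higher."""
    scores = defaultdict(float)
    for candidates in algorithm_outputs:
        for answer, confidence in candidates:
            scores[answer] += confidence
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Three hypothetical algorithms analyze the same clue in different ways
outputs = [
    [("Baby Ruth Ginsburg", 0.6), ("Oh Henry Kissinger", 0.3)],
    [("Baby Ruth Ginsburg", 0.5)],
    [("Oh Henry Kissinger", 0.4), ("Baby Ruth Ginsburg", 0.2)],
]
print(rank_answers(outputs)[0][0])  # the consensus answer ranks first
```

The point of the sketch is the one the article makes: no single scorer needs to be right, because agreement across many independent scorers is itself evidence.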

I have to admit that computers are farther along than I thought. In my original column, I acknowledged that computers have become good at parsing sentences (identifying the noun and verb, for instance), but I was skeptical that they were advanced enough to detect the kinds of subtleties and hidden wordplay found in a Jeopardy! clue. I wrote:

Me: Can you imagine trying to program a computer to detect sarcasm? Backhanded compliments? Romantic advances? And we haven’t even left the field of communication! Humans can detect all of those things with ease. The super-computers of 2010 can do nothing of the kind.

I’d bet computers are still decades away from distinguishing between straightforward and sarcastic sentences, but they are much closer to achieving those kinds of goals than I previously thought.

Perhaps a 1,000-year wait for a human-like artificial brain is a bit too pessimistic after all.


A prediction on the popularity of retributivism

June 15, 2010

Some occupations require you to impose costs on other people. Principals send students to detention. Police put people in jails. Soldiers kill people.

Perhaps all of those examples are justifiable on utilitarian grounds because the act, while imposing a cost on someone else, is the only way to prevent even greater suffering in the future. With this in mind, it is possible for someone to intentionally impose costs on others and still be a rational utilitarian.

Possible, but unlikely. I say that because people like to think they are doing good. Even Hitler was not intentionally trying to make the world a worse place. Someone who engages in acts whose immediate effect is to produce suffering has a difficult time believing he is doing good. He must have confidence that his actions will produce utility in the long run, even if it is for people far away or for people not yet born. When the costs of your action are known and immediate, while the benefits are distant and uncertain, convincing yourself that you are one of the good guys is an uphill battle.

In this way, cost-imposing occupations cause cognitive dissonance, which is:

A condition of conflict or anxiety resulting from inconsistency between one’s beliefs and one’s actions, such as opposing the slaughter of animals and eating meat. [Source: American Heritage Dictionary]

In this case, the cognitive dissonance is caused by the conflict between the person’s desire to do good and their occupation, which asks them to do bad (i.e., impose costs). I believe the way most people in these occupations overcome their cognitive dissonance is to convince themselves that they are not in fact imposing costs at all, but are rather meting out justice. They see detention, jail and death not as costs to be mourned but rather as benefits to be celebrated. In short, they become irrational utilitarians, placing the pain of the “guilty” on the wrong side of the cost/benefit ledger.

Personal Anecdote:

I was a resident advisor in a college dormitory called Friley Hall when I was a junior at Iowa State University from 2006-2007. One of my tasks was to enforce the drinking age and other alcohol-related rules. That fall, even before school started, a co-worker and I discovered a roomful of underage students consuming “strong spirits.” We did our duty, which was to write down the names of the perpetrators, of whom there were eight.

The next day, someone on our staff got the idea to keep score of how many people our staff in Lower Friley had “written up,” or documented, versus the number of people written up by our sister staff in Upper Friley. After one day on the job, the score was 8-1 in favor of Lower Friley.

Why would someone on the staff do such a thing? Why would they think it was funny to have a running tally of the number of kids who were embarrassed in front of their peers, and who would later be disciplined for the infraction? I suspect that it was for the reason I described above. That staff member didn’t think of the write-ups as costs to be borne by the students but rather as an accomplishment to be boasted of.

I earlier wrote that the success of prediction markets has to do with giving knowledgeable people an avenue to get rich, and in the process share the information they have about an event’s probability. With this in mind, I will try to make more predictions on this blog, as a way of showing that I have confidence in the theories I propose. What I risk is not money (sorry) but reputation.

Based on the preceding analysis, I predict that people who are employed in cost-imposing occupations are more likely to have a perverted moral sense. I do not mean that pejoratively; what I mean is that they are more likely to confuse costs and benefits.

I’m not exactly sure how this hypothesis can be tested, but I have an idea. There is a philosophy of punishment known as “retributivism” which holds that the “guilty” should be punished because they deserve it.

As stated by law professor Michael Moore, “Retributivism is the thesis that punishing those who deserve it is an intrinsic good, that is, something good in itself and not good because it causes something else.” This is in stark contrast to the utilitarian approach to punishment, which treats all punishment as a cost that can only be justified by producing benefits in the future.

To make my prediction more specific, I predict that cost-imposers will be more receptive to the retributivist philosophy of punishment than the utilitarian one, because the retributivist philosophy produces less cognitive dissonance for them. Perhaps this could be tested through survey questions such as “Should criminal X be punished even if doing so does not reduce crime or ease the victim’s pain?”

In addition, an important implication of this theory, if true, is that asking people to impose costs will reduce utility even further in the long-run by perverting the moral sense of the cost-imposer, causing him to view costs as benefits. That is a good reason why we should be leery of deliberately imposing costs, and especially of asking a person to impose costs for a living.

I’m willing to hear evidence against this theory if you have it, so please don’t be afraid to leave a comment.

P.S. I was the staff member who got the idea to keep score of the write-ups.

Eviction Day

June 13, 2010

The second in a two-part series about the troubles I am in here in Washington (with special appearances by my friends Grant Olson and Mahmoud Karim).

Take a tour of downtown Washington (Iowa)

June 6, 2010

This is a video I recorded earlier in the day in which I showcase a few of the landmarks in Washington’s downtown area. Stay tuned for more videos featuring historic places in the city.

Prediction markets

June 3, 2010

Last week I wrote about how voters’ poor incentives to gain information lead to their ignorance of political issues. The implication was that better incentives would lead to better information. The question is: how do we improve incentives? And is there any evidence that people with an incentive to collect information actually collect it?

A number of opinion surveys have shown that the general public has great difficulty answering questions correctly about the federal government’s budget. I suggested that this is because each individual voter has very little say about what government does – especially the federal government – owing to the small probability of affecting an election. In short, voters don’t know what’s happening on Capitol Hill because there is little incentive to research it. If they had more of a stake in the outcome of the election, I think they would try harder to obtain information about government.

How do we give people a stake in the outcome of some event, such as an election? One way is to give the person a pecuniary reward for possessing correct information. If people know they can benefit financially from being smart, that is a good incentive to become smart. Luckily for us, there is already an institution that rewards people for correct beliefs, and it is known as a prediction market.

A prediction market is a speculative market in which participants bet on whether a specific future event will occur. The most famous example is the market run by a company known as Intrade. Intrade allows its customers to bet on the outcome of political races and other events from the world of entertainment. For instance, Intrade allowed users to bet on who was going to win American Idol, whose season concluded earlier this week. Customers trade a contract that pays the holder $10 if the event comes true; here, that means if the contestant wins the competition. When there are many contestants early in the season, the odds of any one contestant winning are slim, which is why each contestant’s contract typically trades at a low price. Exactly how high or low that price goes is a function of what the bettors believe the contestant’s probability of winning to be.
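Under the contract structure described above, with a $10 payout if the event comes true, a contract’s fair price is just the believed probability times the payout. That means a market price can be read backwards as a probability. A minimal sketch:

```python
PAYOUT = 10.00  # dollars paid to the holder if the event comes true

def fair_price(probability):
    """Expected value of one contract for someone who believes
    the event occurs with the given probability."""
    return probability * PAYOUT

def implied_probability(price):
    """Reading the market backwards: the going price implies
    the bettors' collective probability estimate."""
    return price / PAYOUT

# A contestant the market gives a 25% chance should trade near $2.50
print(fair_price(0.25))
# A contract trading at $8.00 implies an 80% chance of winning
print(implied_probability(8.00))
```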

The contestant Crystal Bowersox was considered a favorite to win the competition a month ago, both by the show’s judges and by customers of Intrade. When there were six contestants remaining in late April, Bowersox was a 7-3 favorite to win the competition, according to the bets Intrade customers were making. By mid-May, Bowersox was an even-money bet. After Bowersox and fellow contestant Lee DeWyze faced off in the final performance Tuesday night, Intrade users had shifted their allegiances and were giving 4-1 odds that DeWyze would win. The results of the fan voting were announced Wednesday and DeWyze was declared the winner.
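The odds quoted above convert to probabilities in the usual way: odds of a-to-b in favor imply a probability of a/(a+b). Checking the figures from the Bowersox and DeWyze race:

```python
def odds_to_probability(chances_for, chances_against):
    """Odds of chances_for : chances_against in favor of an event
    imply a probability of chances_for / (chances_for + chances_against)."""
    return chances_for / (chances_for + chances_against)

print(odds_to_probability(7, 3))  # Bowersox in late April: 0.7
print(odds_to_probability(1, 1))  # even money by mid-May: 0.5
print(odds_to_probability(4, 1))  # DeWyze after the finale: 0.8
```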

How could Intrade customers know with such certainty that DeWyze was going to win? How could they know how millions of people would vote on the final day? The answer is that none of them knew. In fact, each bettor only had access to a tiny sliver of information. Through a prediction market, a person can use that sliver of information to inform a bet.

Suppose that the American Idol judges panned a contestant’s performance, but all the people you talked to at the local gym said they liked it. That is useful information. If everyone else at Intrade is putting very low odds on that contestant winning, the piece of information you have suggests the other bettors are underestimating the contestant’s chances. That means there is an opportunity for you to put your knowledge to work and make a buck by placing a bet on that contestant. When thousands of people do the same thing – making bets on the theory that they possess information other people don’t – the resulting odds reflect the collective knowledge of the betting pool’s members. By rewarding accurate bets and punishing inaccurate ones, the system provides an incentive to bet only when you have good information. Because prediction markets contain (mostly) good information, the betting odds you see in such markets closely reflect an event’s true probability, as evidenced by the markets’ proven track record.
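The reasoning in that paragraph, that you should bet only when your private information says the market price is wrong, is an expected-value calculation. A sketch, reusing the hypothetical $10-payout contract:

```python
PAYOUT = 10.00  # dollars paid if the event comes true

def expected_profit(your_probability, market_price):
    """Expected profit from buying one contract at market_price,
    given your own estimate of the event's probability."""
    return your_probability * PAYOUT - market_price

# The market prices a contestant at $2.00 (an implied 20% chance),
# but the chatter at your gym convinces you the true chance is 40%.
print(expected_profit(0.40, 2.00))  # positive: the bet is worth making
# If your estimate matches the market's, there is no edge to exploit.
print(expected_profit(0.20, 2.00))  # roughly zero
```

When thousands of bettors each make this calculation with their own sliver of information, the trades that result push the price toward the event’s true probability.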

For more information, watch this report the ABC series 20/20 did on prediction markets.