Monday, October 31, 2011

Naturally Artificial Intelligence

To illustrate the Abstraction issue I raised in Learning....Slowly, here are a couple of examples from my study of probability. To lay the groundwork: there are three basic operations, OR, AND, and GIVEN, and for the most part they are defined in terms of each other in a tight little tautology -- see my terminology summary here. Every time I tried to figure out what they _really_ did I ended up in some kind of sink-hole loop. This was exacerbated by the only two fully worked problems in the AIMA textbook, in which the behavior of OR and AND distinctly diverged from the definitions.

So I tortured myself for about a week with: "What are they trying to tell me?" Then I took a couple of showers...

The first problem was OR. The definition sums a set of values and subtracts the AND of those values, but the book example just summed a buncha things and was done with it. After the first shower I realized that the AND part is there to eliminate double-counting of certain values, and that "they" had silently elided it because "they" had also silently elided the actual double count that would have been subtracted out -- when the events can't both happen, the AND term is zero, so there's nothing to subtract. A little note to that effect would have saved me a week's worry....maybe.
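Here's a little Python sketch of what I mean (my own toy example with a fair die, not one from the book), showing both the double-counting case and the disjoint case where the AND term silently vanishes:

```python
# OR via inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B).
# Events are sets of outcomes of a fair six-sided die, each with P = 1/6.
from fractions import Fraction

p = Fraction(1, 6)
outcomes = set(range(1, 7))

A = {1, 2, 3}                         # event A: roll is 1-3
B = {2, 4, 6}                         # event B: roll is even

p_A = p * len(A)                      # 3/6
p_B = p * len(B)                      # 3/6
p_A_and_B = p * len(A & B)            # only outcome 2 is in both: 1/6

# Naively summing double-counts outcome 2; subtracting AND fixes it.
p_A_or_B = p_A + p_B - p_A_and_B      # 5/6
assert p_A_or_B == p * len(A | B)

# When the events are disjoint, the AND term is zero, so "just summing
# a buncha things" really is the whole story -- the book's silent case.
C = {5, 6}                            # disjoint from A
assert p * len(A & C) == 0
assert p * len(A) + p * len(C) == p * len(A | C)
```

Running it, the assertions pass: the subtraction only matters when the events overlap.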

The second problem was AND. The definition shows a product of values, i.e., multiplying them all. The book example showed a sum... Well, WTF?! I went around and around on that and complained to anyone who showed any semblance of interest -- interest that died fairly quickly with no positive results. During the second shower it occurred to me that I had only seen addition in one other place in this whole mess, and that was in calculating the Total Probability of a set of variables. Since probabilities were usually specified as Conditionals -- the probability that X is true GIVEN that Y is known to be true -- this involved multiplying a buncha values (one for each variable of interest) that were "conditioned" on Y being true, then multiplying a buncha different values, conditioned on Y being false, and then SUMMING the results... Eureka! That's what the (fkrs) were doing in the book: the values they were working with came from a table where all the multiplying had already been done, so all they had to do was add them up. Jeez, maybe just another little note would have been in order? Or maybe I wasn't supposed to be looking at it so closely?
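Spelled out in Python with some made-up numbers (mine, not the book's), the multiply-then-sum pattern looks like this:

```python
# AND via the product rule, then Total Probability by summing over
# the cases of Y: P(X) = P(X|Y)*P(Y) + P(X|not Y)*P(not Y).
# All numbers below are invented purely for illustration.
p_Y = 0.3                 # P(Y)
p_X_given_Y = 0.8         # P(X | Y)
p_X_given_not_Y = 0.1     # P(X | not Y)

# The AND step is the multiplication the definition promises:
p_X_and_Y = p_X_given_Y * p_Y                    # P(X and Y)
p_X_and_not_Y = p_X_given_not_Y * (1 - p_Y)      # P(X and not Y)

# The sum in the book shows up here: adding the already-multiplied
# entries (as if read straight out of a joint-probability table).
p_X = p_X_and_Y + p_X_and_not_Y
assert abs(p_X - 0.31) < 1e-9
```

So the table entries the book was summing were products in disguise; the multiplication hadn't disappeared, it had just already happened.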

So...The point is, my (slightly) Intelligent Behavior was the result of hot water....no, no, it was the result of having a higher-level view of the problem and seeing patterns that were not apparent in the details. This is what I'm trying to call Abstraction. Of course this "ability" is probably the result of billions-and-billions of mindless iterations in some very low-level neural processing, just like looking at a map integrates a huge amount of visual information with a huge amount of "common sense" information to come up with a route. And this is what Krakauer was getting at in his talks: what we really want to call Intelligence is so far above what our poor little machines are doing these days that the scales need to be re-calibrated all the way down.
