Monday, October 31, 2011

Naturally Artificial Intelligence

To illustrate the Abstraction issue I raised in Learning...Slowly, here are a couple of examples from my study of probability. To lay the groundwork: there are three basic operations -- OR, AND, and GIVEN -- and for the most part they are defined in terms of each other in a tight little tautology -- see my terminology summary here. Every time I tried to figure out what they _really_ did I ended up in some kind of sink-hole-loop. This was exacerbated by the only two fully worked problems in the AIMA textbook, in which the behavior of OR and AND distinctly diverged from the definitions.
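For anyone reading along without that summary handy, the tight little tautology itself looks roughly like this (the standard textbook forms, written from memory -- not a paste from my summary page):

    P(A \lor B)  = P(A) + P(B) - P(A \land B)      % OR
    P(A \land B) = P(A \mid B) \, P(B)             % AND (the product rule)
    P(A \mid B)  = P(A \land B) / P(B)             % GIVEN (the conditional)

AND is defined using GIVEN, GIVEN is defined using AND, and OR needs AND to avoid double counting -- which is exactly the loop.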

So I tortured myself for about a week with: "What are they trying to tell me?" Then I took a couple of showers...

The first problem was OR. The definition summed up a set of values and subtracted the AND of those values, but the book example just summed a buncha things and was done with it. After the first shower I realized that the AND part was there to eliminate double-counting of certain values, and that "they" had silently elided it because "they" had also silently elided the actual double count that would have been subtracted out. A little note to that effect would have saved me a week's worry...maybe.
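Here's a toy version of what I finally convinced myself was going on. The numbers are mine, made up for illustration -- not the book's problem:

    # Toy joint distribution over two binary variables A and B (made-up numbers).
    # P(A and B) is buried inside both P(A) and P(B), so a plain sum counts it twice.
    P = {
        (True,  True):  0.30,   # A and B
        (True,  False): 0.20,   # A, not B
        (False, True):  0.10,   # B, not A
        (False, False): 0.40,   # neither
    }

    P_A       = sum(p for (a, b), p in P.items() if a)    # 0.50
    P_B       = sum(p for (a, b), p in P.items() if b)    # 0.40
    P_A_and_B = P[(True, True)]                           # 0.30

    P_A_or_B = P_A + P_B - P_A_and_B                      # 0.60
    # Same answer by just summing the disjoint table rows -- which is all the book
    # example was doing, with the double count already gone:
    assert abs(P_A_or_B - (0.30 + 0.20 + 0.10)) < 1e-9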

The second problem was AND. The definition shows a product of values, i.e., multiplying them all. The book example showed a sum... Well, WTF?! I went around and around on that and complained to anyone who showed any semblance of interest -- where such interest died fairly quickly with no positive results. During the second shower it occurred to me that I had only seen addition in one other place in this whole mess, and that was in calculating the Total Probability of a set of variables. Since probabilities were usually specified as Conditionals -- the probability that X is true GIVEN that Y is known to be true -- this involved multiplying a buncha values (one for each variable of interest) which were "conditioned" on Y being true, then multiplying a buncha different values, conditioned on Y being false, and then SUMMING the results... Eureka! That's what the (fkrs) were doing in the book: The values they were working with came from a table where all the multiplying had already been done, so all they had to do was add them up. Jeez, maybe just another little note would have been in order? Or maybe I wasn't supposed to be looking at it so closely?
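And here's the AND/Total-Probability version of the same epiphany, again with my own made-up numbers rather than the book's table:

    # Total Probability with Conditionals (made-up numbers, not the book's).
    # AND multiplies: P(X and Y) = P(X | Y) * P(Y).
    # The summing shows up when you total over the cases of Y:
    #     P(X) = P(X | Y) * P(Y)  +  P(X | not Y) * P(not Y)

    P_Y             = 0.6
    P_X_given_Y     = 0.9
    P_X_given_not_Y = 0.2

    P_X_and_Y     = P_X_given_Y * P_Y               # 0.54 -- the multiplying
    P_X_and_not_Y = P_X_given_not_Y * (1 - P_Y)     # 0.08
    P_X           = P_X_and_Y + P_X_and_not_Y       # 0.62 -- the summing

    # If the table already holds the products (0.54 and 0.08), all that's left is
    # the addition -- which is what the book example was doing.
    print(P_X)   # 0.62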

So...The point is, my (slightly) Intelligent Behavior was the result of hot water...no, no, it was the result of having a higher-level view of the problem and seeing patterns that were not apparent in the details. This is what I'm trying to call Abstraction. Of course this "ability" is probably the result of billions-and-billions of mindless iterations in some very low-level neural processing, just like looking at a map integrates a huge amount of visual information with a huge amount of "common sense" information to come up with a route. And this is what Krakauer was trying to get at in his talks: what we really want to call Intelligence is so far above and beyond what our poor little machines are doing these days that the scales need to be re-calibrated all the way down.

Saturday, October 29, 2011

Local Color III -- only in the SouthWest...

Police responded to a stabbing call at 1:21 a.m. Wednesday in ... southwest Santa Fe.

Deputy Kurt Whyte arrived at the apartment where he says he found the 48-year-old male stabbing victim, "bleeding heavily from his head and right wrist area."

Chavez, who police say admitted stabbing the man with a kitchen knife, was arrested and charged with aggravated battery on a household member with a deadly weapon, battery upon a peace officer, assault upon a peace officer and resisting or evading a police officer. She remained jailed late Wednesday in lieu of a $5,000 surety bond... Her boyfriend, meanwhile, remained hospitalized late Wednesday but was in stable condition...

Santa Fe County Sheriff's deputies took the 60-year-old [Chavez] directly to jail after they say she repeatedly stabbed her boyfriend Wednesday after arguing during a game of Monopoly.

Police say both Chavez and her boyfriend appeared to be intoxicated.

Friday, October 28, 2011

AI Class 3, Learning...Slowly

Well. I survived last week's class and got 100% on the homework!! Part of this was due to a sudden realization that the demo code I had been so assiduously analyzing actually contained the skeleton of a system for answering three of the questions. And part was dumb luck, tempered with reason of course. The realization part happened after I had worked out the problems on my own, but I used the software to validate my answers -- which were, amazingly but truly, correct.

The original estimate for time to be spent on the class was a glib 1-10 hours a week -- it's not clear if that included watching all the video lessons, which run at least 2 hours a shot -- and maybe some go-getter StanfooFrosh with all their god-given brain cells still intact could do it. Me? I'd say 50 hours last week trying to intuit the inner workings of Probability...

This week -- Machine Learning -- I got off easy. After only three days I believe I'm done. Or else I've missed something really important. Those days include time spent shuttling around finding working internet connections -- because my usually-fairly-almost-reliable LCWireless coop took a big poop right after the videos were posted on Tuesday -- and summarizing the lessons for an online study group which meets Thursday evenings. In the course of writing the summary I discovered that I had developed a simplified method for working the hard parts of the homework (which I _might_ reveal after entries are closed next week). In keeping with standard practice, only the first of the two lessons had any relevance to the homework. So now I'm living with the sneaking fear that the exams will cover the missing lessons.

As has been pointed out a number of times: Why do I care about my grade? I dunno. Knee jerk reaction to jerks I guess.


Moving on to the philosophy portion of our time here together....One thing I've noticed about the class so far is that it makes heavy use of exactly what computers are good at: Mindless Iteration.

First we had Search, which is just opening doors and walking down hallways until you stumble upon that which you were seeking. Admittedly there are some shortcuts. And even some automated ways to discover the shortcuts. But it's really just wandering around in a big field without your glasses.
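If you want to see the door-opening in the raw, here's a minimal breadth-first sketch over a scrap of the Book's Romania road map -- the links are from memory and the fragment is incomplete, so treat it as illustrative, not as the class demo code:

    from collections import deque

    # A scrap of the Romania road map, links reconstructed from memory -- illustrative only.
    ROADS = {
        'Arad':           ['Zerind', 'Timisoara', 'Sibiu'],
        'Zerind':         ['Arad', 'Oradea'],
        'Oradea':         ['Zerind', 'Sibiu'],
        'Timisoara':      ['Arad'],
        'Sibiu':          ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
        'Fagaras':        ['Sibiu', 'Bucharest'],
        'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
        'Pitesti':        ['Rimnicu Vilcea', 'Bucharest'],
        'Bucharest':      ['Fagaras', 'Pitesti'],
    }

    def breadth_first(start, goal):
        """Open every door, one ring at a time, until the goal falls out."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for city in ROADS[path[-1]]:
                if city not in visited:          # don't re-open a door
                    visited.add(city)
                    frontier.append(path + [city])
        return None

    print(breadth_first('Arad', 'Bucharest'))
    # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- fewest hops, though not the shortest
    # drive, and it happily wandered off toward Zerind and Timisoara along the way.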

Then there was my bugaboo, Probability. This boils down to multiplying and adding big lists of small numbers. Over and over. It's something that Professor Sebastian seems to pride himself on being able to do, but god help me, that's why we have computers, isn't it? Of course one does need to be able to set up the problem and understand the necessary transformations -- and the results, which are in many cases "not obvious" -- but that's Systems Analysis.

And this week, Machine Learning. Many of the problems presented make big use of Probability so it goes without saying that there's a lot of repeated number crunching. Moving on to Regression and Clustering, to para-quote: "Often there are no closed-form solutions so you have to use iteration." All manner of try-try-again-until-you-succeed perseverationist algorithms are put to use. Gradient Descent is just bumbling-around-in-a-field search with a proviso that one always bumbles downhill. And we haven't even addressed getting stuck in local minima yet.
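For the record, here's what the downhill bumbling looks like in a dozen lines -- a minimal sketch on a one-dimensional bowl of my own choosing, not anything lifted from the lectures:

    # Gradient Descent on f(w) = (w - 3)^2, whose minimum is obviously at w = 3.
    # Bumble downhill: step against the slope, over and over, until the steps stop
    # mattering. (A bowl this simple has no local minima to get stuck in.)

    def f(w):
        return (w - 3.0) ** 2

    def slope(w):                     # derivative of f
        return 2.0 * (w - 3.0)

    w = -10.0                         # start somewhere out in the field
    learning_rate = 0.1
    for step in range(100):           # mindless iteration, as advertised
        w = w - learning_rate * slope(w)

    print(w, f(w))                    # w is ~3.0 after enough bumbling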

So my question: Is this Intelligent behavior? In one respect, once a computer finds a way to do something we used to pride ourselves on, we always diss it by saying, "Well, that's not _really_ intelligent after all now is it?" But in another respect I think number-crunching may be going about it the wrong way. In the map problem used to introduce different types of searching, the question was how to get from Arad to Bucharest -- which is probably easier if you are in Romania. A human would look at the map, squint their eyes for a couple seconds, and then go, "Yah shure, we gotta go through Rimnicu." The computer, however, tries all the possibilities...in the "less intelligent" versions it even goes the wrong direction, just to, you know, see...and then finally pretends that it has discovered a route.

What the computer does is wander around in the field until it trips on the solution, but what the human does is some kind of integration and abstraction of the data. I think this ability to Abstract is at the core of intelligence. We may get to some bits of that in this class but it's gonna be some rough iterations.

Friday, October 21, 2011

AI Class 2, Probably

(I realized my numbering scheme for these posts is bad, so I'm going to use AI Class N, where N is the week.)

Anyway... Week 2 and counting with 10% of the class under my belt.

I missed one question on the first homework -- after receiving actual clarifications-from-on-high vis-a-vis the ambiguities, some of which were significantly different from the assumptions I was making. What I missed is the idea that, for an environment to be "Completely Observable," the agent must use no memory of previous states. Which, IMHO, is a little strange, since it was pointed out that chess and checkers need one bit of memory to determine whose turn it is next... But there's no point in arguing with two-dimensional-video-professors, so I guess I will Just Let It Go.

Now well into Week 2, Probability. Our two new video Lessons were posted a day late due to crashes in the homework system from the previous week, AND after three mostly-full-time days I'm still only half finished. So I am calculating the priority of my posterior, i.e., it is kicking my asp. I suspect that my chance of completing the class is 1/N as the weeks progress. Fortunately it seems that the homework only requires the absorption of the first of the two Lessons, and that only one of the six questions requires the full-press calculations that the prof -- Sebastian Thrun -- so gleefully dragged us all through. Repeatedly.

As per established process the first Lesson was a (long) set of short videos each explaining a concept, followed by a quick quiz question. And in keeping with tradition, most of the quizzes were used to introduce the next section rather than reviewing and using what was covered in the current one. Sometimes a quiz introduced new notation and concepts with no explanation. I finally realized that entering random answers would move things along so I could get to the point. Occasionally a question could be answered by grinding through the material at hand, so that does make the occasional correct answer a thrilling Thank God! moment.

We now return to our original programming: Unit 4 -- Probabilistic Inference. Oy.

Sunday, October 16, 2011

Local Color II

From the Eldorado Police Blotter....

"Victim reported that while on vacation an unknown person has twice made entry into the residence by unknown means and stole $5000.00 in cash and placed erotic items on a bed. No damage was reported to the residence and nothing else appeared to be disturbed."

Wednesday, October 12, 2011

AI Class III, Homework I

Well, we finally got it, and it's not so scary, just a little vague... They posted a set of seven short videos, each posing a question with a multiple-choice or small numeric answer. Unfortunately there are some ambiguities about the constraints and exact definitions in some of the problems. There are a couple of useful discussion threads on the Reddit AICLASS site which are wrangling about the specifics.

I have distilled and posted the Week 1 Homework questions and options, along with comments about the ambiguities encountered (mine and others from the reddit threads). Stay tuned for answers next week...

To get to the bottom of it we're really just gonna have to wait until we get clarification from on high. We hope we do, anyway. As someone in those threads pointed out, this is the kind of stuff one would ask the TA or Prof if one were having a two-way, classroom-like experience.

<IMHO mode="I could be wrong about this">
One thing that has come up in general is how to deal with the basic Environment class definitions:

  • Fully vs Partially Observable
  • Deterministic vs Stochastic
  • Discrete vs Continuous
  • Benign vs Adversarial
All those definitions tend to be fairly good black-and-white approximations, but they have little gray areas. Folks seem to be getting hung up in the gray.

For instance, one homework question asks if coin flips are Partially Observable, and if they are Stochastic or not. There seems to be some confusion about the scope of Observability, e.g., if you don't know the future, is the system Fully Observable? Or from a different tack, "If you don't know how your Adversary is going to respond, is it Partially Observable or even Stochastic?"

I think that being Fully Observable covers just the current system state and doesn't preclude being uncertain about future states, so in this context the question I would ask is: "Do we know the entire result after each action is performed, or is there still ambiguity in the current state of the system?"

There's also confusion about Discrete vs Continuous. The questions are more philosophical than practical, such as "Can one even have a Continuous representation of a system?" or "Since the result of a coin flip is dependent on exactly how it is flipped, isn't that Continuous?" I say, lighten up a bit... If you've got something that can take any real-number value, it's Continuous. But if it can only take one of a limited set of distinct values, it's Discrete. So the result of a coin flip is ??? -- maybe I'll answer next week, eh?

And there was a funny misapprehension on the Unit 1 quiz that asked if a robot car driving in a "real" environment was Adversarial. The given answer was No -- admittedly with a little joshing around. I think this is because the instructors live in Palo Alto and only have to deal with Volvo-Soccer-Moms' passive-aggressive driving, rather than in New Mexico, where every drunk wants to be in your lane.
</IMHO>

Tuesday, October 11, 2011

AI Class II

Ok then. They got the Search lessons up. And are promising to post a homework assignment by about 4 hours ago... Also the quiz-post-refusal thing seems to have been a server loading problem and I didn't have any trouble posting answers today. So still a bit behind the curve here, but moving in the right direction.

There are some slips, probably mostly on my part. Like the quiz question about whether a Depth First Search is guaranteed to find a goal and be complete. I forgot to remember that the lecturer mentioned that we were dealing with an infinite depth search tree for this particular incident. So, more minus-quiz points for me. Gotta hang onto every word apparently.

<Edit mode="stew">
Overnight I realized that there were two (by my count) lapses in pedagogical technique in the first week's videos -- three if you count not defining Rationality but then including it in the summary slide for Unit 1.

First is the Depth First Search question above. I replayed the lesson -- unit 2.20 -- and he does say "...lets move .... to infinite trees..." about 20 seconds before the end of the quiz question's presentation. So I should have remembered it. But if he had repeated the infinite-tree condition at the end of the question, I might have caught on to what he was getting at.

The second was in unit 2.31. He describes the simple "Vacuum World" environment and calculates the number of states in a two-position system by writing 2 x 2 x 2 = 8. This is the correct number but not the right calculation, and -- my excuse for failing the quiz at the end of the next unit -- when the system is scaled up with more positions one needs the right calculation, which is 2 x 2^2: two possible agent positions times 2^2 possible dirt configurations. (For base 2 the exponential form happens to collapse into the same string of multiplications, which is exactly what hid the difference.) The point is that each of the N positions can be in one of 2 conditions -- clean or dirty -- so the number of dirt configurations grows as 2^N, not 2*N. I merrily went along with the multiplication paradigm when it came to scaling up to 10 positions and multiplied 2 by 10 instead of raising 2 to the 10th power. Again I might have caught on, and had a better understanding of the issue, if it had been treated more rigorously in the introductory case.
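Just to nail it down for myself, here's a minimal sketch of the counting as I understand it (the total-state formula that folds in the agent's location is my own reading of the setup, not the official quiz answer):

    # Vacuum World state counting -- my reading of the setup, not the official answer.
    # Each of N positions is either clean or dirty: 2**N dirt configurations.
    # The agent can be standing at any of the N positions: multiply by N for full states.

    def dirt_configurations(n_positions):
        return 2 ** n_positions              # exponential, NOT 2 * n_positions

    def total_states(n_positions):
        return n_positions * dirt_configurations(n_positions)

    print(total_states(2))           # 2 * 2**2 = 8 -- matches the 2 x 2 x 2 in the video
    print(dirt_configurations(10))   # 2**10 = 1024, not 2 * 10 = 20
    print(total_states(10))          # 10 * 2**10 = 10240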
</Edit>

In a different example they present the idea that you can use an estimated-cost-to-goal value to guide a search in fruitful directions. This is called a "Heuristic". However, they never defined the word; they just started using it in the middle of describing some algorithms. Lucky me, I'd already read the Book, so I knew. Just like the "Rationality" thing in Unit 1...
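Since they weren't going to define it, here's my own toy illustration of the idea: a greedy best-first walk over a little scrap of the Romania map, steered by rough as-the-crow-flies guesses at the distance to Bucharest. The road links are from memory and the distance numbers are invented for the example -- the real ones live in a table in the Book.

    import heapq

    # A scrap of the Romania road map, links reconstructed from memory -- illustrative only.
    ROADS = {
        'Arad':           ['Zerind', 'Timisoara', 'Sibiu'],
        'Zerind':         ['Arad', 'Oradea'],
        'Oradea':         ['Zerind', 'Sibiu'],
        'Timisoara':      ['Arad'],
        'Sibiu':          ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
        'Fagaras':        ['Sibiu', 'Bucharest'],
        'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
        'Pitesti':        ['Rimnicu Vilcea', 'Bucharest'],
        'Bucharest':      ['Fagaras', 'Pitesti'],
    }

    # The Heuristic: a rough guess at the straight-line distance to Bucharest.
    # These numbers are my own ballpark inventions, purely for illustration.
    GUESS = {'Arad': 370, 'Zerind': 420, 'Oradea': 430, 'Timisoara': 400,
             'Sibiu': 250, 'Fagaras': 180, 'Rimnicu Vilcea': 200,
             'Pitesti': 100, 'Bucharest': 0}

    def greedy_best_first(start, goal):
        """Always open the door that *looks* closest to the goal."""
        frontier = [(GUESS[start], [start])]
        visited = {start}
        while frontier:
            _, path = heapq.heappop(frontier)
            if path[-1] == goal:
                return path
            for city in ROADS[path[-1]]:
                if city not in visited:
                    visited.add(city)
                    heapq.heappush(frontier, (GUESS[city], path + [city]))
        return None

    print(greedy_best_first('Arad', 'Bucharest'))
    # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- it puts Zerind and Timisoara on
    # the list but never bothers opening them, because the guesses point the other way.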

Have an online study group meeting tomorrow (Weds) night in which we are supposed to discuss homework confusions (among others). So I hope we get the homework in time...

Monday, October 10, 2011

Artificial Intelligence, Class I

This is gonna maybe be painful...

The Stanford AI class started today with the posting of a few introductory video instruction units. Most of these "Unit 1" vids were a camera pointed at a writing pad, making a few notes with a voice-over, and each concluded with a little "quiz" implemented as a JavaScript overlay on the video. On the plus side the videos are edited, so there's not a lot of hemming and hawing (compared to the Khan Academy math lectures, which are information-packed but drag along as the presenter erases and re-writes his mistakes). On the minus side:
  1. The first set of quizzes was set up so as to lead into the next lesson and had nothing to do with what was covered in the actual video;
  2. The quiz answer system balked at about 2/3 to 3/4 of my responses and just refused to post them;
  3. The final set of videos and quizzes was concerned with an attempt to translate a Chinese Menu. If one already knew the ideograms one could tell them apart in the low-rez video, but as an added insult the little quiz boxes obscured parts of the elements one was supposed to recognize and check off.


So I'm batting 62% on Chinese translation. Fortunately the inline quizzes don't count toward your grade. I just hope the real questions are not so well obscured.

Of a little more concern to me are:
  • First -- The videos ended with these Introduction to AI bits that were mostly information-free, even though the schedule for the online class says day one also covers "Search" and has a homework assignment. In contrast, the schedule for the real class has three days of lectures and a programming assignment(!?)
  • Second -- The last Summary video listed the things that were covered. The last item on the list was "Rationality," which was not mentioned in any of the lessons -- at least none that I remember seeing. It is a key concept in their approach and is covered in some depth in Chapter 2 of the textbook, where there are some probing exercise questions based on the definition.
So there are some slips between cup and lip in getting this thing off the ground... We'll stay tuna-ed.

Monday, October 3, 2011

Artificial Intelligence -- back to

Now that I seem to be in recovery from yesterday's post -- and past the 48-hour brain-damage danger zone -- let's get back to pursuing more edifying subjects.

I signed up to take the Artificial Intelligence class being offered online (as an experiment in monetizing the extended educational experience) by Stanford University. Anyone can join -- at least until it starts next week -- and over 130 THOUSAND folks have. So it oughta be interesting.

To try to get a flavor of what I've gotten myself into, I started reading the book and looking through the demo code. I'm posting my notes for all to wonder at. The book is pretty well written, but the questions at the end of each chapter seem to be from the Advanced, not the Introductory, class, as they refer to topics that are only barely mentioned in the text. Given that I dropped out of more CS classes than I completed, 35 years ago, I'm having trouble grokking the level of "proof" and "show that" being requested. Hopefully the video lectures and actual homework assignments will be a bit more illuminating.

Looking at the code takes me right back to the days of trying to understand the work of my professional peers with advanced degrees. I posit that the sets ComputerScientist and SoftwareEngineer are Almost Disjoint. Therefore what looks like a really swell algorithm in a text book may need a bit of patching for the real world. I try to address some of these "issues" in my notes on specific code blocks. My natural tendency is to re-write everything I come across -- Hi Brian -- so I have to be careful. And bite my tongue.

In any case I can always drop down a notch to the Basic level which has no feedback requirements and just go along for the ride. But for now I hope to keep posting what may be useful information...don't touch that dial.

Sunday, October 2, 2011

Natural Stupidity (not Artificial Intelligence)

Did a Faceplant from a Firetruck yesterday. Fortunately it was during a training so everything could grind to a halt while they prepped me for the required ambulance ride to the ER...





[Photos: my own work; after they were finished; later that day; 24 hours in]


We were pulling the supply hose from the roof bed of the tanker when it got hung up someplace. So I climbed up a couple steps on the back of the truck to see what was what. I pulled. Nada. I pulled real-way-more harder and the hose gave way suddenly, causing me to lose my grip -- as if I hadn't already -- and fly off the steps. I performed a perfect three-point landing in the gravel on my hands and face. Did the Klingon modifications to my face as shown, stained my neck, and compressed both wrists, but only bruised one palm.

I was not wearing my helmet, which might have protected my face at the expense of breaking my neck, so I guess inattention to detail is sometimes useful.