Wednesday, February 27, 2013

Local Color XIII -- GPS edition


This poor guy tried to make the turn at Canyon and East Palace:

Feb 27, 2013 New Mexican Article

which is not that easy even with my Tundra pickup:

Thanks Google!
He believed what his GPS told him about being able to drive down Canyon Road to get back to civilization. Aside from his not really grokking the angles involved, the GPS also thought that Canyon was one-way in the other direction. Had he actually made this turn, I think the truck would have become a permanent installation on America's Gallery Row. I award one point to technology in the mind-share game.


Tuesday, February 26, 2013

Lazy Red Foxes

If you've ever tested a mechanical typewriter, you know this sentence, which contains every letter of the English alphabet:

The quick red fox jumps over the lazy brown dog.

Although the distribution of letters differs somewhat from the language at large, they do not appear with equal probability either. Thus the information entropy of the letters is less than the maximum one would expect, which suggests that the sentence may not be a random agglomeration.
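
A quick back-of-the-envelope check (a Python sketch; the sentence string is the only input) shows the letter entropy falling short of the uniform maximum:

    # Shannon entropy of the letter frequencies in the pangram,
    # compared with the log2(26) maximum for equiprobable letters.
    from collections import Counter
    from math import log2

    sentence = "the quick red fox jumps over the lazy brown dog"
    letters = [c for c in sentence if c.isalpha()]

    counts = Counter(letters)
    n = len(letters)
    # H = -sum(p * log2(p)) over the observed letter frequencies
    entropy = -sum((k / n) * log2(k / n) for k in counts.values())

    print(f"observed entropy: {entropy:.2f} bits/letter")
    print(f"uniform maximum:  {log2(26):.2f} bits/letter")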

Looking a little deeper, we can see that there is a certain amount of mutual information in letter sequences, e.g., 'h' is always followed by 'e' in this tiny sample.
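
The same sort of counting makes that concrete (again a Python sketch, reusing the sentence from above):

    # What follows each 'h' in this tiny corpus? Always 'e', so knowing
    # the current letter is 'h' tells you the next one for free.
    sentence = "the quick red fox jumps over the lazy brown dog"
    followers = [sentence[i + 1] for i, c in enumerate(sentence) if c == "h"]
    print(followers)  # ['e', 'e', 'e']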

It also parses into convenient words when broken at the spaces, and these words are all found in the dictionary. Even more surprisingly, the word order matches the language's Syntax perfectly:

Noun-phrase Verb-phrase Object-phrase
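
A toy version of that check (a Python sketch; the hand-built lexicon and the phrase pattern are my assumptions, not a real parser):

    import re

    # Crude part-of-speech lexicon, hand-built for this one sentence.
    LEXICON = {
        "the": "Det", "quick": "Adj", "red": "Adj", "brown": "Adj",
        "lazy": "Adj", "fox": "Noun", "dog": "Noun",
        "jumps": "Verb", "over": "Prep",
    }

    def pos_string(sentence: str) -> str:
        return " ".join(LEXICON[w] for w in sentence.lower().split())

    # Noun-phrase, then the verb, then the prepositional object-phrase.
    PATTERN = r"Det( Adj)* Noun Verb Prep Det( Adj)* Noun"

    s = "The quick red fox jumps over the lazy brown dog"
    print(bool(re.fullmatch(PATTERN, pos_string(s))))  # True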

Maybe it means something? Hmm, let's just see... Each phrase seems to make sense. Based on an exhaustive search of the corpus of written knowledge, adjectives modify nouns in an appropriate manner and the verb phrase stands up to the same scrutiny. Everything is Semantically copacetic and thus we have a candidate for a meaningful utterance.

Of course in amongst all the rule fitting -- we know it when we see it -- the sentence actually does mean something. It communicates the description of an event that we can easily picture occurring.

Now let's just mess things up a bit. There are 10! = 3,628,800 possible sequences of these ten words. Actually, since "the" appears twice, only half of those orderings are distinct: 10!/2! = 1,814,400. We can reject most of these sequences, since only a few are syntactically and semantically proper.
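
If you don't trust the factorial arithmetic, the general multiset-permutation count is easy to check (a Python sketch; the word list is the only input):

    # Distinct orderings of the words: n! divided by k! for each
    # word that repeats k times ("the" appears twice here).
    from collections import Counter
    from math import factorial

    words = "the quick red fox jumps over the lazy brown dog".split()
    denom = 1
    for k in Counter(words).values():
        denom *= factorial(k)
    print(factorial(len(words)) // denom)  # 1814400

From the reduced set of candidates for meaningfulness, consider: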

The quick brown fox jumps over the lazy red dog.

Still makes good sense. Different colored canines are well within the scope of meaningful utterance. However, how about:

The lazy red dog jumps over the quick brown fox.

This makes semantic sense but lacks plausibility. Because we seldom experience a lazy thing getting one over on a quick one, it is hermeneutically surprising. (I would use semiotically here but it is over-over-loaded with other meanings and I've always liked the sound of hermeneutic. I'm also taking the surprise factor from explanations of information entropy that we started with -- low probability and/or completely random occurrences are more surprising to behold because we expect them less.)

Therefore I propose that Hermeneutic Surprise (HS) be added to the set of Information Measures. It is probably one of those things that peaks in the middle of its range. Low HS is meaningful but of little interest: "Apples are red." And high HS may be poetic but meaningless in experience. E.g. the example from my Another Chinese Room post: "The green bunny was elected president of the atomic bomb senate."

The trouble is going to be figuring out how to measure Hermeneutic Surprise...because right now we just know it when we see it...

Thursday, February 21, 2013

More Games, in Theory

I've finally figured out what it is that annoys me about game theory. It's the -- usually unspoken -- assumptions made when determining what the rational strategy should be.

I started down this road in my AI-Class G-T post here, but I think I can put it in better terms now. Given the Prisoner's Dilemma payouts in that post, the presumption is that one should always play Defect because:
  • A. You risk doing serious time if the other player Defects and you don't;
  • B. You could get a reward if you catch the other player Cooperating.
This makes some sense in a one-shot game where you expect to never see the other player again. But if you are playing more than one round -- unless your opponent is Christ-on-the-Cross (and probably even for that first round as well) -- everyone is going to play Defect. This makes the total payout for both players worse than if they had always Cooperated.
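
A hundred rounds of simulation make the point (a Python sketch; the payoff matrix below is a standard set of values I've assumed for illustration, not necessarily the payouts from that post):

    # Iterated Prisoner's Dilemma: compare all-Defect against all-Cooperate.
    # Payoffs (assumed for illustration): years lost, so less negative is better.
    PAYOFF = {
        ("C", "C"): -1,   # both Cooperate: light sentence each
        ("C", "D"): -10,  # sucker's payoff: you Cooperate, they Defect
        ("D", "C"): 0,    # temptation: you Defect against a Cooperator
        ("D", "D"): -6,   # both Defect: heavy sentence each
    }

    def play(move_a, move_b, rounds=100):
        total_a = total_b = 0
        for _ in range(rounds):
            total_a += PAYOFF[(move_a, move_b)]
            total_b += PAYOFF[(move_b, move_a)]
        return total_a, total_b

    print("Defect/Defect:      ", play("D", "D"))  # (-600, -600)
    print("Cooperate/Cooperate:", play("C", "C"))  # (-100, -100)

Mutual Defection leaves both players far worse off than mutual Cooperation, which is the whole complaint.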

Sure. Sure. Maybe you "won" the first round and are ahead by a big six points after the hundredth round at -98 to -104. Big Whoop...Pride goeth before the Fall...

So, why is Defect-Defect assumed to be the rational strategy? It's because each player is afraid that the other player is just as greedy as they believe themselves to be. Afraid and Greedy are strong terms for risk-averse and advantage-seeking, but there they are in plain daylight. Fear and Greed doth also lead to falling.

I think one can make the same argument for other canonical games:
  • Chicken: Really just P-D with worse outcomes;
  • Stag-Hare: The Hare player is afraid of being abandoned and selects the option which guarantees some self-advantage.
In all cases Cooperation leads to a better outcome for both players over time. In fact Christ-on-the-Cross might really be the best option all around.

So, why do we not Cooperate? My claim is that Fear and Greed are natural responses to evolving in an adverse environment with limited resources. Even single-celled organisms recoil from harmful substances and pursue the useful ones. Scale this up and over-amp it with competition and you get Defection as the rational response. If we had developed in a benign and plentiful environment we might have little need for risk-aversion and advantage-seeking. Perhaps then we would believe that the rational strategy is one which best benefits all the players.

I'm going to carry this even further and posit that all animal life on earth has developed four natural, one might even say knee-jerk, responses in order to survive:
  1. Fear -- Risk aversion;
  2. Greed -- Advantage maximization;
  3. Disgust -- Recoil, e.g., from excrement or dead bodies (probably better represented by its opposite, Desire, but I like to keep things negative whenever possible);
  4. Anger -- Blanking out fear and disgust in order to persevere.
These are what we commonly call emotions. Therefore the so-called rational game strategies are actually emotionally driven.

If only we lived in a world of bunnies and unicorns, eh?

Monday, February 11, 2013

Another Chinese Room

Searle's Chinese Room thought experiment posits that one could have a program which carries on a conversation in a language unknown to the program's executor, i.e., the thing -- or person -- executing the program has no idea what it is saying, but an external participant can believe that it is having a meaningful conversation. The program passes the Turing Test but doesn't actually have a mind of its own. Proper syntax masks the absence of semantic meaning. This is similar to Chalmers's Zombie hypothesis, and they may both use assumptions that beg the actual question of when and where "minds" exist...

But here I propose a slightly different experiment which could separate the men from the machines. I posit that the real issue of meaning in the Searle experiment appears when a new utterance is made: a relationship which has never been expressed in the given language but is nevertheless congruent with (so-called) reality. We can easily make nonsense sentences: "The green bunny was elected president of the atomic bomb senate." But it's harder to generate ones that are less poetic.

The Schip Box

Just to keep it simple, let's suppose that we have three letters that take the form:
  • A = B * C
Where each letter stands for some physical quantity, e.g., A is Acceleration. We can make triplets like the following which are valid physical laws:
  • F = M * A (Force = Mass * Acceleration)
or
  • P = I * E (Power = Current * Voltage)
But we can also come up with things like:
  • M = P * I (Mass = Power * Current)
Which is apparently meaningless, or at least incorrect.

Then we build a box which takes each of these triplets and rings a bell if it is a valid relationship and buzzes otherwise. In order to distinguish the two, the box could do an exhaustive search of all knowledge (which I think is the way Google now recognizes pictures of cats). It could get fancier by doing a dimensional analysis of the terms to see if they make any sense beforehand.
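
The dimensional-analysis half of the box is easy to sketch (Python; the unit table is mine and only covers the quantities above):

    # Dimensional analysis over the SI base units (kg, m, s, A):
    # A = B * C is dimensionally valid iff dim(A) == dim(B) + dim(C),
    # where dimensions are vectors of unit exponents.
    DIM = {                   # (kg, m, s, A)
        "F": (1, 1, -2, 0),   # force: kg*m/s^2
        "M": (1, 0, 0, 0),    # mass: kg
        "A": (0, 1, -2, 0),   # acceleration: m/s^2
        "P": (1, 2, -3, 0),   # power: kg*m^2/s^3
        "I": (0, 0, 0, 1),    # current: A
        "E": (1, 2, -3, -1),  # voltage: kg*m^2/(s^3*A)
    }

    def ring_or_buzz(a, b, c):
        product = tuple(x + y for x, y in zip(DIM[b], DIM[c]))
        return "ring" if DIM[a] == product else "buzz"

    print("F = M * A:", ring_or_buzz("F", "M", "A"))  # ring
    print("P = I * E:", ring_or_buzz("P", "I", "E"))  # ring
    print("M = P * I:", ring_or_buzz("M", "P", "I"))  # buzz

Note that this only rules triplets out: anything dimensionally consistent still rings, whether or not the world actually works that way, so the exhaustive search of the knowledge base is still doing the heavy lifting.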

Then the question is: How would this box recognize a completely new valid representation that is not found in the knowledge base? This would require understanding what the symbols actually mean in the world, and how they relate, as well as developing experiments to validate them.

Isn't this the crux of the syntactic/semantic mind-matter?