Monday, February 11, 2013

Another Chinese Room

Searle's Chinese Room thought experiment posits that one could have a program which carries on a conversation in a language unknown to the program's executor; that is, the thing (or person) executing the program has no idea what it is saying, yet an external participant can believe that it is having a meaningful conversation. The program passes the Turing Test but doesn't actually have a mind of its own. Proper syntax masks the absence of semantic understanding. This is similar to Chalmers's zombie hypothesis, and both may rest on assumptions that beg the actual question of when and where "minds" exist...

But here I propose a slightly different experiment which could separate the men from the machines. I posit that the real issue of meaning in Searle's experiment appears when a new utterance is made: a relationship which has never been expressed in the given language but is nevertheless congruent with (so-called) reality. We can easily make nonsense sentences, such as "The green bunny was elected president of the atomic bomb senate," but it is harder to generate novel sentences that are true rather than merely poetic.

The Schip Box

Just to keep it simple, let's suppose that we have triplets of three letters that take the form:
  • A = B * C
Where each letter stands for some physical quantity, e.g., A is Acceleration. We can make triplets like the following, which are valid physical laws:
  • F = M * A (Force = Mass * Acceleration)
or
  • P = I * E (Power = Current * Voltage)
But we can also come up with things like:
  • M = P * I (Mass = Power * Current)
Which is apparently meaningless, or at least incorrect.
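
For concreteness, here is how those triplets might be written down as data. This is just a hypothetical sketch in Python; the symbols are the ones above, and nothing about the representation is essential.

```python
# Each triplet (A, B, C) asserts the relationship A = B * C,
# where each letter names a physical quantity.
valid_triplets = [
    ("F", "M", "A"),  # Force = Mass * Acceleration
    ("P", "I", "E"),  # Power = Current * Voltage
]

bogus_triplet = ("M", "P", "I")  # Mass = Power * Current (apparently meaningless)
```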

Then we build a box which takes each of these triplets and rings a bell if it is a valid relationship and buzzes otherwise. In order to distinguish the two, the box could do an exhaustive search of all knowledge (which I think is the way Google now recognizes pictures of cats). It could get fancier by first doing a dimensional analysis of the terms to see if they make any sense.
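
Here is a minimal sketch of such a box, again in Python. The knowledge base, the symbol table, and the dimension exponents are all made up for illustration, not meant as a definitive design.

```python
# A toy Schip Box: ring the bell if a triplet (A, B, C) asserting A = B * C
# looks valid, buzz otherwise. Dimensions are SI base-unit exponents
# (kg, m, s, A); all names and values here are illustrative.
KNOWN_LAWS = {("F", "M", "A"), ("P", "I", "E")}

DIMENSIONS = {
    "F": (1, 1, -2, 0),   # force:   kg*m/s^2
    "M": (1, 0, 0, 0),    # mass:    kg
    "A": (0, 1, -2, 0),   # accel:   m/s^2
    "P": (1, 2, -3, 0),   # power:   kg*m^2/s^3
    "I": (0, 0, 0, 1),    # current: A
    "E": (1, 2, -3, -1),  # voltage: kg*m^2/(s^3*A)
}

def dimensions_match(a, b, c):
    """Check that A = B * C balances dimensionally (multiplying quantities = adding exponents)."""
    product = tuple(x + y for x, y in zip(DIMENSIONS[b], DIMENSIONS[c]))
    return DIMENSIONS[a] == product

def schip_box(triplet):
    """Return 'bell' for an apparently valid triplet, 'buzz' otherwise."""
    a, b, c = triplet
    if triplet in KNOWN_LAWS:       # exhaustive search of the knowledge base
        return "bell"
    if dimensions_match(a, b, c):   # the fancier dimensional sanity check
        return "bell"
    return "buzz"

print(schip_box(("F", "M", "A")))  # bell
print(schip_box(("M", "P", "I")))  # buzz
```

Note that the dimensional check will happily ring the bell for any dimensionally consistent triplet, including novel ones the knowledge base has never seen, which is exactly where the interesting question begins.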

Then the question is: How would this box recognize a completely new valid representation that is not found in the knowledge base? This would require understanding what the symbols actually mean in the world, and how they relate, as well as developing experiments to validate them.

Isn't this the crux of the syntactic/semantic mind-matter?

