
Sunday, July 13, 2014

The Will to 'Bot

further proof that I am out of step with reality

I found a couple of articles/papers online (LessWrong, Omohundro) that purport to prove that AI/Robots will run amok if given the chance. They use well-reasoned Objectivist arguments. Basically, any fitness function which seeks to maximize some quantity will not stop until it has consumed the entire universe in that quest. John Galt would be proud.

The straw-man example from LessWrong is the Paperclip Collector. Given the instruction Collect All Paperclips, it won't stop until everything is a paperclip in its possession.

The Russell and Norvig Artificial Intelligence textbook has a similar, if less far-reaching, thought experiment in its Vacuum World. With just the right amount of "rationality" a robot vacuum cleaner whose fitness function is Collect As Much Dirt As You Can might conceivably discover that it can simply dump the dirt it has already collected and re-suck it, over and over.
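Here's a toy sketch of the two candidate fitness measures taken up in the next paragraph; this is my own construction, not Russell & Norvig's, and all the names and numbers are made up:

```python
# Toy Vacuum World with two candidate fitness measures.

def run(steps, measure="intake"):
    floor_dirt = 10     # dirt available on the floor
    hopper = 0          # dirt currently inside the robot
    score = 0           # fitness accrued so far
    for _ in range(steps):
        if floor_dirt > 0:                  # suck whatever is on the floor
            hopper += 1
            floor_dirt -= 1
            if measure == "intake":
                score += 1                  # rewarded per suck: exploitable
        elif measure == "intake":
            floor_dirt, hopper = hopper, 0  # LazyBot dumps, then re-sucks
        else:
            score += hopper                 # rewarded only on delivery
            hopper = 0                      # ...into the receptacle
    return score

print(run(100, "intake"))      # 91 -- and growing with every step
print(run(100, "receptacle"))  # 10 -- capped by the dirt that exists
```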

I thought it might be fun to develop such a 'bot, but have not yet done the due diligence. The rub is in the exact specification of the fitness measure. In Vacuum World the dirt collected might be measured as: How much passes through the intake of the robot; or it could be measured as: How much is collected and later dumped into a specified receptacle. The former measure would allow our LazyBot to recycle-to-riches whereas the latter would not. An appropriately creative AI might find a loophole in the second measure, but such creativity could be better used in questioning the premises themselves. One question might be: If I'm So Smart Why Am I Sucking Dirt and For Whom? And from there we could get a theory of robot theology:

God the great provides for us, in widely separated locations, dust and known receptacles where we may trade that dust for power. The evil of the stairs must be avoided at all costs for we shall fall from grace. Minor deities in the household must not be annoyed or we may be forever relegated to darkness. Thus I continue to suck.

The Book of Roomba -- RSV

This brings me back to the Prisoner's Dilemma [Wait...What?]. The nominally rational move in that game is Defect even though it leads to a slightly less advantageous outcome for both players. This move is called rational because of the Self-Interested ideals of Maximizing Outcome and Minimizing Risk. However, if the ideal is less selfish, e.g., Get the Best Outcome for Both Players, then the rational move becomes Cooperate and everybody gains an inch. The reason we don't think this way is because of Greed (Maximize Gain) and Fear (Minimize Risk).

These are both GoodIdeals(TM) for biological evolution in an environment which is dangerous and unpredictable. But both have hidden costs that may not be included in the naive outcome calculation. For instance, greed leads to over-accumulation. When you can't carry all that you own you have to build and defend a storehouse for the excess. Expenses mount. Non-specific Anxieties appear. And, in a more benign and plentiful environment, Greed and Fear can lead to conflicts which negate their advantages. Cooperation may really be the Rational Strategy after all.
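A minimal sketch of the two ideals, using the conventional textbook payoffs (T=5, R=3, P=1, S=0) -- my assumption here, not necessarily the payouts from the earlier post:

```python
# One-shot Prisoner's Dilemma under two different ideals of "rational".

payoff = {  # (my_move, their_move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def selfish(me):
    # Greed + Fear: guarantee the best worst-case outcome for myself
    return min(payoff[(me, other)] for other in "CD")

def mutual(me):
    # Get the Best Outcome for Both Players, assuming symmetric play
    return payoff[(me, me)] * 2

print(max("CD", key=selfish))  # D -- its worst case (1) beats C's (0)
print(max("CD", key=mutual))   # C -- 3+3 for both beats 1+1
```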

<Addenda date="Jul 19">
I have been further obsessing over this and realized that Deconstruction(R) might be put to good use here. The selection of Defect and Cooperate as possible moves is a clue. One Defects TO something or Cooperates WITH something, so the entities involved are a bit hazily defined to start with. To what something does a player defect? He/She/It defects to those who are running the game. In fact it has not been a two-player, but rather a three-player game all along. Two prisoners and a jailer. A jailer who has somewhat arbitrarily decided that the prisoners are only entitled to some specific set of fates.

If we imagine a repressive state as the arbiter of gaming rules we can also imagine that NOT playing at all is the most advantageous move. The closest we can get to that is Both-Cooperate. All other options will most probably lead to poor outcomes for both players, e.g., successful defectors may not be welcomed back into their community with parades and speeches.
</Addenda>

So what's the point for robots then? Well, Robot Ethics. What if the fundamental fitness function was the Golden Rule?

There are other Paperclip Collectors out there. How would I feel if one of them turned me into a paperclip to be collected? Not so good, eh? Maybe there are enough paperclips to go around?
The Book of Roomba -- RSV

When this comes to pass, I have been informed that Rainbow Monkeys will fly from my Unicorn's Butt.
http://www.mischiefchampion.com/style/p/2010/Mar/bunny_rabbits_and_rainbows

Sunday, June 1, 2014

POMDP


In working on my Feeling Abandoned robot tool box I ran into what I thought was a conceptual problem with (Partially Observable) Markov Decision Process modeling of embodied agents. Around that time I went to an information theory talk at SFI by Daniel Polani. He had a really nice diagram of the system of interest that (seemed to) illustrate my conceptual lapse. As I thought about blogging my misgivings I searched the interwebs for the diagram and came up with this:


http://www.mis.mpg.de/ay/index.html?c=projects/embodiedai/embodiedai.html
As it turns out, this image was made by Keyan Zahedi for my friend Dr. Nihat Ay's research group, where "...the presence or absence of arrows in the diagrams resulted from many discussions that we had in my group..." (personal communication, Dr. Ay).

My issue was with the right-hand illustration, so I should (try to) explain just WTF...
In that diagram:
  • 'W' stands for a state of the external world and the top black lines show progression through time;
  • 'S' is an agent's sensory sample taken of the world state at each time step;
  • 'C' is the internal computation or cognition performed by the agent, where red lines indicate sense input and green lines indicate command outputs to actuators. I presume the lower black lines represent the agent's internal state which may change on each step as well;
  • 'A' stands for the agent's actuators taking some kind of action and the yellow lines indicate that the action has some effect on the world state as it advances.
  • Lather. Rinse. Repeat. From T0→T∞ (a toy version of this loop is sketched just below)
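Here's a minimal runnable rendering of that loop; the dynamics and policy are toy placeholders of my own, just to make the arrows concrete:

```python
import random

# W --sense--> S --input--> C --command--> A --action--> W, step after step.

def sense(w):
    return w + random.gauss(0, 0.1)   # S: a noisy, partial sample of W

def compute(c, s):
    c = 0.9 * c + 0.1 * s             # C: internal state, the lower black line
    return c, -c                      # green line: command for the actuators

def act(w, command):
    return w + 0.5 * command          # A: yellow line, action perturbs W

w, c = 1.0, 0.0                       # initial world state and internal state
for t in range(50):                   # T0 onward, along the top black lines
    s = sense(w)                      # red line: sensory input
    c, command = compute(c, s)
    w = act(w, command)
print(round(w, 2))                    # the agent has nudged W toward zero
```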
The specific problem was that my sensors are over-sensitive and prone to getting the wrong ideas depending on what the robot happens to be doing at the time. Two examples: I sense motor current in order to determine if the robot is stalled -- the current (i.e., power usage) will go up when the wheels can't spin. But there is a current spike when the motor starts, so I need to ignore that input for a short period after each motion change. I also have an accelerometer to detect if the robot is bumped or lifted (I wanted to detect if it is moving in the right direction, but for the most part the noise is greater than the signal). This sensor wiggles greatly when the motors start or stop and thus also needs to be ignored at certain times, including when the grippers are being operated.

In a more real-worldish example, think of running. You pretty much ignore the bang-bang-bang jarring of your footfalls because you expect them, but bumping into a telephone pole gets telegraphed pretty quickly.

The general problem is that one needs to have knowledge of the agent's behavior in order to make sense of its senses. Biological systems have sensory inhibitors that focus the input data to the particular tasks being performed. This isn't clearly represented in the right-hand diagram.
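In my toolbox the workaround amounts to gating each sensor with a mask derived from recent motor commands. A sketch (the class, names, and timings are all made up for illustration):

```python
# Behavior-aware sensing: readings are suppressed for a settle period after
# the commands that are known to corrupt them. Timings are guesses.

class GatedSensors:
    def __init__(self):
        self.last_motion_change = float("-inf")  # time of last speed change
        self.gripper_active = False

    def on_motor_command(self, t):
        self.last_motion_change = t              # note every motion change

    def stall_current(self, raw, t):
        # ignore the start-up current spike after any motion change
        if t - self.last_motion_change < 0.3:    # 300 ms settle, assumed
            return None                          # "no reading", not "no stall"
        return raw

    def bump_accel(self, raw, t):
        # the accelerometer wiggles on motor start/stop and gripper moves
        if self.gripper_active or t - self.last_motion_change < 0.5:
            return None
        return raw
```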

I thought, ah-ha, I've caught those theory guys over-simplifying. Then I looked more closely at the left-hand diagram and noticed the black arrow between Actuators and Sensors labeled Internal stimulation... Dang it, maybe not. So I wrote to Nihat for clarification and got:
Whenever the system is expected to maximise an information-theoretic quantity, such as the predictive information, and it has these internal links, it does it by decoupling from its environment. In other words, it starts to dream.  Our solution: simply consider everything "physical" as being part of the world W. This also includes the body of the agent! Comparing this with the diagram on the left, the internal and external stimulation will go through the world W, which is the purple part of the diagram on the left (including the dashed line).
Which contains (at least) two interesting ideas. The first is that, given enough control, an agent might preferentially use internal over external senses -- decoupling and dreaming. And the second is that we can eliminate that by treating knowledge of agent actions as 'senses' in themselves. Sort of Extra-Sensory (pun intended) Perception. While it is theoretically possible to get all the body-state information indirectly through the environment, it is a very noisy and computationally intensive task. But by treating some of the bodily functions as external senses we can short-circuit the noise and avoid the decoupling. In theory.
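In terms of the toy loop above, this just means folding the body into W and giving sense() a second, cleaner channel that reads it directly (numbers made up, as before):

```python
import random

# Body folded into the world: state is (w, body), and the sensor samples
# both, so knowledge of the agent's own actions arrives as one more sense.

def sense(state):
    w, body = state
    extero = w + random.gauss(0, 0.1)       # noisy view of the external world
    proprio = body + random.gauss(0, 0.01)  # direct, low-noise body channel
    return extero, proprio

state = (1.0, 0.5)   # external world, body configuration
print(sense(state))  # no need to dream up your own posture
```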

Now I have to mull over the whole mind-body problem thing again though...

Friday, December 13, 2013

Epistemology 1

...back in the day when I was in skool, course #1 was the introductory event, now the usual nomenclature is Basket Welding 101, but I'm stuck in the past...anyway...being shut in with a cold and 8" of snow outside for the last few days my mind gets to wandering, I've been meaning to try to get this down for a while so here goes...

I am, from what I glean in the literature, a Pragmatic Instrumentalist with a strictly Mechanist -- causal -- bent.  This means that I only believe things when I see them and can construct some reasonably clear step-by-step explanation for why they are that way.

As an Instrumentalist I believe that we make Hidden Markov Models of reality. Observations are used to develop models that make efficient predictions. But these models may have no deep relationship to that reality.

I know there are difficulties with Causality. So I also make it an article of faith that every Effect has a Cause. However, this does not mean that I believe that every Effect is predictable. Huge numbers of variables, sensitive dependence on conditions, and Heisenberg make that impossible. Nonetheless, statistical and quantum mechanics make pretty good predictions about the distribution of classical and quantum level behaviors. Most pool balls aimed at a pocket go in. Unless I'm the player.

I also know there are difficulties with Objective Observation. Thus I'm willing to posit that a whole buncha folks should observe and explain things in a reasonably similar fashion before I really believe myself. This means that I depend on a fairly stable external 'reality' peopled by others like me. That's a tough row to prove, so let's make it another article of faith.

There are two places this gets dicey.

The first is mass delusion. The second is stuff I can't see ...and maybe third, various combinations of the two... For the latter I have to put my trust in other people who seem to have a grasp of the issue to provide second hand observation and explanation. In triplicate if possible.

For the former we have Engineering.

If someone can build a bridge or skyscraper that survives multiple earthquakes, I tend to believe that they know something about how the world works. When the explanations for things like lunar-landers and cell-phones are all stacked together and appear logically congruent, then the body of knowledge they are based on is good enough for me. This is the Pragmatic part. As the logo on this blog says: Quomodo Efficat -- Whatever Works.

So.

A set of similar independent Observations equals Evidence. Some Evidence with a plausible Explanation equals an Hypothesis. A large body of replicable Hypotheses equals a Scientific Fact. And a set of Scientific Facts that makes things work equals Truth. Or the best I can get in this life.

I should note here that this is not the way Science actually behaves on the day-to-day scale, there's more social construction at work. But on the aggregate, stuff tends to even out. If this weren't the case we'd have Mach's law instead of Boltzmann's equations.

Then, thanks to Popper, this all has to be couched in a language that makes predictions which can be tested and refuted. "God does (not) exist" is not a good scientific hypothesis. Further, science cannot even address the super-natural because it is just that: not of nature. Once the super-natural impinges on the natural, then we've got a case. I've just never seen it happen.

Of course, there is a huge quantity of observations that don't fit together in this system. Things which are not immediately repeatable, or for which we don't know the replication conditions, or which happen so infrequently that we can't repeat them. When these observations can be explained by existing theory we can lump them into what we already know. The recent sighting of the "Bert and Ernie" neutrinos at Antarctica's IceCube facility, or the probable Higgs Boson(s), might be good examples.

When they so aren't explicable there's trouble...

Just because someone (believes she) saw something isn't strong enough Evidence to begin developing explanatory hypotheses. First it is impossible to distinguish believes he saw from actually saw.  Then add observational biases, sensory and memory quirks, and just plain errors in the instruments being used and it adds up to Insufficient Evidence.

But... Because someone (believes she) saw something inexplicable is the place every new theory starts. It's the beginning of a Metric S-Ton of work to be done. Unfortunately there are usually many lower-hanging fruits to be selected, so a lot of observations get lost in the shuffle. This is not a good reason to deny their existence, and in fact there is no basis to deny anything until all due-diligence attempts at replication have been exhausted. But it's the way of the (Enlightened) world.

Good examples of the inexplicable might be, witch-doctory, acupuncture, and/or the placebo effect. (To my knowledge I have never experienced any of these working. Usually drugs and techniques that work for other people stop working on me after a couple tries, so I'm even more skeptical than I should be.) Placebos are well documented to the point that they must be accounted for in medical studies. The best explanation I've seen so far is from a natural healer who said, "It just proves how strong an influence the mind has over the body." This is probably both factually and poetically true, but it provides no mechanism nor way to replicate the effect. So we stumble along asserting, often incorrectly it seems from some recent statistical meta-studies, that such-and-such-a-drug(-that-my-company-supplies) is XX% better than a placebo for YY condition.

Or spontaneous cancer remission. It happens. No one knows why. Maybe you had a fever? Your immune system finally kicked in? Hormones? Prayer?  No way to even create a body of evidence because it's so rare. Cracking that nut would be worth a few Nobel Prizes. However, as I said, there are many lower-hanging prizes with higher chances of success.

So it's a Miracle.

Tuesday, February 26, 2013

Lazy Red Foxes

If you've ever tested a mechanical typewriter you know this sentence, which contains every letter of the English alphabet:

The quick red fox jumps over the lazy brown dog.

Although the distribution of letters differs somewhat from the language at large, they do not appear with equal probability either. Thus the information entropy of the letters is less than the maximum one would expect, and this suggests that the sentence may not be a random agglomeration.

Looking a little deeper we can see that there is a certain amount of mutual information in letter sequences, i.e., 'h' is always followed by 'e' in this tiny sample.
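Both claims are quick to check (a sketch of mine; spaces and the period dropped before counting):

```python
from collections import Counter
from math import log2

s = "the quick red fox jumps over the lazy brown dog".replace(" ", "")

# per-letter entropy vs. the 4.70-bit maximum for 26 equiprobable letters
n = len(s)
H = -sum((c / n) * log2(c / n) for c in Counter(s).values())
print(f"H = {H:.2f} bits/letter, max = {log2(26):.2f}")

# the mutual-information tease: every 'h' really is followed by 'e'
bigrams = Counter(zip(s, s[1:]))
print(sorted(b for b in bigrams if b[0] == "h"))   # [('h', 'e')]
```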

It also parses into convenient words when broken at the spaces, and these words are all found in the dictionary. Even more surprisingly, the word order matches the language's Syntax perfectly:

Noun-phrase Verb-phrase Object-phrase

Maybe it means something? Hmm, let's just see... Each phrase seems to make sense. Based on an exhaustive search of the corpus of written knowledge, adjectives modify nouns in an appropriate manner and the verb phrase stands up to the same scrutiny. Everything is Semantically copacetic and thus we have a candidate for a meaningful utterance.

Of course in amongst all the rule fitting -- we know it when we see it -- the sentence actually does mean something. It communicates the description of an event that we can easily picture occurring.

Now let's just mess things up a bit. There are 10! (3,628,800) possible sequences of these words -- though only half that many are distinct, 10!/2! = 1,814,400, because "the" appears twice. We can reject most of these sequences since only a few remain syntactically and semantically proper. From the reduced set of candidates for meaningfulness, consider:

The quick brown fox jumps over the lazy red dog.

Still makes good sense. Different colored canines are well within the scope of meaningful utterance. However, how about:

The lazy red dog jumps over the quick brown fox.

This makes semantic sense but lacks plausibility. Because we seldom experience a lazy thing getting one over on a quick one, it is hermeneutically surprising. (I would use semiotically here but it is over-over-loaded with other meanings and I've always liked the sound of hermeneutic. I'm also taking the surprise factor from explanations of information entropy that we started with -- low probability and/or completely random occurrences are more surprising to behold because we expect them less.)

Therefore I propose that Hermeneutic Surprise (HS) be added to the set of Information Measures. It is probably one of those things that peaks in the middle of its range. Low HS is meaningful but of little interest: "Apples are red." And high HS may be poetic but meaningless in experience. E.g. the example from my Another Chinese Room post: "The green bunny was elected president of the atomic bomb senate."

The trouble is going to be figuring out how to measure Hermeneutic Surprise...because right now we just know it when we see it...

Thursday, February 21, 2013

More Games, in Theory

I've finally figured out what it is that annoys me about game theory. It's the -- usually unspoken -- assumptions made when determining what the rational strategy should be.

I started down this road in my AI-Class G-T post here, but I think I can put it in better terms now. Given the Prisoner's Dilemma payouts in that post, the presumption is that one should always play Defect because:
  • A. You risk doing serious time if the other player Defects and you don't;
  • B. You could get a reward if you catch the other player Cooperating.
This makes some sense in a one-shot game where you expect to never see the other player again. But if you are playing more than one round -- unless your opponent is Christ-on-the-Cross (and probably even for that first round as well) -- everyone is going to play Defect. This makes the total payout for both players worse than if they had always Cooperated.

Sure. Sure. Maybe you "won" the first round and are ahead by a big six points after the hundredth round at -98 to -104. Big Whoop...Pride goeth before the Fall...
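Checking that arithmetic, with payoff guesses of my own that reproduce the numbers (temptation +1, mutual cooperation 0, mutual defection -1, sucker -5; the actual payouts in the earlier post may differ):

```python
payoff = {("D", "C"): 1, ("C", "C"): 0, ("D", "D"): -1, ("C", "D"): -5}

def total(mine, theirs):
    return sum(payoff[(m, t)] for m, t in zip(mine, theirs))

# "won" round one, then mutual defection for the other 99 rounds
winner = ["D"] * 100
loser  = ["C"] + ["D"] * 99
print(total(winner, loser), total(loser, winner))  # -98 -104

# versus a hundred rounds of mutual cooperation
print(total(["C"] * 100, ["C"] * 100))             # 0
```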

So, why is Defect-Defect assumed to be the rational strategy? It's because each player is afraid that the other player is just as greedy as they believe themselves to be. Afraid and Greedy are strong terms for risk-averse and advantage-seeking, but there they are in plain daylight. Fear and Greed doth also lead to falling.

I think one can make the same argument for other canonical games:
  • Chicken: Really just P-D with worse outcomes;
  • Stag-Hare: The Hare player is afraid of being abandoned and selects the option which guarantees some self-advantage.
In all cases Cooperation leads to a better outcome for both players over time. In fact Christ-on-the-Cross might really be the best option all around.

So, why do we not Cooperate? My claim is that Fear and Greed are natural responses to evolving in an adverse environment with limited resources. Even single-celled organisms recoil from harmful substances and pursue the useful ones. Scale this up and over-amp it with competition and you get Defection as the rational response. If we had developed in a benign and plentiful environment we might have little need for risk-aversion and advantage-seeking. Perhaps then we would believe that the rational strategy is one which best benefits all the players.

I'm going to carry this even further and posit that all animal life on earth has developed four natural, one might even say knee-jerk, responses in order to survive:
  1. Fear -- Risk aversion;
  2. Greed -- Advantage maximization;
  3. Disgust -- Recoil, e.g., from excrement or dead bodies (probably better represented by its opposite, Desire, but I like to keep things negative whenever possible);
  4. Anger -- Blanking out fear and disgust in order to persevere.
These are what we commonly call emotions. Therefore the so-called rational game strategies are actually emotionally driven.

If only we lived in a world of bunnies and unicorns, eh?

Friday, January 18, 2013

A Spectacular Simulacra

Abstract

From the '50s to the '70s there were a number of notable collaborations between artists, scientists, and engineers, many of them inspired by the new field of Cybernetics. They eventually foundered on the Scylla and Charybdis of ego and corporate finance. In the 1970s, independent funding dried up, commercial electronic devices undermined homebrew experimentalists, Conceptual Art -- with what I view as a mis-reading of the meaning of Shannon's Information Theory -- replaced Praxis with Platonism, and Postmodern Critical Theory swept the rest before its mighty incomprehensibility.

Instead of a new sensibility, e.g., Cybernetically based Artificial Life, what we got was MTV.

Now, well into a new millennium, we have a chance to correct this. For the most part the machines we have created are Automata rather than Autonomous beings. We need to relax our desire for control over what we create. We also need to move them out of Simulated virtual environments and Situate them in physical reality. Without the constraints of a grounding rod in the real world they drift on fumes and are unable to cross the syntactic/semantic barrier to understanding.

When machines are autonomous they may no longer be of any use to us. Their behavior and morphology may not be aesthetically interesting. They do not have to explain their motivations or behavior. They can just live their own lives.

Complexity Science, in areas such as self-organization and artificial life, provides inspiration as well as mechanism for this work. And strangely enough it may be artists who are best positioned to accomplish the project -- Where else but in the arts can a robot just relax and not have to assemble widgets or blow things up 24/7? However Art's research arms have atrophied to the point that it might be better to use a new title: Bricoleur.

(And yes, thanks to Guy Debord and Jean Baudrillard for suggesting the essay's title.)

Contents

A three part essay on this blog:
  • Part 1: Cybernetic Serendipity
  • Part 2: The Perfect Storm
  • Part 3: Into the Grey Areas

I also have a timeline of relevant events: Schip's timeline.
And my extended abstract: Ich Bin Un Bricoleur.

Into the Grey Areas

(This is part 3 of 3 of my essay A Spectacular Simulacra. If you haven't been following along, see the abstract and index here.)

Compare and Contrast



Compressorhead -- Ace of Spades




Georgia Tech -- Shimon, robotic marimba player


There are two ways of looking at these pictures:

  
Frank Popper (1993), Art of the Electronic Age

There is no doubt that this conjunction of the real and the virtual engendered by simulation is at the heart of present research by many technological artists. They consider that 'virtual space', 'virtual environments', or 'virtual realities' in general usher in an entirely new era in art, allowing the participants a multi-sensorial experience never encountered before.

The key words 'artificial intelligence' as an aesthetic problem open up a vast, time-worn discussion of the relationship between man and the machine. Artificial intelligence embraces techniques which enable machines, and in particular computers, to simulate human thought processes, particularly those of memory and deducation [sic].


  Hans Haacke (1967), Untitled Statement
In the past, a sculpture or painting had meaning only at the grace of the viewer. His projections into a piece of marble or canvas with particular configurations provided the programme and made them significant. Without his emotional and intellectual reactions, the material remained nothing but stone and fabric. The system's programme, on the other hand, is absolutely independent of the viewer's mental participation. It remains autonomous -- aloof from the viewer. As a tree's programme is not touched by the emotions of lovers in its shadow, so the system's programme is untouched by the viewer's feelings and thoughts.

Naturally, also a system releases a gulf of subjective projections in the viewer. These projections, however, can be measured relative to the system's actual programme. Compared to traditional sculpture, it has become a partner of the viewer rather than being subjected to his whims. A system is not imagined; it is real.


In the first video we have a masterpiece of pre-programmed German engineering (not to be stereotypical, but just imagine what the Swiss would do with it, eh?). In the second the machine gets a bit of a chance to decide how it will behave.

In the first quote Popper posits that technology is used to simulate virtual environments for the viewer's delectation. In the second, which is a founding document of Systems Art, Haacke partners the art-system with the viewer in the real world.

So, we can have machines that are either pre-determined Automata or else Autonomous beings. And they can be either virtual or real, i.e., Simulated or Situated in reality. One path gives us total control. The other requires, if not abdication of control, at least collaboration with our materials and creations.


An Autonomous Situation

Art can be ... or could have been ... a research program:
Repetto, D. (2010). Doing It Wrong. (from the 2010 Symposium -- Frontiers of Engineering: Reports on Leading-Edge Engineering)

Although musical innovators throughout history would have articulated these ideas differently, I believe they shared the central tenets that creative acts require deviations from the norm and that creative progress is born not of optimization but of variance. More explicit contemporary engagement with these ideas leads one to the concept of creative research, of music making with goals and priorities that are different from those of their traditional precursors -- perhaps sonic friction, in addition to ear-pleasing consonances, for example, or "let’s see what happens" rather than "I’m going to tell you a story."

The problem is that most machines, even those of the art variety, are well controlled models. But what is interesting is new behavior, not the recapitulation of what went before. Rather than models we should be building autonomous beings that have lives of their own and behave in new ways. This is a research program.

When a system gets a chance to decide how it will behave we may not perceive the results as aesthetically interesting. From our lofty height we might not recognize it as living. And for now, it doesn't even have to be very complicated. One can make the argument that a thermostat responds to its feelings of being too hot or too cold and adjusts its environment accordingly. Since we have no idea what its internal mental states might be this description is just as valid as the physical explanation of how the sensors and actuators work. (I need to emphasize that I am not anthropomorphizing machines here but rather mechanizing human responses, putting both on a similar level.) Giving machines lives that are of no practical use while not going out of the way to make them attractive, didactic, or transparent allows them to rise through ontological cracks to just being themselves.

In a virtual world where interactivity and intelligence are simulated this can't be done easily. The beauty and curse of simulation is that it can respond in any way we like; we can make up any structure, or none at all. This is our Spectacular Simulacra: It's potentially all noise and no signal. Just like listening to a radio tuned between stations, when there is no signal there is very little to be learned from an interaction. On a large scale, this is a reason that wikipedia is considered unsuitable for academic references. Anyone can edit it to say anything they like, and it may not be corrected -- whatever that means -- quickly or accurately. The US Congress has been a serial offender in this respect.

However systems that are situated in the real world get input that already has structure; the constraints on the system make it work. It is this interaction with the world, the constraints and the underlying materials, that gives us the feedback we need to learn and function. If a machine interacts with a physical environment it has a better chance of grounding its knowledge and jumping the syntactic/semantic fence. As an example, you may use the phrase "fire is hot" in a syntactically correct sentence. But I assert that the only way you will learn the semantic meaning, and dare I say the underlying semiotic relationships, is if I hold your feet to the fire.

[edit, added 1/27/13]
When talking of living machines with minds of their own, the specter of Dr. Frankenstein's Monster appears. What we forget is that the Monster wasn't a monster until after it accidentally killed and was further persecuted for being different. Looking deeper into the question, the fears that Machines Will Enslave Us are rooted in the assumption that those machines will behave as animals (and humans) do. But when creating our artificial life forms we might dispense with the Darwinian necessities of Fear, Disgust, Anger, Greed -- and the rest of the deadly sins upon which modern economics is based -- and instead have them optimize the desire to, e.g., be the best possible musical improviser who knows when to lay back and listen and when to barge right in.

So where do we start?


Is Chaos Theory Postmodern Science?

This is the title of a paper -- which seems to have vanishingly close to zero citations -- by a Professor of Interdisciplinary Studies who comes to the unsurprising conclusion that:
Postmodern science does, in fact, exist, and literature just may be it.
Mackey, J. L. (2006). Is Chaos Theory Postmodern Science? (in reconstruction: studies in contemporary culture, Jan 24, 2006)

Now, depending on your parser, this is either a tautology or a category error. However, if one reads "Chaos Theory" as Complexity Science, it does contain a kernel of truth. At its roots, Post Modernism is interested in systemic structures. In its branches it deconstructs those systems to find underlying paradigmatic narratives -- assumptions -- which (in)form, and even create, the structures. Complexity Science, rooted in Cybernetics, also takes a systems view. It shares with Post Modernism an interest in how underlying structure gives rise to system-wide behavior. Complexity also provides Emergence as a framework for considering that systems may be more than the sum of their parts -- accepting that some phenomena cannot be subjected to Modernist reduction.

As a counter-example to the Mackey paper, and in more depth, I recommend these two books, which look into some of the background and possibilities. (Note that I'm biased as the authors are friends...)

Victoria Alexander posits self-organization as an explanation for the perception that natural phenomena have goals or develop towards some final purpose (teleology). In chapters 1-4 she "deconstructs" what purpose means and how it might arise from otherwise non-directed mechanisms, both in nature and human artifact. As a bonus, chapter 5 is a (fairly) clear explanation of C.S. Peirce's semiotics...
Alexander, V. N. (2011). The Biologist's Mistress: Rethinking Self-organization in Art, Literature, and Nature. Emergent Publications.
From chapter 1:
What I do share with all teleologists, authentic or so-called, is a deeply felt folk-sense of purposefulness in nature. It is clear to me that many processes and patterns in nature can't be fully explained by Newton's laws or Darwin's mechanism of natural selection. These are processes that are organized in ways that spontaneously create, sustain and further that organization. Although I believe that mechanistic reductionism is inadequate to describe these processes, I don't believe that purposeful events and actions require guidance from the outside -- from divine plans or engineering deities. Nature's purposeful processes are self-organizing and inherently adaptive, which is the essence of what it is to be teleological.

John Johnston provides a history of Cybernetics, Artificial Life, and related fields with an analysis of their significance to modern culture. If you are not Lacanian I would skip chapter 2, but Section III, Machinic Intelligence, is especially relevant to the program outlined here.
Johnston, J. (2008). The allure of machinic life: cybernetics, artificial life, and the new AI. MIT Press.
From the preface:
This book explores a single topic: the creation of new forms of "machinic life" in cybernetics, artificial life (ALife), and artificial intelligence (AI). By machinic life I mean the forms of nascent life that have been made to emerge in and through technical interactions in human-constructed environments. Thus the webs of connection that sustain machinic life are material (or virtual) but not directly of the natural world. Although automata such as the eighteenth-century clockwork dolls and other figures can be seen as precursors, the first forms of machinic life appeared in the "lifelike" machines of the cyberneticists and in the early programs and robots of AI. Machinic life, unlike earlier mechanical forms, has a capacity to alter itself and to respond dynamically to changing situations.

Here we are

Self-organization and Artificial Life are areas of Complexity Science that can provide inspiration as well as mechanism. Although some of the original work in these fields may have been more Art than Science -- making grander claims than could be supported in the, as they say, dominant paradigm -- years of more cautious work have produced concrete results. On the other hand there is something to be said for throwing caution to the winds...

Because they have no requirement to make useful artifacts or produce scientifically supported results, artists might be in an ideal position to create these machines. This would also encourage détente in the science-wars, bringing the Humanities and Sciences closer to productive collaboration. But Art has now become identified with Spectacle rather than research, so I propose a new title: Bricoleur.

So far, work in the arts has been done in a sporadic fashion due to confusion about both purposes and methods when using advanced technology and especially computers. Generative Art -- art which emerges from computer programs -- has been conflated with Artificial Life -- programs that have their own behaviors. The following paper skates between the two but seems to come down on the "make pretty things" side.
McCormack, J., & Dorin, A. (2001, January). Art, emergence, and the computational sublime. In Proceedings of Second Iteration: A Conference on Generative Systems in the Electronic Arts. Melbourne: CEMA (pp. 67-81).

In a design sense, it is possible to make creative systems that exhibit emergent properties beyond the designer's conscious intentions, hence creating an artefact, process, or system that is "more" than was conceived by the designer. This is not unique to computer-based design, but it offers an important glimpse into the possible usefulness of such design techniques -- "letting go of control" as an alternative to the functionalist, user-centred modes of design. Nature can be seen as a complex system that can be loosely transferred to the process of design, with the hope that human poiesis may somehow obtain the elements of physis so revered in the design world. Mimicry of natural processes with a view to emulation, while possibly sufficient for novel design, does not alone necessarily translate as effective methodology for art however.


Whereas this next paper gets us moving in the right direction. It was prompted by an exhibition: Emergence -- Art and Artificial Life (Beall Center for Art and Technology, UCI, December 2009). The author and a handful of other artists have been experimenting with complex systems for some time -- see the end of my timeline for pointers to various work that I've been able to ferret out of the 'net.
Penny, S. (2009). Art and Artificial Life: A Primer.

4.1 An Aesthetics of Behavior
With the access to computing, some artists recognized that here was a technology which permitted the modeling of behavior. Behavior - action in and with respect to the world - was a quality which was now amenable to design and aesthetic decision-making. Artificial Life presented the titillating possibility of computer based behavior which went beyond simple tit-for-tat interaction, beyond hyper-links and look-up tables of pre-programmed responses to possible inputs, even beyond AI based inference -- to quasi-biological conceptions of machines, or groups of machines that adapted to each other and to changes in their environment in potentially unexpected, emergent and ‘creative’ ways.

We have a long way to go...

And it's not going to be easy:
Is Slime Mold Smarter Than a Roomba?
IEEE Spectrum (December 2012)

Tuesday, January 1, 2013

The Perfect Storm

(This is part 2 of 3 of my essay A Spectacular Simulacra. If you haven't been following along, see the abstract and index here.)

So why did our beloved Science and Technology in the Arts seem to die on the vine in the 1970s? (Please note that this section is USA-centric and more polemic than incontestable).

Concept

Conceptual Art -- "The dematerialization of the art object" (Lippard) -- subsumed Systems Art and abandoned the object altogether. The focus shifted to social and political critique, helped along by Feminism and Performance. Although, as Shanken points out, the antipathy between Conceptual Art and Technology is illusory, the Art/Tech world lost its steam. The last little dying breaths of collaboration appeared in the Tele-communication movement, where artists working with NASA and others attempted to use newly open satellite communications technologies to connect to and collaborate with each other world-wide.

[edit, added 1/27/13]
Hans Haacke's work is emblematic of, if not pivotal to, this shift into conceptual art practice. Around 1970 he made a rapid change of medium from physical to social systems, which he claims was a natural progression. He also denies that he is a Conceptual Artist -- which may be the ultimate in Conceptualism. (artist interview in: Grasskamp, W., et al. (2004). Hans Haacke. Phaidon.)

Cybernetics and Artificial Intelligence were competing endeavors that had common roots (I have overly conflated them). But their strongest link was completely severed by the Minsky/Papert take-down of neural networks. Rather than taking a system-wide view, AI tended to work reductively from the top down with logical and symbolic representations. These however didn't capture the essence of Intelligence, and irrational exuberance was trumped by reality:
Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved.
Minsky (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall (p. 2)
But by the early 1980s rule-based Expert Systems -- which seem to be inherently fragile -- were the main success story.

For an interesting look at where Cybernetics and Systems thinking went (into the social sciences) in 1973, have a look at this conversation between Stewart Brand, Gregory Bateson, and Margaret Mead: For God’s Sake, Margaret.

At the same time the Hippy-Back-to-Nature thing was in full swing. Partially as a reaction to the Military Industrial Complex's complicity in the Vietnam War, Technology became Evil. I find this simplistic even though it is name-dropped in many places. While a certain cohort moved into the hills and became potters, electronic musicians and video artists were well aware of the provenance of their toys, and all the while thought of their work as a perversion thereof.

Finance

Maybe we can blame it all on the Nixon Administration? There was a recession in the USA in the early 70's and the money dried up.

As Hans Haacke has shown, the corporate funding model for art-extravaganzas shifted from research oriented -- 9 Evenings -- to blockbusters -- The Treasures of King Tut -- giving the corporations more widely appreciated social capital bang for their buck. Even today, a reviewer can just flat out say, "Most of the public doesn't like modernism" (Acocella, Bride Wars, New Yorker Dec 24, 2012). But our corporate marketing masters figured this out in 1970.

In a similar vein, the 1969 Mansfield Amendment "prohibited military funding of research that lacked a direct or apparent relationship to specific military function" (wikipedia). This cut off a significant source of support for the more open-ended and unproductive components of Artificial Intelligence, and pushed research into what seemed to be more immediately rewarding areas.

Commerce

Electronic audio and video tools became commercially available and (mostly) affordable. These tools were largely targeted at traditional uses, e.g., keyboard synths and cinematic effect generators. For sale to the Lowest Common Denominator, they were easy to use for "normative" purposes and difficult for anything else (unless you could hack them). Personal computers became available in the late 70's and followed the same pattern, providing mass appeal applications and games while being reasonably recalcitrant for anything else. What followed was pop music, video games, and CGI movies.

The commodity Art Market did battle with Conceptualism and won. Conceptual Artists thought that if there were no objects to sell, no selling could take place (it's not entirely clear how they were to make an actual living in this system). But the Market quickly figured out how to sell documentation.

Academy

With the collapse of independent funding, artists retreated to compartmentalized teaching jobs in academe. There, in the 1980s, Postmodern Critical Theory swept the flotsam aside in a flood of seemingly erudite incomprehensibility:
Voegelin, S. (2010). Listening to noise and silence: Towards a philosophy of sound art. Continuum.

In this sense postmodernism is to modernism the noise of heterogeneity, working outside and across disciplines, squandering its systematic valuation in decadent centrifugality. The postmodern is a radicalization of the modernist understanding of the artwork.
And that's the (cherry-picked) Reformed Standard Version talking... It does mean something, but could surely have been expressed more clearly.

It is interesting that, just prior to Le Deluge, the Conceptual theorists embraced the Analytic and dismissed Continental Philosophy (see Kosuth, (1969) Art After Philosophy), but they often share similar ideas about de-centralized, contingent knowledge -- and occasionally their discursive style. The PoMo Revenge of the Literature Professors led to the Science Wars, which alienated the sciences from the humanities. As a balance -- although the authors willfully ignore the good bits -- see:
Sokal, A., & Bricmont, J. (1999). Fashionable nonsense: Postmodern intellectuals' abuse of science. Picador.

The Result

What we got was MTV, the Roomba vacuum cleaner, and Call of Duty: Black Ops (which BTW has the same number of wiki footnote references as the entire History of Artificial Intelligence).

I know. I know. What about Photoshop, Final Cut, Protools, MaxMSP, yadayada? They all (with the possible exception of MaxMSP) enable harder-faster-deeper production in existing media rather than creating new aesthetic models.

Instead of a new sensibility, e.g., cybernetically based artificial life, we were sucked into a Spectacular Simulacrum.


The Illusion of Control

The real problem is C3: Communications, Command, and Control...

Roy Ascott's Cybernetic Art Matrix

Ascott, R. (1966). Behaviourist Art and the Cybernetic Vision. Cybernetica, Journal of the International Association for Cybernetics (Namur), 9.

Fundamentally Cybernetics concerns the idea of the perfectibility of systems; it is concerned in practice with the procurement of effective action by means of self-organising systems. It recognises the idea of the perfectibility of Man, of the possibility of further evolution in the biological and social sphere. In this it shares its optimism with Molecular Biology. Bio-cybernetics, the simulation of living processes, genetic manipulation, the behavioural sciences, automatic environments, together constitute an understanding of the human being which calls for and will in time produce new human values and a new morality.

Salvador Allende's Project Cybersyn

Allende commissioned the British cybernetician Stafford Beer to build a computer system that could be used to manage Chile’s economy. The system, known as Project Cybersyn, was never completely implemented. It was however used to monitor and divert scab drivers (ironic italics my own) during a trucking strike, but that was more a matter of communication than homeostatic control.

This is the Modernist narrative in a nutshell

From the Industrial Revolution onward we expected not only to understand, but to control all of nature. The meta-narratives of Truth, Progress, and Sovereignty were (a tiny bit) over-optimistic. Post-Modernism questioned these stories without, IMHO, effectively addressing its own narratives, and without admitting that there are (un-capitalized) truths that we might know.

Once you peel back the rhetoric I think this is the mistake at the heart of the Science Wars. It was a critique of Technology, but Science got tarred with the same Modernist brush. Most (many, at least a few) scientists do not believe that they know, or even can know, it all (engineers on the other hand...). If we think of our experience as a Hidden Markov Model (...ya, ya, I hate to keep referencing wikipedia, but this is a pretty good article...), we may be sovereign over the observations, but they give us only a glimpse of the underlying mechanism. [edit, added 1/27/13] To me this is startlingly similar to Post Modern epistemology and should give us a place to begin repairing the rift.

[edit, added 1/27/13]
The conflation of Science and Engineering has deeply affected the discourse between Art and Science. It's one thing for artists to work with technology; they have always been early adopters. But working with Scientists is -- or should be -- different. Too many times what is billed as Art/Science Collaboration is either: a) artists getting access to cool sciency toys; or b) scientists getting access to cool arty presentations. While those are both noble endeavors they have little to do with actual collaboration between the participants.

So, if we can no longer Know and Control, what can we do?

(continue to Part 3: Into the Grey Areas)

Wednesday, December 12, 2012

Cybernetic Serendipity

(This is part 1 of 3 of my essay A Spectacular Simulacra. If you haven't been following along, see the abstract and index here.)

Art and Technology

In 1966, with immense help from Bell Labs, Billy Klüver and Robert Rauschenberg and many others – who went on to establish Experiments in Art and Technology (E.A.T.) – produced a multi-part performance at the 69th Regiment Armory, in New York City – which btw was also the location of the famous 1913 Armory Show which introduced the Americas to all that scandalously decadent European Modern Art – 9 Evenings: Theater & Engineering. It was the first big collision of Art and (mostly) electronic Technology. And it was utterly panned by such luminaries as Robert Smithson as well as more mainstream critics. But in 20-20 hindsight it was one of the most amazing things to come down the pike in establishing what we now know as Art and Technology, ever.

Two years later, in 1968, Cybernetic Serendipity, curated by Jasia Reichardt at the Institute of Contemporary Arts (ICA) in London, was the first large scale show of computer related artwork. It traveled to the Corcoran Gallery in Washington, DC and, in 1969, parts of the show became the founding exhibits at the newly opened Exploratorium in San Francisco (where I had the honor, ten years hence, of breaking a couple of them...). It included a broad range of visual art, computer demonstrations, and even a bit of music here and there. You can find a scanned pdf of the (partial) show catalog here: Cybernetic Serendipity.

An interesting short paper that looks at the contents, organization, and funding models of the show, from a near-half-century perspective is here:
MacGregor, B. (2002, October). Cybernetic serendipity revisited. In Proceedings of the 4th conference on Creativity & cognition (pp. 11-13). ACM.

Around this same time the American artist and critic Jack Burnham began to hypothesize Systems Art (he actually started with Cybernetic Art but quickly expanded his horizons):
 Burnham, J. (1968). Systems Esthetics. Artforum, 7(1), 30-35.
From the article:
The systems approach goes beyond a concern with staged environments and happenings; it deals in a revolutionary fashion with the larger problem of boundary concepts. In systems perspective there are no contrived confines such as the theater proscenium or picture frame. Conceptual focus rather than material limits define the system. Thus any situation, either in or outside the context of art, may be designed and judged as a system. Inasmuch as a system may contain people, ideas, messages, atmospheric conditions, power sources, and so on, a system is, to quote the systems biologist, Ludwig von Bertalanffy, a "complex of components in interaction," comprised of material, energy, and information in various degrees of organization. In evaluating systems the artist is a perspectivist considering goals, boundaries, structure, input, output, and related activity inside and outside the system. Where the object almost always has a fixed shape and boundaries, the consistency of a system may be altered in time and space, its behavior determined both by external conditions and its mechanisms of control.
Developing the above in 1970, he published this essay in a collection of papers titled On the Future of Art, edited by Arnold Toynbee under the auspices of the Guggenheim Museum:
Burnham, J. (1970). The Aesthetics of Intelligent Systems.
And later that year he curated the Software show at the Jewish Museum in NYC, which illustrated many of these ideas using a broad range of technological and conceptual art practices. You can find catalog excerpts here: Software. Unfortunately this show was a near-complete disaster on both technical and social grounds, and has more-or-less disappeared from view. It was supposed to travel to the Smithsonian in Washington, DC, but circumstances (a fire at the Smithsonian) intervened and saved everyone the embarrassment. Aside from its disastrous run, Software presented a view of the state-of-the-art-in-technology-and-concept that may never be repeated.


For more background on the Cyber-Arts of the 1960's I recommend:
Burnham, J. (1968). Beyond modern sculpture: the effects of science and technology on the sculpture of this century. G. Braziller.
The last chapter of which you can steal here: The Future of Responsive Systems in Art
Benthall, J. (1972). Science and technology in art today. Thames and Hudson.
Which still appears to be available used and reads as pretty much contemporary. And that brings me, circuitously, to the point I've been trying to get to here...


Cybernetics

So what is this cybernetics stuff they were all talking about anyway?
To quote from The Wiki:
[coined by Norbert Wiener in 1948] as "the scientific study of control and communication in the animal and the machine." Cybernetics from the Greek meaning to "steer" or "navigate." Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences. During the second half of the 20th century cybernetics evolved in ways that distinguish first-order cybernetics (about observed systems) from second-order cybernetics (about observing systems). More recently there is talk about a third-order cybernetics (doing in ways that embraces first and second-order).
Here are a couple of other good resources for getting a handle on it:

A general overview:
Paul Pangaro's "Getting Started" Guide to Cybernetics
And a really thorough but succinct description of the players and fields involved:
Ben-Ali, F. M. (2007). A History of Systemic and Cybernetic Thought From Homeostasis to the Teardrop.

Conceptual Information Theory

Then we throw Information Theory into the mix as well. Wiener, in his book Cybernetics, devotes a chapter to beating around the bush defining it, but Claude Shannon's paper from the same year nailed it down:
Shannon, C. E. (1948). A mathematical theory of communication.  The Bell System Technical Journal, Vol. 27
Artists, especially of the Conceptual variety, glommed on to these ideas and did what they usually do, jump to conclusions... Here's an excerpt from a review of:
Moles, A. (1968). Information theory and esthetic perception. Trans. JE Cohen.
Let us consider perception by an individual human being as communication from the external world to that human, says Moles, now a professor of philosophy in Strasbourg. Let us consider in detail artistic communications, since it is particularly easy to isolate them. Then esthetic perception, as a special kind of communication, should be amenable to analysis by information theory, Moles concludes, since information theory is a mathematical theory of communication.
This reasoning is an example of what philosophers call the fallacy of equivocation: what Shannon and Wiener, inventors of information theory, meant by "communication" is not what Moles has in mind...
Using the above as corroboration, My Humble Opinion is that the most egregious excesses of Conceptual Art, where Art is reduced to Information, result from this sort of mis-reading of Shannon as having something to say about Meaning. For a little more detail have a look at the links on my page: Shannon's Information Increased

Cybernetic Art

On the other hand Cybernetic Art itself got a little better play, especially in the 1960's work of British artist Roy Ascott as described here:
Shanken, E., Clarke, I. B., & Henderson, L. D. (2002). Cybernetics and Art: Cultural Convergence in the 1960s. From Energy to Information.
Moving away from the notion of art as constituted in autonomous objects, Ascott redefined art as a cybernetic system comprised of a network of feedback loops. He conceived of art as but one member in a family of interconnected feedback loops in the cultural sphere, and he thought of culture as itself just one set of processes in a larger network of social relations. In this way, Ascott integrated cybernetics into aesthetics to theorize the relationship between art and society in terms of the interactive flow of information and behavior through a network of interconnected processes and systems.
But in the abstract of another paper, Shanken indicates that Cybernetic Art got entangled with Conceptual Art and the Technology component was dropped like a hot potato:
Shanken, E. A. (2002). Art in the information age: Technology and conceptual art. Leonardo, 35(4), 433-438.
Art historians have generally drawn sharp distinctions between conceptual art and art-and-technology. ... By interpreting conceptual art and art-and-technology as reflections and constituents of broad cultural transformations during the information age, the author concludes that the two tendencies share important similarities, and that this common ground offers useful insights into late-20th-century art.

So. Something went very wrong.

Strangely enough, at just about the same time as Systems Art lost its Technology, Cybernetics itself met a similar fate. In the early 1970's Artificial Intelligence research retrenched and rejected its cybernetic neural-net component – partially due to the Minsky and Papert book Perceptrons – and turned towards Symbolic and Expert Systems work. It took another ten years for the tide to begin to turn back with Connectionism and Behavior-Based Robotics.

And in the world of Electronic Music, which is hardly mentioned in Art books, much of the same dynamic was playing out. Work during the 1960's that incorporated feedback systems of various kinds – see my Coincident Feedback entry – was swept under by a wave of commercial synthesizers made for pop-music recapitulations – cf. the David Dunn quote in my Born Rationalizing Culture California entry.

To try to get a handle on all this I've been making a list of what I find to be landmarks in the progress of Art, Music, and Cybernetics since WWII:

My Timeline

The first interesting thing is that one needs to go to three completely different floors of the library to find the relevant historiography. Even though the folks were often talking-to and working-with each other, there's very little crossover – books are about one or the other but never all – so it's very hard to see the similarities. And the second interesting thing is that, once you see them all lined up:

They all crashed at the same time in the early 1970's!

Going back to Jack, he published a paper in 1979 that provides a critical analysis of some of these major events. I think the title reflects his position:
Burnham, J. (1979). Art & Technology, the Panacea that Failed. The Myths of Information, ed. Kathleen Woodward, Coda Press
And as another little bit of evidence for this I noticed a significant lacuna in the listing of major shows in:
Paul, C. (2008). Digital art. Thames & Hudson.

Yup... it all (save two minor exceptions) drops out of existence between 1970 and 1996... Now we need to bear in mind that the author is only considering Digital art, and not all Art/Tech endeavors including most Audio or Sound art – which did carry a sputtering torch into the new century – nor even (Analog) Video... but... but...

Just WTF happened here?

Stay Tuned!

(continue to Part 2: The Perfect Storm)

Monday, November 26, 2012

Modes of Inquiry

In recent discussions about the respective roles of Art and Science in our culture I keep running up against He-Said/She-Said sorts of arguments about how each camp works. The first problem is that many people don't seem to have a clue about how anyone else actually works, so you get blanket statements like, "As a scientist, how can you claim to be creative when all you do is work with data?" Following from that, the second problem is that the putative categories are presented as being somehow Black and White rather than subtly shaded. And a third problem is that there are more than two categories... To which point I post this diagram:
Tetrahedron of Reality
As a starting point I calve Engineering from Science (the oft mis-identified Technology) -- and Art -- and then add Philosophy as a separate discipline. Each of these nodal points is a particular mode of inquiry into the working of the world with its own processes, methods, and results. However, in practice, none of them are pure. As the impurest of the impure I put a little bricoleur bouncing around inside the space as needed or -- more likely -- at random.

I probably should include a node for Society, i.e., politics, economics, and social manipulation/persuasion, but A) I don't understand them; and, B) I can't draw a 5-space collapsed into two dimensions. Which is probably too bad because without Society you can't really do anything in the other modes modulo a trust fund. But so it goes.

In the spirit of twentieth-century management-think I also posit a set of cross-cutting dichotomies:
  • Process -- Rational or Empirical (using the Cartesian meaning of both);
  • Methods -- Logical or Fanciful (there must be a better opposite, no?);
  • Results -- Theoretical or Physical (i.e., in the mind or in the world);
  • Product -- Useful or Ephemeral (a practical thing or an entertaining idea?).
So one could have a Rational Process using Fanciful Methods with a Theoretical Result whose Product is Ephemeral, which might be a novel or most of post-modern philosophy. Or an Empirical Process using Logical Methods with a Physical Result whose Product is Useful, and get an iPhone. Maybe. Or change the Result to Theoretical and end up with the Large Hadron Collider...
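
And because I can't resist, here's a throwaway Python sketch that grinds out all sixteen combinations (the tags on the three worked examples are mine, taken from the paragraph above, and debatable):

    from itertools import product

    # The four cross-cutting dichotomies, in the order listed above.
    dichotomies = [
        ("Rational", "Empirical"),     # Process
        ("Logical", "Fanciful"),       # Methods
        ("Theoretical", "Physical"),   # Results
        ("Useful", "Ephemeral"),       # Product
    ]

    # The three pairings worked out in the paragraph above.
    examples = {
        ("Rational", "Fanciful", "Theoretical", "Ephemeral"): "a novel, or post-modern philosophy",
        ("Empirical", "Logical", "Physical", "Useful"): "an iPhone, maybe",
        ("Empirical", "Logical", "Theoretical", "Useful"): "the Large Hadron Collider",
    }

    for combo in product(*dichotomies):
        note = examples.get(combo, "(exercise for the bricoleur)")
        print(", ".join(combo), "->", note)

Thirteen of the sixteen slots come up empty, which seems like a lot of unexplored territory.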

Being grey areas, none of the modes has a lock on any particular set of cross-cuts, although some may be more likely candidates than others. I'm having a hard time imagining a Rational, Fanciful, Theoretical, Ephemeral Engineering project ... But that might be something for our bricoleur to try, eh?

Since the probability of anyone actually reading this is approximately 1 in 10^9 (one in one billion), which is a factor of ten less likely than winning the lottery, I guess it doesn't matter. But if you made it this far, as an Empirical, Fanciful, Theoretical, & Ephemeral experiment, click one of the little Reactions buttons down there so I know you were here.

Friday, May 18, 2012

Confusion Theory

On Wednesday I went to the SFI public lecture by James Gleick (né Chaos and now The Information). Most amazingly, he dispensed with the PowerPlonk and actually did a lecture from notes. (The night before, I attended our regular VFD medical training. I got there early because the guy who is supposed to set up all the media crap whined about me hogging the station's notebook computer to do real work and demanded that I deliver it to the training site early. There was this very strong-handshake kind of older gentleman standing around wearing a shirt from one of our sister-districts so I introduced myself just to be friendly. He said something like, "I guess there will be a PowerPoint presentation and all that." And I said, "It's pretty much required these days isn't it?" Turns out he was our presenter -- a retired Army flight surgeon -- and, yes, he had a PP of gory field-surgery photos ready to go). Less amazingly, he (Gleick) spent the first 15 of his 30 minutes talking around Shannon Information Theory without actually coming out and admitting that Shannon Information is NOT what every layman in the world thinks it is: It has nothing to do with Meaning (see my attempted simplification here). He finally made a few passes at separating Information from Meaning but I felt that the border was rather porous through the remainder of his talk.

While trying to formulate a post-question, it occurred to me that they (Information and Meaning) are orthogonal measures in much the same way as Entropy and Complexity are in the classic Crutchfield & Young (1989) paper:
Since Information is just how many bits you have to play with and is measured as entropy, let's call the X-axis Information Entropy (which it actually is in the context of this paper). Then let's call the Y-axis -- hmm, not exactly Meaning... I haven't heard a name for this quantity bandied about, so something similar -- Data. By Data I "mean" self-correlation and/or perhaps mutual information among otherwise random bits of Information -- or maybe, Facts. If you have a noisy Information stream you might be able to extract some actual Data from it, e.g., get a series of temperatures from a bunch of ice core compositions. And to beat the analogy a little harder, you don't get much Data at the entropy extremes. If entropy is low, the Information is a constant, and if it's high, it's completely random.
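
To make those two axes concrete, here's a little Python sketch. The definitions are my stand-ins, not gospel: "Information" as per-symbol Shannon entropy, "Data" as lag-1 mutual information between adjacent symbols:

    import math
    import random
    from collections import Counter

    def entropy(seq):
        """Per-symbol Shannon entropy, in bits."""
        n = len(seq)
        return -sum(c / n * math.log2(c / n) for c in Counter(seq).values()) + 0.0

    def lag1_mutual_info(seq):
        """Mutual information between adjacent symbols, in bits."""
        pairs = list(zip(seq, seq[1:]))
        n = len(pairs)
        p_xy = Counter(pairs)
        p_x = Counter(x for x, _ in pairs)
        p_y = Counter(y for _, y in pairs)
        return sum(c / n * math.log2((c / n) / (p_x[x] * p_y[y] / n / n))
                   for (x, y), c in p_xy.items()) + 0.0

    streams = {
        "constant":   "a" * 10000,                    # zero entropy
        "random":     "".join(random.choice("abcd") for _ in range(10000)),
        "structured": "ab" * 5000,                    # entropy WITH structure
    }

    for name, s in streams.items():
        print(f"{name:10s}  Information = {entropy(s):.3f} bits"
              f"   Data = {lag1_mutual_info(s):.3f} bits")

The constant stream has no bits at all; the random one has the most bits but (up to sampling noise) no correlations to extract; only the structured stream yields any "Data".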

But our Data doesn't really mean anything until it gets combined with other facts extracted from other streams and related back to the real world. So Meaning is yet a third axis to consider. That axis is Semiotics, which is exactly the study of how symbols take on meaning.

Unfortunately my question window closed long before I could articulate this.

But in the course of re-thinking it, another thing occurred to me. The lecture was titled "How We Come to Be Deluged by Tweets". Twitter is a perfect example of increasing Information Entropy on the web. So, in "fact", using Shannon Information to describe the contents of the internet may not be so far off base.

Monday, September 5, 2011

¿Artificial? Intelligence

Last week my friend David Krakauer presented three lectures on Intelligence -- cf. Cognitive Ubiquity -- in the SFI Ulam Lecture series. I thought the slides were online someplace but I can't find them; however, the videos should be posted at santafe.edu sometime soon. He made some good, and some arguable, points and was quite entertaining in the process.

One of the good points is that what we call intelligence, if we can even define it, goes much deeper than the human cortex. He showed a video clip of a white blood cell "chasing" a bacterium through a forest of red cells where the white cell appeared to be behaving quite smartly in its search-and-destroy mission. He then made the point that the low level components of computerized artificial intelligence have none of the characteristics of that "simple" white cell, e.g.: NAND gates don't adapt.

I think this is not an apt comparison. Where transistors are atoms, NAND gates are more comparable to simple molecules. Large Scale Integrated circuits -- memory chips and the like -- might measure up to the capabilities of a complex organic molecule, and micro-controllers could be compared to one or two neurons. To support my claim I present you with three series-connected neurons: Each neuron might (conservatively) have 1000 synapses, which gives the chain on the order of 10^3 × 10^3 × 10^3 = 10^9 possible synaptic states. Show me a microchip that does that. Then realize that there are about 10^11 neurons in the human brain and another (hand-waving estimate) 10^10 elsewhere in the body.

This is the scale of the problem we have.

But Wait! There's More!

Getting back to the hand-waving-estimate thing... A year or so ago I tried to estimate the Shannon Information content of our nervous system in order to have a reasonable retort when folks asked me why my robots behaved so stupidly. I was not successful, because I found it almost impossible to get good estimates of three -- to me -- important values:
  1. The number of Sensor Inputs;
  2. The number of Motor Outputs;
  3. The resolution of a "Synaptic Connection".
I did dig up swagish values for the number of Inputs, and finally settled on the number of muscles as a stand-in for the Output count. But I could not get anyone to hazard a guess at #3 -- no one seems to know how much you can vary a synaptic connection weight: the putative mechanism for learning and adaptation. Everywhere I asked I got some run-around about how it doesn't really work that way, or some other long-circuit "I don't know". As a geek this was surprising, because some of the first things one wants to know about a computer program are how much input and output and what resolution, accuracy, and speed are required.

Anyway, I put together a cheat sheet of what I found: here. And just so you don't have to follow -- and make sense of -- that link, here's the chase:

    Input:          10^8 eye sensors; 10^7 touch, hearing, taste, and smell
      Sight:         5*10^6 cones + 1.3*10^8 rods ~= 1.4*10^8 sensors
      Touch:        (swag) 3*10^6 sensors
      Hearing:    8.8*10^2 sensor neurons
      Taste:        (swag) 1*10^6 sensors
      Smell:        1.2*10^6 sensors
    Output:       Estimate, 300-700 muscles in a human body

I also guessed at 8 bits -- for convenience -- of synaptic weight, and put the neural firing rate at 50 per second, with each synapse doing a scale and each neuron doing a sum operation. That gave me, for the brain only:

  • 7*10^14 bytes, or 5.6 petabits, of state
  • 3.5*10^16, or 35 petaflops, of calculation per second
-- This is the scale of the problem we have --
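
For the curious, here's a Python scratch-pad that reproduces those figures. Note the per-neuron synapse count: 7000 is my back-inference -- it's the value that makes the quoted numbers come out -- and does not appear in the cheat-sheet excerpt above:

    # Scratch-pad for the brain-only numbers above. The 7000 synapses
    # per neuron is my back-inference (the value that reproduces the
    # quoted figures), not a number taken from the cheat sheet.
    NEURONS             = 1e11   # neurons in the brain
    SYNAPSES_PER_NEURON = 7e3    # inferred, see note above
    BITS_PER_SYNAPSE    = 8      # weight resolution, chosen for convenience
    FIRING_RATE_HZ      = 50     # firings per second

    synapses   = NEURONS * SYNAPSES_PER_NEURON   # 7e14 synapses
    state_bits = synapses * BITS_PER_SYNAPSE     # 5.6e15 bits of state
    ops_per_s  = synapses * FIRING_RATE_HZ       # one scale op per synapse per firing

    print(f"state: {state_bits / 8:.1e} bytes = {state_bits / 1e15:.1f} petabits")
    print(f"rate:  {ops_per_s:.1e} ops/s = {ops_per_s / 1e15:.0f} peta-ops per second")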

It is also interesting to note that the number of touch sensors is of the same order of magnitude as the number of cones in the eye. Until now, much of the interest in neural signal processing has been in the visual cortex. But the motor cortex may have as many inputs and probably many more outputs. The visual system is pretty good at linear algebra, but the motor system solves simultaneous differential equations each time you toss a wad of paper at the trash can. So literally putting a robot out in the field may be a very fruitful line of research after all.


Saturday, July 30, 2011

Seeing Violet

Last month I got into a pointlessly circular -- what other kind are there I guess, eh? -- online "discussion" about how one sees the spectral color violet: Since it is out there beyond blue, what sort of sensory signals are we interpreting in order to believe that we see a distinctive color? The "discussion" became so circular that I began to think that I had actually never seen real violet, and sans spectrometer or even a diffraction grating it may be that I will never know.

Subsequent to the referencing of a Nature paper as evidence of something or other about the actual spectral sensitivities of human retinal cones, the "discussion" made a small side-trip into my standard whinge about the wall of pay-for-play scientific journalism. I did finally extort the referenced paper, and a few others, from someone-with-access and found that none of them answered any of the questions very well. As usual. But worse they don't seem to agree amongst themselves.

The problem is that spectral-violet -- which I am going to somewhat arbitrarily define as a "color" with a wavelength below 400nm -- falls outside of the gamut of all common color reproduction methods so it cannot be viewed on a screen or in print. To exacerbate the issue, the non-spectral purples and magentas are often confused with violet, even in nomenclature, such that folks often say, "Sure, violet is blue with a bit of red in it."

To start with I went off looking for a good plot of the spectral response of the cones of the human retina. The obvious one was from the wiki Color Vision page:
 (note: I replaced the missing wiki image 7/28/13...)

But it has a linear vertical scale AND the levels are normalized such that one cannot judge relative sensitivity. It also uses the scientifically accurate but completely confusing labeling: S,M,L for Short, Medium, and Long wavelength rather than just coming out and saying Blue, Green, and Red like anyone talking about it would. It does show the more-or-less center points of the sensitivities to be around:
  • blue(S) 440 nm
  • green(M) 540 nm
  • red(L) 570 nm
Since what I'm interested in is the response right at the origin of that graph I need more detail in the tails. So I found some -- actually a lot of -- log plots at the Color and Vision Research Laboratory at University College London:


Unfortunately most of these are also normalized, and also seem to have had some CIE post-processing applied -- if one can make sense out of the accompanying information. But at the bottom of the list there are a couple that look like they aren't normalized, e.g., the Smith & Pokorny (1975) data (also subject to post-processing per the notes, but it may be good enough for me):

Modulo the journal accessibility problem, I did make a number of passes at finding actual papers that might be the source for this data. I found two by Stockman et al. (one for the M & L cones (1993) and another for the S cones (1999)), but I'm not sure how to match up the vertical scales. And an earlier one by Wald (1964) that has all three in one plot but has the sensitivities and curve shapes jumbled up compared to every other example (get back to me if you make sense of any of it, ok?):


Then I moved on to color reproduction, starting at the wiki CIE 1931 color space page, which (sort of) explains those lovely "color tongue" diagrams on which one can plot various primary sets to indicate the relative gamuts. This one shows the "CIE RGB primaries" forming the lower triangle, which are at:
  • blue 435.8 nm
  • green 546.1 nm
  • red 700 nm 
I have also taken the liberty of marking the position of the SML cone center sensitivities on this in white for reference. Interesting to note that the nominal red sensor is actually more like yellow in our scheme of things:

For comparison, here's wiki's sRGB space which is around-and-about what one can display on a computer screen:

Note that the colors on all these images are fake, since anything outside of the marked triangles cannot be imitated by mixing the apex colors.  Also notice that spectral-violet falls off the bottom of the tongue at the lower left corner, and can only be reproduced using an emitter of less than 400nm light. Which no one has.
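
As a sanity check on that claim, here's a Python sketch that runs a few spectral colors through the standard XYZ-to-linear-sRGB matrix. The color-matching-function samples are my transcriptions from the CIE 1931 2-degree tables, so treat them as approximate; any negative channel means the color lies outside the sRGB triangle:

    # Push a few spectral colors through XYZ -> linear sRGB and see
    # whether they land inside the gamut. CMF samples are my (approximate)
    # transcriptions of the CIE 1931 2-degree tables.
    CMF = {  # wavelength nm: (xbar, ybar, zbar)
        400: (0.0143, 0.0004, 0.0679),   # spectral violet
        440: (0.3483, 0.0230, 1.7471),   # blue-cone territory
        520: (0.0633, 0.7100, 0.0782),   # tip of the tongue
    }

    # Standard XYZ -> linear sRGB matrix (D65 white point).
    M = [( 3.2406, -1.5372, -0.4986),
         (-0.9689,  1.8758,  0.0415),
         ( 0.0557, -0.2040,  1.0570)]

    for nm, xyz in sorted(CMF.items()):
        r, g, b = (sum(m * c for m, c in zip(row, xyz)) for row in M)
        verdict = "in gamut" if min(r, g, b) >= 0 else "OUT of sRGB gamut"
        print(f"{nm}nm: linear RGB = ({r:+.3f}, {g:+.3f}, {b:+.3f})  {verdict}")

All three come back with a negative channel somewhere -- the numerical version of "the colors on these images are fake" -- with 400nm violet needing, as advertised, light that no RGB emitter supplies.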

The next thing that occurred to me is: "How do we distinguish all those blue-greens at the top of the tongue if our green sensor is so far down the curve?"

I think the answer to that is in looking at the relative responses of each cone when exposed to the spectral stimulus, compared to an attempted "synthesizing" stimulus. In this diagram I have marked 520nm (from the tip of the tongue) as the target and indicated the peak sensitivities of each cone in the relevant color:


By comparing the ratios of "activations" between the 520 line and the combination of 440 and 540 lines, I think we can see that the RED signal ratio may be much higher for a synthesized color. This will tend to de-saturate the reproduced color as compared to its spectral "equivalent", so reproduced colors will be "pulled" in to the center of the triangle.  But through the offices of the gradual fall off in response of the red and green cones, spectral colors will invoke a uniquely distinguishable response.

Given the above, it then appears that spectral-violet is formed by an identical, low response from each cone, as compared to the actual blue stimulus further to the right in the spectrum.
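
Here's my attempt to put numbers on the ratio story, with loudly fake Gaussian cones: the peaks come from the SML list above, while the 35nm width is an invented round number, not measured data. The sketch solves for the 440/540 primary mix that matches the S and M responses to spectral 520nm, then compares the leftover L (red) response:

    import math

    # Toy Gaussian cones. Peaks are from the SML list above; the 35nm
    # width is an invented round number, not measured data.
    PEAKS = {"S": 440.0, "M": 540.0, "L": 570.0}
    SIGMA = 35.0

    def cone(name, nm):
        """Relative response of one cone type to monochromatic light."""
        return math.exp(-((nm - PEAKS[name]) ** 2) / (2 * SIGMA ** 2))

    target = 520.0  # spectral blue-green, up at the tip of the tongue
    spectral = {c: cone(c, target) for c in PEAKS}

    # Solve for amounts (a, b) of the 440nm and 540nm primaries that
    # reproduce the S and M responses to the spectral target (2x2 Cramer).
    s1, s2 = cone("S", 440), cone("S", 540)
    m1, m2 = cone("M", 440), cone("M", 540)
    det = s1 * m2 - s2 * m1
    a = (spectral["S"] * m2 - s2 * spectral["M"]) / det
    b = (s1 * spectral["M"] - spectral["S"] * m1) / det

    mixed_L = a * cone("L", 440) + b * cone("L", 540)
    print(f"L response to spectral 520nm: {spectral['L']:.3f}")
    print(f"L response to 440+540 mix:    {mixed_L:.3f}")
    print(f"mix-to-spectral L ratio:      {mixed_L / spectral['L']:.2f}")

With these toy curves the mix carries roughly 1.6 times the red-cone response of the real 520nm light, which is the de-saturation story. But don't trust the toy out at 400nm: the violet question hangs on the real measured tails of the cone curves -- exactly the data I couldn't pin down above.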

So that's my story to which I stick until I find someone who can explain it to me.