Showing posts with label Systems Art. Show all posts

Tuesday, June 18, 2013

Artistic Rendering

Artists use external systems, anything from simple tools to complex human interactions, to bring their ideas into the world. When using a system we provide inputs, turn a crank of some kind, look at what comes out the other end, and then try to intuit its actual behavior. When the behavior is desirable we want to control it to some purpose.  In the Arts this purpose is usually to produce -- render -- a tangible product such as a print, film scene, or musical interlude.

At its Latin roots the word render means to give or put. This made its way into English as:
  1. Transmit to another (render a verdict);
  2. Create an image (render the model);
  3. Cover a surface (render stucco);
  4. Extract by melting (render lard).
The purpose of most (all?) human activity is to transmit ideas or goods and services, and artistic activity brings the first definition to the fore. In the Media Arts the second definition applies directly to the activity of using a computer to make an end product. However, I submit that this usage, more often than not, devolves to one or both of the last two senses, either covering, resulting in a thin facade, or melting, resulting in schmaltz.

There was an Alternative


In the late 1960s Jack Burnham hypothesized something he called Systems Art (for a start see my Cybernetic Serendipity entry). His ideas were quickly absorbed and diluted in the Conceptual Art wave leaving us with two divergent online definitions of Systems Art.

Wikipedia says:
In systems art the concept and ideas of process related systems and systems theory [cf. Cybernetics] are involved in the work [and] take precedence over traditional aesthetic object related and material concerns.
ArtNet.com (now a defunct link, but from the Grove Dictionary of Art via the Wayback Machine) says:
[Systems art is a] Term loosely applied to art produced by means of a systematic or highly organized approach to an image or concept.

The latter refers to what are called Generative Systems, into which the products of artistic rendering usually fall. The system produces a product and we don't really much care what happens inside the black box. It's all about the Form of what is produced.

The former is more interested in the system's behavior and provenance. What is interesting is the Function of the system itself. In the 1960s interesting behavior was not easy to produce, but in the 2010s we have the capability to create much more complex systems. Systems which have lives of their own.

An Aesthetics of Function rather than Form


With an Aesthetics of Function we consider the qualities of the system's behavior, and by extension into second-order cybernetics, the quality of the relationship of the observer to the observed -- the artist-viewer's relationship with the art-system.

While there are systems which have no inputs (the entire Universe might be an example), we are more interested in those with the structure described above, where by providing appropriate inputs we can mold the outputs in certain ways:

Input - Process - Output

For artificial systems this requires input sensors, internal cross connections, and output actuators. This usually invokes Art and Technology, by which we usually mean Electronic Technology, which in this century usually means Computer Technology. And this is the medium at the heart of "interactive" New Media art.
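The Input - Process - Output structure can be put in a few lines of code. This is a hypothetical sketch in Python; the sensor and actuator are stand-ins for real hardware, and the names and values are purely illustrative:

```python
# Minimal sketch of Input - Process - Output:
# a sensor provides input, an internal process transforms it,
# and an actuator renders the output.

def sensor():
    """Stand-in input: a real system would read a physical sensor."""
    return 0.7  # e.g., a normalized light level

def process(reading, gain=2.0):
    """Internal cross-connection: here, a trivial transformation."""
    return min(1.0, reading * gain)

def actuator(level):
    """Stand-in output: a real system would drive a physical actuator."""
    return f"output level {level:.2f}"

# One turn of the crank: input -> process -> output
print(actuator(process(sensor())))
```

Everything interesting about a system's behavior lives in `process`; the rest of this essay is about how much autonomy we allow that middle box.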

I put "interactive" in quotes because it is usually a mis-applied description. Most "interactive" art is better described, at a somewhat lower level of function, as "responsive". In order to clarify this I propose the following continuum of system function and behavior.

Responsive:

A doorbell responds by ringing when we push the button. The old fashioned ding-dong type might even be described as interactive because the button push causes the ding and the release causes the dong, giving us some modicum of control over the proceedings. But the same inputs always produce the same outputs.

Interactive:

Learning to use a tool or play a musical instrument is interactive in the sense that we have to experiment to find the capabilities and interfaces that allow us to use the system. While the ding-dong-bell may fit this description, its state-space -- the number of different conditions it might be in -- is very small and easily explored.  Playing a piano requires the manipulation of a much larger set of states with varying inputs and outputs. The user and the system form a feedback loop which ultimately produces the output, but only the user changes his/her behavior.

Adaptive:

If the system changes its behavior as we use it -- generally we like it better when the changes benefit our intentions but it could also be an obstinate SOB -- it is adapting. To do this it needs a large state-space which changes over time, and this requires memory. Pushing the interactive tool analogy rather harder than it should be pushed, tuning a guitar while learning to play it might be considered adaptive on the part of the guitar. The unfortunate thing is that there are very few examples of adaptive behavior in the arts. Some video games or those films in which one can vote for various outcomes might fit the bill.

Collaborative:

If both the system and the user adapt to each other in order to render a result we have the start of collaboration.  I know of no complete examples of this in the artificial art-world.

This set of way-points is ordered by increasing autonomy and independence of control. Responsive systems have very little control over their behavior whereas a collaborative system ideally shares control equally.  Another way to put it is that they are increasingly lifelike. Or Artificial Life Like.
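The responsive/adaptive distinction can be made concrete in code. A hypothetical sketch (these classes are illustrative, not any particular art-system): the responsive system is a fixed mapping from inputs to outputs, while the adaptive one carries a memory that changes its future behavior.

```python
class ResponsiveBell:
    """Responsive: the same input always produces the same output."""
    def press(self):
        return "ding"
    def release(self):
        return "dong"

class AdaptiveInstrument:
    """Adaptive: a memory of past inputs changes future behavior,
    so the same input need not produce the same output twice."""
    def __init__(self):
        self.history = []  # the state-space grows with use
    def play(self, note):
        self.history.append(note)
        # Behavior shifts over time: repeated notes get emphasized
        return note * self.history.count(note)

bell = ResponsiveBell()
print(bell.press(), bell.release())  # always ding dong

inst = AdaptiveInstrument()
print(inst.play("la"))
print(inst.play("la"))  # the system has changed since the first call
```

A collaborative system would go one step further: both `inst` and its player would carry such a memory and adapt to each other's output.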

The Musical Analog


Musical production provides better examples of my categories, and in general, has made more progress with both humans and their instruments. Gordon Mumma's Hornpipe (1967), for waldhorn, valvehorn & cybersonics, is an early example of an interactive and adaptive system of performer, instrument, and space.  George Lewis's player algorithms, e.g., "Rainbow Family" (1984), for soloists with multiple interactive computer systems, which he described in such terms as (from my memory of a talk he gave at Mills College in October, 1984), "This guy is sort of a backup player where this other guy really likes to play lead," ventured into the collaborative.

In music we might consider a symphony orchestra to be, ideally, responsive to exactly the requirements of the score and conductor. In reality of course the conductor and players interact and adapt to each other. But in the extreme, consider Stockhausen's use of computers to render his compositions such that he had complete (well, almost) control over all the parameters of pitch, timbre, and time.

A string quartet provides a better example of interaction. Traditionally there is a detailed score under which each player has some autonomy of interpretation, and the ensemble as a whole must interact to produce the result.

A group composition, for instance a popular band developing a song, may cover the ground from interaction to collaboration but probably displays more of the features of adaptation. Each player makes a 'riff' off of the suggested material and all of these inputs are adapted to each other resulting in a more-or-less fixed end product.

A free jazz ensemble -- when they actually listen to each other -- is an example of a collaborative system. Each player makes an equal contribution while interacting with and adapting to the other players.

Conclusion


Learning to use any system is interactive in the sense that we need to probe it and learn from its responses. In this process we are adaptive, so the system as a whole exhibits that property.  However the external system may not learn anything about us. When the system passively adapts in some form we have a master-slave relationship, but also the beginning of a dialog. When the system experiments with us -- hopefully benignly -- and we adapt in turn, then we have at last the beginnings of a true collaboration.

This should be the goal of Artificial Life in the Arts.


Friday, January 18, 2013

A Spectacular Simulacra

Abstract

From the '50s to the '70s there were a number of notable collaborations between artists, scientists, and engineers, many of them inspired by the new field of Cybernetics. They eventually foundered on the Scylla and Charybdis of ego and corporate finance. In the 1970s, independent funding dried up, commercial electronic devices undermined homebrew experimentalists, Conceptual Art -- with what I view as a misreading of the meaning of Shannon's Information Theory -- replaced Praxis with Platonism, and Postmodern Critical Theory swept the rest before its mighty incomprehensibility.

Instead of a new sensibility, e.g., Cybernetically based Artificial Life, what we got was MTV.

Now, well into a new millennium, we have a chance to correct this. For the most part the machines we have created are Automata rather than Autonomous beings. We need to relax our desire for control over what we create. We also need to move them out of Simulated virtual environments and Situate them in physical reality. Without the constraints of a grounding rod in the real world they drift on fumes and are unable to cross the syntactic/semantic barrier to understanding.

When machines are autonomous they may no longer be of any use to us. Their behavior and morphology may not be aesthetically interesting. They do not have to explain their motivations or behavior. They can just live their own lives.

Complexity Science, in areas such as self-organization and artificial life, provides inspiration as well as mechanism for this work. And strangely enough it may be artists who are best positioned to accomplish the project -- where else but in the arts can a robot just relax and not have to assemble widgets or blow things up 24/7? However, Art's research arms have atrophied to the point that it might be better to use a new title: Bricoleur.

(And yes, thanks to Guy Debord and Jean Baudrillard for suggesting the essay's title.)

Contents

A three part essay on this blog:

I also have a timeline of relevant events: Schip's timeline.
And my extended abstract: Ich Bin Un Bricoleur.

Into the Grey Areas

(This is part 3 of 3 of my essay A Spectacular Simulacra. If you haven't been following along, see the abstract and index here.)

Compare and Contrast



Compressorhead -- Ace of Spades




Georgia Tech -- Shimon, robotic marimba player


There are two ways of looking at these pictures:

  
Frank Popper (1993), Art of the Electronic Age

There is no doubt that this conjunction of the real and the virtual engendered by simulation is at the heart of present research by many technological artists. They consider that 'virtual space', 'virtual environments', or 'virtual realities' in general usher in an entirely new era in art, allowing the participants a multi-sensorial experience never encountered before.

The key words 'artificial intelligence' as an aesthetic problem open up a vast, time-worn discussion of the relationship between man and the machine. Artificial intelligence embraces techniques which enable machines, and in particular computers, to simulate human thought processes, particularly those of memory and deducation [sic].


  Hans Haacke (1967), Untitled Statement
In the past, a sculpture or painting had meaning only at the grace of the viewer. His projections into a piece of marble or canvas with particular configurations provided the programme and made them significant. Without his emotional and intellectual reactions, the material remained nothing but stone and fabric. The system's programme, on the other hand, is absolutely independent of the viewer's mental participation. It remains autonomous -- aloof from the viewer. As a tree's programme is not touched by the emotions of lovers in its shadow, so the system's programme is untouched by the viewer's feelings and thoughts.

Naturally, also a system releases a gulf of subjective projections in the viewer. These projections, however, can be measured relative to the system's actual programme. Compared to traditional sculpture, it has become a partner of the viewer rather than being subjected to his whims. A system is not imagined; it is real.


In the first video we have a masterpiece of pre-programmed German engineering (not to be stereotypical, but just imagine what the Swiss would do with it, eh?). In the second the machine gets a bit of a chance to decide how it will behave.

In the first quote Popper posits that technology is used to simulate virtual environments for the viewer's delectation. In the second, which is a founding document of Systems Art, Haacke partners the art-system with the viewer in the real world.

So, we can have machines that are either pre-determined Automata or else Autonomous beings. And they can be either virtual or real, i.e., Simulated or Situated in reality. One path gives us total control. The other requires, if not abdication of control, at least collaboration with our materials and creations.


An Autonomous Situation

Art can be ... or could have been ... a research program:
Repetto, Douglas (2010).
Doing It Wrong
.
(from the 2010 Symposium -- Frontiers of Engineering: Reports on Leading-Edge Engineering)

Although musical innovators throughout history would have articulated these ideas differently, I believe they shared the central tenets that creative acts require deviations from the norm and that creative progress is born not of optimization but of variance. More explicit contemporary engagement with these ideas leads one to the concept of creative research, of music making with goals and priorities that are different from those of their traditional precursors -- perhaps sonic friction, in addition to ear-pleasing consonances, for example, or "let’s see what happens" rather than "I’m going to tell you a story."

The problem is that most machines, even those of the art variety, are well controlled models. But what is interesting is new behavior, not the recapitulation of what went before. Rather than models we should be building autonomous beings that have lives of their own and behave in new ways. This is a research program.

When a system gets a chance to decide how it will behave we may not perceive the results as aesthetically interesting. From our lofty height we might not recognize it as living. And for now, it doesn't even have to be very complicated. One can make the argument that a thermostat responds to its feelings of being too hot or too cold and adjusts its environment accordingly. Since we have no idea what its internal mental states might be this description is just as valid as the physical explanation of how the sensors and actuators work. (I need to emphasize that I am not anthropomorphizing machines here but rather mechanizing human responses, putting both on a similar level.) Giving machines lives that are of no practical use while not going out of the way to make them attractive, didactic, or transparent allows them to rise through ontological cracks to just being themselves.
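The thermostat argument is easy to state in code. Described purely from the outside, the same loop reads either as "sensors and actuators" or as "feelings and responses"; nothing in the mechanism decides between the two descriptions. A hypothetical sketch (the setpoint and deadband values are invented for illustration):

```python
def thermostat_step(temperature, setpoint=20.0, deadband=1.0):
    """One step of the feedback loop: sense, compare, act.
    Whether we say the device 'measures' or 'feels' the cold,
    the observable behavior is identical."""
    if temperature < setpoint - deadband:
        return "heat on"    # "too cold"
    if temperature > setpoint + deadband:
        return "heat off"   # "too hot"
    return "no change"      # "comfortable"

print(thermostat_step(17.5))
print(thermostat_step(22.0))
print(thermostat_step(20.3))
```

The deadband is what keeps it from chattering on and off at the setpoint -- a small piece of internal state-space standing between stimulus and response.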

In a virtual world where interactivity and intelligence are simulated this can't be done easily. The beauty and curse of simulation is that it can respond in any way we like; we can make up any structure, or none at all. This is our Spectacular Simulacra: It's potentially all noise and no signal. Just like listening to a radio tuned between stations, when there is no signal there is very little to be learned from an interaction. On a large scale, this is a reason that Wikipedia is considered unsuitable for academic references. Anyone can edit it to say anything they like, and it may not be corrected -- whatever that means -- quickly or accurately. The US Congress has been a serial offender in this respect.

However systems that are situated in the real world get input that already has structure; the constraints on the system make it work. It is this interaction with the world, the constraints and the underlying materials, that gives us the feedback we need to learn and function. If a machine interacts with a physical environment it has a better chance of grounding its knowledge and jumping the syntactic/semantic fence. As an example, you may use the phrase "fire is hot" in a syntactically correct sentence. But I assert that the only way you will learn the semantic meaning, and dare I say the underlying semiotic relationships, is if I hold your feet to the fire.

[edit, added 1/27/13]
When talking of living machines with minds of their own, the specter of Dr. Frankenstein's Monster appears. What we forget is that the Monster wasn't a monster until after it accidentally killed and was further persecuted for being different. Looking deeper into the question, the fears that Machines Will Enslave Us are rooted in the assumption that those machines will behave as animals (and humans) do. But when creating our artificial life forms we might dispense with the Darwinian necessities of Fear, Disgust, Anger, Greed -- and the rest of the deadly sins upon which modern economics is based -- and instead have them optimize the desire to, e.g., be the best possible musical improviser who knows when to lay back and listen and when to barge right in.

So where do we start?


Is Chaos Theory Postmodern Science?

This is the title of a paper -- which seems to have vanishingly close to zero citations -- by a Professor of Interdisciplinary Studies who comes to the unsurprising conclusion that:
Postmodern science does, in fact, exist, and literature just may be it.
Mackey, J. L. (2006).
Is Chaos Theory Postmodern Science?

(in reconstruction: studies in contemporary culture, Jan 24, 2006)

Now, depending on your parser, this is either a tautology or a category error. However, if one reads "Chaos Theory" as Complexity Science, it does contain a kernel of truth. At its roots, Post Modernism is interested in systemic structures. In its branches it deconstructs those systems to find underlying paradigmatic narratives -- assumptions -- which (in)form, and even create, the structures. Complexity Science, rooted in Cybernetics, also takes a systems view. It shares with Post Modernism an interest in how underlying structure gives rise to system-wide behavior. Complexity also provides Emergence as a framework for considering that systems may be more than the sum of their parts -- accepting that some phenomena cannot be subjected to Modernist reduction.

As a counterexample to Mackey, and in more depth, I recommend these two books which look into some of the background and possibilities. (Note that I'm biased as the authors are friends...)

Victoria Alexander posits self-organization as an explanation for the perception that natural phenomena have goals or develop towards some final purpose (teleology). In chapters 1-4 she "deconstructs" what purpose means and how it might arise from otherwise non-directed mechanisms, both in nature and human artifact. As a bonus, chapter 5 is a (fairly) clear explanation of C.S. Peirce's semiotics...
Alexander, V. N. (2011).
The Biologist's Mistress: Rethinking Self-organization in Art, Literature, and Nature.
Emergent Publications.
From chapter 1:
What I do share with all teleologists, authentic or so-called, is a deeply felt folk-sense of purposefulness in nature. It is clear to me that many processes and patterns in nature can't be fully explained by Newton's laws or Darwin's mechanism of natural selection. These are processes that are organized in ways that spontaneously create, sustain and further that organization. Although I believe that mechanistic reductionism is inadequate to describe these processes, I don't believe that purposeful events and actions require guidance from the outside -- from divine plans or engineering deities. Nature's purposeful processes are self-organizing and inherently adaptive, which is the essence of what it is to be teleological.

John Johnston provides a history of Cybernetics, Artificial Life, and related fields with an analysis of their significance to modern culture. If you are not Lacanian I would skip chapter 2, but Section III, Machinic Intelligence, is especially relevant to the program outlined here.
Johnston, J. (2008).
The allure of machinic life: cybernetics, artificial life, and the new AI.
MIT Press.
From the preface:
This book explores a single topic: the creation of new forms of "machinic life" in cybernetics, artificial life (ALife), and artificial intelligence (AI). By machinic life I mean the forms of nascent life that have been made to emerge in and through technical interactions in human-constructed environments. Thus the webs of connection that sustain machinic life are material (or virtual) but not directly of the natural world. Although automata such as the eighteenth-century clockwork dolls and other figures can be seen as precursors, the first forms of machinic life appeared in the "lifelike" machines of the cyberneticists and in the early programs and robots of AI. Machinic life, unlike earlier mechanical forms, has a capacity to alter itself and to respond dynamically to changing situations.

Here we are

Self-organization and Artificial Life are areas of Complexity Science that can provide inspiration as well as mechanism. Although some of the original work in these fields may have been more Art than Science -- making grander claims than could be supported in the, as they say, dominant paradigm -- years of more cautious work have produced concrete results. On the other hand there is something to be said for throwing caution to the winds...

Because they have no requirement to make useful artifacts or produce scientifically supported results, artists might be in an ideal position to create these machines. This would also encourage détente in the science-wars, bringing the Humanities and Sciences closer to productive collaboration. But Art has now become identified with Spectacle rather than research, so I propose a new title: Bricoleur.

So far, work in the arts has been done in a sporadic fashion due to confusion about both purposes and methods when using advanced technology and especially computers. Generative Art -- art which emerges from computer programs -- has been conflated with Artificial Life -- programs that have their own behaviors. The following paper skates between the two but seems to come down on the "make pretty things" side.
McCormack, J., & Dorin, A. (2001, January).
Art, emergence, and the computational sublime.

In Proceedings of Second Iteration:
A Conference on Generative Systems in the Electronic Arts.
Melbourne: CEMA (pp. 67-81).

In a design sense, it is possible to make creative systems that exhibit emergent properties beyond the designer's conscious intentions, hence creating an artefact, process, or system that is "more" than was conceived by the designer. This is not unique to computer-based design, but it offers an important glimpse into the possible usefulness of such design techniques -- "letting go of control" as an alternative to the functionalist, user-centred modes of design. Nature can be seen as a complex system that can be loosely transferred to the process of design, with the hope that human poiesis may somehow obtain the elements of physis so revered in the design world. Mimicry of natural processes with a view to emulation, while possibly sufficient for novel design, does not alone necessarily translate as effective methodology for art however.


Whereas this next paper gets us moving in the right direction. It was prompted by an exhibition: Emergence -- Art and Artificial Life (Beall Center for Art and Technology, UCI, December 2009). The author and a handful of other artists have been experimenting with complex systems for some time -- see the end of my timeline for pointers to various work that I've been able to ferret out of the 'net.
Penny, Simon (2009).
Art and Artificial Life a Primer
.

4.1 An Aesthetics of Behavior
With the access to computing, some artists recognized that here was a technology which permitted the modeling of behavior. Behavior - action in and with respect to the world - was a quality which was now amenable to design and aesthetic decision-making. Artificial Life presented the titillating possibility of computer based behavior which went beyond simple tit-for-tat interaction, beyond hyper-links and look-up tables of pre-programmed responses to possible inputs, even beyond AI based inference -- to quasi-biological conceptions of machines, or groups of machines that adapted to each other and to changes in their environment in potentially unexpected, emergent and ‘creative’ ways.

We have a long way to go...

And it's not going to be easy:
Is Slime Mold Smarter Than a Roomba?
IEEE Spectrum (December 2012)

Tuesday, January 1, 2013

The Perfect Storm

(This is part 2 of 3 of my essay A Spectacular Simulacra. If you haven't been following along, see the abstract and index here.)

So why did our beloved Science and Technology in the Arts seem to die on the vine in the 1970s? (Please note that this section is USA-centric and more polemical than incontestable.)

Concept

Conceptual Art -- "The dematerialization of the art object" (Lippard) -- subsumed Systems Art and abandoned the object altogether. The focus shifted to social and political critique, helped along by Feminism and Performance. Although, as Shanken points out, the antipathy between Conceptual Art and Technology is illusory, the Art/Tech world lost its steam. The last little dying breaths of collaboration appeared in the Tele-communication movement, where artists working with NASA and others attempted to use newly open satellite communications technologies to connect to and collaborate with each other world-wide.

[edit, added 1/27/13]
Hans Haacke's work is emblematic of, if not pivotal to, this shift into conceptual art practice. Around 1970 he made a rapid change of medium from physical to social systems, which he claims was a natural progression. He also denies that he is a Conceptual Artist -- which may be the ultimate in Conceptualism.  (artist interview in: Grasskamp, W., et al. (2004). Hans Haacke. Phaidon.)

Cybernetics and Artificial Intelligence were competing endeavors that had common roots (I have overly conflated them). But their strongest link was completely severed by the Minsky/Papert take-down of neural networks. Rather than taking a system-wide view, AI tended to work reductively from the top down with logical and symbolic representations. These however didn't capture the essence of Intelligence, and irrational exuberance was trumped by reality:
Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved.
Minsky (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall (p. 2)
But by the early 1980s rule based Expert Systems -- which seem to be inherently fragile -- were the main success story.

For an interesting look at where Cybernetics and Systems thinking went (into the social sciences) in 1973, have a look at this conversation between Stewart Brand, Gregory Bateson, and Margaret Mead: For God’s Sake, Margaret.

At the same time the Hippy-Back-to-Nature thing was in full swing. Partially as a reaction to the Military Industrial Complex's complicity in the Vietnam War, Technology became Evil. I find this simplistic even though it is name-dropped in many places. While a certain cohort moved into the hills and became potters, electronic musicians and video artists were well aware of the provenance of their toys, and all the while thought of their work as a perversion thereof.

Finance

Maybe we can blame it all on the Nixon Administration? There was a recession in the USA in the early 70's and the money dried up.

As Hans Haacke has shown, the corporate funding model for art-extravaganzas shifted from research oriented -- 9 Evenings -- to blockbusters -- The Treasures of King Tut -- giving the corporations more widely appreciated social capital bang for their buck. Even today, a reviewer can just flat out say, "Most of the public doesn't like modernism" (Acocella, Bride Wars, New Yorker Dec 24, 2012). But our corporate marketing masters figured this out in 1970.

In a similar vein, the 1969 Mansfield Amendment "prohibited military funding of research that lacked a direct or apparent relationship to specific military function" (wikipedia). This cut off a significant source of support for the more open-ended and unproductive components of Artificial Intelligence, and pushed research into what seemed to be more immediately rewarding areas.

Commerce

Electronic audio and video tools became commercially available and (mostly) affordable. These tools were largely targeted at traditional uses, e.g., keyboard synths and cinematic effect generators. For sale to the Lowest Common Denominator, they were easy to use for "normative" purposes and difficult for anything else (unless you could hack them). Personal computers became available in the late 70's and followed the same pattern, providing mass appeal applications and games while being reasonably recalcitrant for anything else. What followed was pop music, video games, and CGI movies.

The commodity Art Market did battle with Conceptualism and won. Conceptual Artists thought that if there were no objects to sell, no selling could take place (it's not entirely clear how they were to make an actual living in this system). But the Market quickly figured out how to sell documentation.

Academy

With the collapse of independent funding, artists retreated to compartmentalized teaching jobs in academe. There, in the 1980s, Postmodern Critical Theory swept the flotsam aside in a flood of seemingly erudite incomprehensibility:
Voegelin, S. (2010). Listening to noise and silence: Towards a philosophy of sound art. Continuum.

In this sense postmodernism is to modernism the noise of heterogeneity, working outside and across disciplines, squandering its systematic valuation in decadent centrifugality. The postmodern is a radicalization of the modernist understanding of the artwork.
And that's the (cherry picked) Reformed Standard Version talking...It does mean something, but could surely have been expressed more clearly.

It is interesting that, just prior to Le Deluge, the Conceptual theorists embraced the Analytic and dismissed Continental Philosophy (see Kosuth (1969), Art After Philosophy), but they often share similar ideas about de-centralized, contingent knowledge -- and occasionally their discursive style. The PoMo Revenge of the Literature Professors led to the Science Wars which alienated the sciences from the humanities. As a balance -- although the authors willfully ignore the good bits -- see:
Sokal, A., & Bricmont, J. (1999). Fashionable nonsense: Postmodern intellectuals' abuse of science. Picador.

The Result

What we got was MTV, the Roomba vacuum cleaner, and Call of Duty: Black Ops (which BTW has the same number of wiki footnote references as the entire History of Artificial Intelligence).

I know. I know. What about Photoshop, Final Cut, Protools, MaxMSP, yadayada? They all (with the possible exception of MaxMSP) enable harder-faster-deeper production in existing media rather than creating new aesthetic models.

Instead of a new sensibility, e.g., cybernetically based artificial life, we were sucked into a Spectacular Simulacrum.


The Illusion of Control

The real problem is C3: Communications, Command, and Control...

Roy Ascott's Cybernetic Art Matrix

Ascott, R. (1966). Behaviourist Art and the Cybernetic Vision. Cybernetica, Journal of the International Association for Cybernetics (Namur), 9.

Fundamentally Cybernetics concerns the idea of the perfectibility of systems; it is concerned in practice with the procurement of effective action by means of self-organising systems. It recognises the idea of the perfectibility of Man, of the possibility of further evolution in the biological and social sphere. In this it shares its optimism with Molecular Biology. Bio-cybernetics, the simulation of living processes, genetic manipulation, the behavioural sciences, automatic environments, together constitute an understanding of the human being which calls for and will in time produce new human values and a new morality.

Salvador Allende's Project Cybersyn

Allende commissioned the British cybernetician Stafford Beer to build a computer system that could be used to manage Chile's economy. The system, known as Project Cybersyn, was never completely implemented. It was, however, used to monitor and divert scab drivers (ironic italics my own) during a trucking strike, but that was more a matter of communication than homeostatic control.

This is the Modernist narrative in a nutshell

From the Industrial Revolution onward we expected not only to understand, but to control, all of nature. The meta-narratives of Truth, Progress, and Sovereignty were (a tiny bit) over-optimistic. Post-Modernism questioned these stories without, IMHO, effectively addressing its own narratives, and without admitting that there are (un-capitalized) truths that we might know.

Once you peel back the rhetoric I think this is the mistake at the heart of the Science Wars. It was a critique of Technology, but Science got tarred with the same Modernist brush. Most (many, at least a few) scientists do not believe that they know, or even can know, it all (engineers, on the other hand...). If we think of our experience as a Hidden Markov Model (ya, ya, I hate to keep referencing Wikipedia, but this is a pretty good article), we may be sovereign over the observations, but they give us only a glimpse of the underlying mechanism. [edit, added 1/27/13] To me this is startlingly similar to Post-Modern epistemology and should give us a place to begin repairing the rift.
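A toy sketch of the analogy (the two-state model, its probabilities, and the weather labels are my own invented example, nothing canonical): we can collect the observations all day long, but the state sequence that produced them stays hidden.

```python
import random

# A made-up two-state weather HMM: the mechanism (hidden states) drives
# what we see (observations), but we only ever receive the observations.
TRANS = {"calm":   {"calm": 0.9, "stormy": 0.1},
         "stormy": {"calm": 0.3, "stormy": 0.7}}
EMIT  = {"calm":   {"sun": 0.8, "rain": 0.2},
         "stormy": {"sun": 0.2, "rain": 0.8}}

def draw(dist, rng):
    """Sample a key from a {key: probability} dict."""
    r, acc = rng.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k  # guard against floating-point round-off

def run(n, seed=1):
    rng = random.Random(seed)
    state, hidden, observed = "calm", [], []
    for _ in range(n):
        hidden.append(state)
        observed.append(draw(EMIT[state], rng))  # what we are "sovereign" over
        state = draw(TRANS[state], rng)          # the mechanism we only glimpse
    return hidden, observed

hidden, observed = run(10)
# "rain" is evidence for "stormy" but never proof: both states can emit it,
# so the observations underdetermine the underlying mechanism.
```

Seeing "rain" shifts the odds toward "stormy" without ever settling them, which is about as close to contingent knowledge as a mathematical model gets.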

[edit, added 1/27/13]
The conflation of Science and Engineering has deeply affected the discourse between Art and Science. It's one thing for artists to work with technology; they have always been early adopters. But working with Scientists is -- or should be -- different. Too many times what is billed as Art/Science Collaboration is either (a) artists getting access to cool sciency toys, or (b) scientists getting access to cool arty presentations. While those are both noble endeavors, they have little to do with actual collaboration between the participants.

So, if we can no longer Know and Control, what can we do?

(continue to Part 3: Into the Grey Areas)

Wednesday, December 12, 2012

Cybernetic Serendipity

(This is part 1 of 3 of my essay A Spectacular Simulacra. If you haven't been following along, see the abstract and index here.)

Art and Technology

In 1966, with immense help from Bell Labs, Billy Klüver, Robert Rauschenberg, and many others -- who went on to establish Experiments in Art and Technology (E.A.T.) -- produced 9 Evenings: Theatre & Engineering, a multi-part performance at the 69th Regiment Armory in New York City -- which, btw, was also the location of the famous 1913 Armory Show that introduced the Americas to all that scandalously decadent European Modern Art. It was the first big collision of Art and (mostly) electronic Technology. And it was utterly panned by such luminaries as Robert Smithson as well as more mainstream critics. But in 20-20 hindsight it was one of the most amazing things ever to come down the pike in establishing what we now know as Art and Technology.

Two years later, in 1968, Cybernetic Serendipity, curated by Jasia Reichardt at the Institute of Contemporary Arts (ICA) in London, was the first large-scale show of computer-related artwork. It traveled to the Corcoran Gallery in Washington, DC and, in 1969, parts of the show became the founding exhibits at the newly opened Exploratorium in San Francisco (where I had the honor, ten years later, of breaking a couple of them...). It included a broad range of visual art, computer demonstrations, and even a bit of music here and there. You can find a scanned pdf of the (partial) show catalog here: Cybernetic Serendipity.

An interesting short paper that looks back at the contents, organization, and funding models of the show, from a three-decade perspective, is here:
MacGregor, B. (2002, October). Cybernetic serendipity revisited. In Proceedings of the 4th conference on Creativity & cognition (pp. 11-13). ACM.

Around this same time the American artist and critic Jack Burnham began to hypothesize Systems Art (he actually started with Cybernetic Art but quickly expanded his horizons):
 Burnham, J. (1968). Systems Esthetics. Artforum, 7(1), 30-35.
From the article:
The systems approach goes beyond a concern with staged environments and happenings; it deals in a revolutionary fashion with the larger problem of boundary concepts. In systems perspective there are no contrived confines such as the theater proscenium or picture frame. Conceptual focus rather than material limits define the system. Thus any situation, either in or outside the context of art, may be designed and judged as a system. Inasmuch as a system may contain people, ideas, messages, atmospheric conditions, power sources, and so on, a system is, to quote the systems biologist, Ludwig von Bertalanffy, a "complex of components in interaction," comprised of material, energy, and information in various degrees of organization. In evaluating systems the artist is a perspectivist considering goals, boundaries, structure, input, output, and related activity inside and outside the system. Where the object almost always has a fixed shape and boundaries, the consistency of a system may be altered in time and space, its behavior determined both by external conditions and its mechanisms of control.
Developing the above, in 1970 he published this essay in a collection of papers titled On the Future of Art (with contributions from Arnold Toynbee, among others), published under the auspices of the Guggenheim Museum:
Burnham, J. (1970). The Aesthetics of Intelligent Systems.
And later that year he curated the Software show at the Jewish Museum in NYC, which illustrated many of these ideas using a broad range of technological and conceptual art practices. You can find catalog excerpts here: Software. Unfortunately this show was a near-complete disaster on both technical and social grounds, and has more-or-less disappeared from view. It was supposed to travel to the Smithsonian in Washington, DC, but circumstances (a fire at the Smithsonian) intervened and saved everyone the embarrassment. Aside from its disastrous run, Software presented a view of the state-of-the-art-in-technology-and-concept that may never be repeated.


For more background on the Cyber-Arts of the 1960's I recommend:
Burnham, J. (1968). Beyond modern sculpture: the effects of science and technology on the sculpture of this century. G. Braziller.
The last chapter of which you can steal here: The Future of Responsive Systems in Art
Benthall, J. (1972). Science and technology in art today. Thames and Hudson.
Which still appears to be available used, and reads as pretty much contemporary. The point I'm circuitously trying to get around to here...


Cybernetics

So what is this cybernetics stuff they were all talking about anyway?
To quote from The Wiki:
[coined by Norbert Wiener in 1948] as "the scientific study of control and communication in the animal and the machine." Cybernetics is from the Greek meaning to "steer" or "navigate." Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences. During the second half of the 20th century cybernetics evolved in ways that distinguish first-order cybernetics (about observed systems) from second-order cybernetics (about observing systems). More recently there is talk of a third-order cybernetics (doing in ways that embrace both first and second order).
Here are a couple of other good resources for getting a handle on it:

A general overview:
Paul Pangaro's "Getting Started" Guide to Cybernetics
And a really thorough but succinct description of the players and fields involved:
Ben-Ali, F. M. (2007). A History of Systemic and Cybernetic Thought From Homeostasis to the Teardrop.

Conceptual Information Theory

Then we throw Information Theory into the mix as well. Wiener, in his book Cybernetics, devotes a chapter to beating around the bush defining it, but Claude Shannon's paper from the same year nailed it down:
Shannon, C. E. (1948). A mathematical theory of communication.  The Bell System Technical Journal, Vol. 27
Artists, especially of the Conceptual variety, glommed on to these ideas and did what they usually do: jump to conclusions... Here's an excerpt from a review of:
Moles, A. (1968). Information theory and esthetic perception. Trans. JE Cohen.
Let us consider perception by an individual human being as communication from the external world to that human, says Moles, now a professor of philosophy in Strasbourg. Let us consider in detail artistic communications, since it is particularly easy to isolate them. Then esthetic perception, as a special kind of communication, should be amenable to analysis by information theory, Moles concludes, since information theory is a mathematical theory of communication.
This reasoning is an example of what philosophers call the fallacy of equivocation: what Shannon and Wiener, inventors of information theory, meant by "communication" is not what Moles has in mind...
Using the above as corroboration, My Humble Opinion is that the most egregious excesses of Conceptual Art, where Art is reduced to Information, result from this sort of mis-reading of Shannon as having something to say about Meaning. For a little more detail have a look at the links on my page: Shannon's Information Increased
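A minimal sketch of the distinction (the message strings are my own): Shannon entropy is computed from symbol statistics alone, so scrambling a sentence into nonsense leaves its "information" untouched.

```python
from collections import Counter
from math import log2

def entropy(msg: str) -> float:
    """Shannon entropy in bits per symbol, from symbol frequencies alone."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat"
scrambled = "".join(sorted(meaningful))  # same symbols, meaning destroyed

# Identical entropies: the measure never looks at Meaning.
print(entropy(meaningful), entropy(scrambled))
```

Nothing in the computation distinguishes a sentence from its anagram, which is exactly the equivocation the reviewer is pointing at: Shannon's "communication" quantifies symbol statistics, not esthetic content.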

Cybernetic Art

On the other hand Cybernetic Art itself got a little better play, especially in the 1960's work of British artist Roy Ascott as described here:
Shanken, E., Clarke, I. B., & Henderson, L. D. (2002). Cybernetics and Art: Cultural Convergence in the 1960s. From Energy to Information.
Moving away from the notion of art as constituted in autonomous objects, Ascott redefined art as a cybernetic system comprised of a network of feedback loops. He conceived of art as but one member in a family of interconnected feedback loops in the cultural sphere, and he thought of culture as itself just one set of processes in a larger network of social relations. In this way, Ascott integrated cybernetics into aesthetics to theorize the relationship between art and society in terms of the interactive flow of information and behavior through a network of interconnected processes and systems.
But in the abstract of another paper, Shanken indicates that Cybernetic Art got entangled with Conceptual Art and the Technology component was dropped like a hot potato:
Shanken, E. A. (2002). Art in the information age: Technology and conceptual art. Leonardo, 35(4), 433-438.
Art historians have generally drawn sharp distinctions between conceptual art and art-and-technology. ... By interpreting conceptual art and art-and-technology as reflections and constituents of broad cultural transformations during the information age, the author concludes that the two tendencies share important similarities, and that this common ground offers useful insights into late-20th-century art.

So. Something went very wrong.

Strangely enough, at just about the same time as Systems Art lost its Technology, Cybernetics itself met a similar fate. In the early 1970's Artificial Intelligence research retrenched and rejected its cybernetic neural-net component -- partially due to the Minsky and Papert book Perceptrons -- and turned towards Symbolic and Expert Systems work. It took another ten years for the tide to begin to turn back with Connectionism and Behavior-Based Robotics.

And in the world of Electronic Music, which is hardly mentioned in Art books, much the same dynamic was playing out. Work during the 1960's that incorporated feedback systems of various kinds -- see my Coincident Feedback entry -- was swept under by a wave of commercial synthesizers made for pop-music recapitulations -- c.f. the David Dunn quote in my Born Rationalizing Culture California entry.

To try to get a handle on all this I've been making a list of what I find to be landmarks in the progress of Art, Music, and Cybernetics since WWII:

My Timeline

The first interesting thing is that one needs to go to three completely different floors of the library to find the relevant historiography. Even though the folks were often talking to and working with each other, there's very little crossover -- books are about one or the other but never all -- so it's very hard to see the similarities. And the second interesting thing is that, once you see them all lined up:

They all crashed at the same time in the early 1970's!

Going back to Jack, he published a paper in 1979 that provides a critical analysis of some of these major events. I think the title reflects his position:
Burnham, J. (1979). Art & Technology, the Panacea that Failed. The Myths of Information, ed. Kathleen Woodward, Coda Press
And as another little bit of evidence for this I noticed a significant lacuna in the listing of major shows in:
Paul, C. (2008). Digital art. Thames & Hudson.

Yup... it all (save two minor exceptions) drops out of existence between 1970 and 1996... Now we need to bear in mind that the author is only considering Digital art, and not all Art/Tech endeavors including most Audio or Sound art – which did carry a sputtering torch into the new century -- nor even (Analog) Video...but...but...

Just WTF happened here?

Stay Tuned!

(continue to Part 2: The Perfect Storm)

Monday, November 26, 2012

Modes of Inquiry

In recent discussions about the respective roles of Art and Science in our culture I keep running up against He-Said/She-Said sorts of arguments about how each camp works. The first problem is that many people don't seem to have a clue about how anyone else actually works, so you get blanket statements like, "As a scientist, how can you claim to be creative when all you do is work with data?" Following from that, the second problem is that the putative categories are presented as being somehow Black and White rather than subtly shaded. And a third problem is that there are more than two categories... To which point I post this diagram:
Tetrahedron of Reality
As a starting point I calve Engineering from Science (the oft mis-identified Technology) -- and Art -- and then add Philosophy as a separate discipline. Each of these nodal points is a particular mode of inquiry into the working of the world with its own processes, methods, and results. However, in practice, none of them are pure. As the impurest of the impure I put a little bricoleur bouncing around inside the space as needed or -- more likely -- at random.

I probably should include a node for Society, i.e., politics, economics, and social manipulation/persuasion, but A) I don't understand them; and, B) I can't draw a 5-space collapsed into two dimensions. Which is probably too bad because without Society you can't really do anything in the other modes modulo a trust fund. But so it goes.

In the spirit of twentieth-century management-think I also posit a set of cross-cutting dichotomies:
  • Process -- Rational or Empirical (using the Cartesian meaning of both);
  • Methods -- Logical or Fanciful (there must be a better opposite, no?);
  • Results -- Theoretical or Physical (i.e., in the mind or in the world);
  • Product -- Useful or Ephemeral (a practical thing or an entertaining idea?).
So one could have a Rational Process using Fanciful Methods with a Theoretical Result whose Product is Ephemeral, which might be a novel or most of post-modern philosophy. Or an Empirical Process using Logical Methods with a Physical Result whose Product is Useful, and get an iPhone. Maybe. Or change the Result to Theoretical and end up with the Large Hadron Collider...
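For what it's worth, the four dichotomies can be enumerated mechanically (a throwaway sketch; the labels come straight from the list above):

```python
from itertools import product

DICHOTOMIES = {
    "Process": ("Rational", "Empirical"),
    "Methods": ("Logical", "Fanciful"),
    "Results": ("Theoretical", "Physical"),
    "Product": ("Useful", "Ephemeral"),
}

# Every mode of inquiry picks one side of each axis: 2^4 = 16 profiles.
profiles = list(product(*DICHOTOMIES.values()))
print(len(profiles))  # 16
print(profiles[0])    # ('Rational', 'Logical', 'Theoretical', 'Useful')
```

Sixteen boxes for the bricoleur to bounce among, most of them unclaimed by any of the four nodes.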

Being grey areas, none of the modes has a lock on any particular set of cross-cuts, although some may be more likely candidates than others. I'm having a hard time imagining a Rational, Fanciful, Theoretical, Ephemeral Engineering project ... But that might be something for our bricoleur to try, eh?

Since the probability of anyone actually reading this is approximately 1 in 10⁹ (one in one billion), roughly an order of magnitude less likely than winning the lottery, I guess it doesn't matter. But if you made it this far, as an Empirical, Fanciful, Theoretical, & Ephemeral experiment, click one of the little Reactions buttons down there so I know you were here.

Sunday, November 11, 2012

Born Rationalizing Culture California

Although it sounds like a genetic-predisposition/lifestyle-choice it is also an actual bibliographic reference:

Born, Georgina
Rationalizing Culture -- IRCAM, Boulez, and the Institutionalization of the Musical Avant Garde
1995, University of California Press

If you still don't believe me, here is a scan of the binding -- which is why I kept it on my bookshelf for so long while making only half-hearted attempts at reading it:


In the course of purging post-modernism from my collection I figured that I could at least force myself through the conclusion section. As a result I may keep it a bit longer, or else send it to Sudhu as mulch for his electronic ethno-musicology dissertation.

The first interesting thing is that IRCAM was founded in 1977 and funded by the French government. The second thing is that it was meant to be an Art/Science research facility. The third thing is that it was a rather stratified environment: Composers, who supposedly knew -- but often didn't recognize -- the stuff they wanted to do, were always superior to Tutors, who actually knew how to do the stuff. And the fourth thing is that it looks like this Institutionalization took the wind out of the sails of the Avant Garde it was supposed to bolster.

That last point may be reaching -- in order to support my own thesis that everything went to hell in the '70's -- but there it is.

To add more support I quote from David Dunn's A History of Electronic Music Pioneers, which was published in the catalog for the 1992 Eigenwelt der Apparate-Welt show of electronic artists. He valorizes the '60's composers whom we all know and love for being in a sweet spot of technical and aesthetic development, then asserts that it was all co-opted:

What began in this century as a utopian and vaguely Romantic passion, namely that technology offered an opportunity to expand human perception and provide new avenues for the discovery of reality, subsequently evolved through the 1960's into an intoxication with this humanistic agenda as a social critique and counter-cultural movement. The irony is that many of the artist's who were most concerned with technology as a counter-cultural social critique built tools that ultimately became the resources for an industrial movement that in large part eradicated their ideological concerns. Most of these artists and their work have fallen into the anonymous cracks of a consumer culture that now regards their experimentation merely as inherited technical R & D. While the mass distribution of the electronic means of musical production appears to be an egalitarian success, as a worst case scenario it may also signify the suffocation of the modernist dream at the hands of industrial profiteering.

Wednesday, September 12, 2012

Turbulent Hans Haacke

Last Saturday I went to the New Museum's Ghosts in the Machine show in New York and found that Hans Haacke has already done it all. In 1965. On show were two of his pieces using wind blowers to move objects around in much more mesmerizing ways than my packing pellets.

http://mlkshk.com/r/4WH6

The first, Blue Sail, is a big, well, not to put too fine a point on it, blue sail of light fabric tethered and weighted at the corners. A household-variety oscillating fan underneath makes it billow and flow in apparently random ways. The second, Kugel in schrägem Luftstrahl, is a small helium weather balloon bobbing around in the mid-air Bernoulli effect of a hairdryer blower. When the guard wasn't looking I waved my hand over the blower outlet and got it to bobble even more. Way nicer, and quieter, than the volley-ball-in-traffic-cone version we had at the Explo -- which, in all this putzing around, I hadn't thought of either.

Fortunately no one knows about Haacke and his hot air, aside from the hundreds of tourists that go to NYC Museums, so I may be safe.




http://www.artnet.com/Magazine/features/cone/Images/cone8-6-11.jpg
Right next to Haacke's balloon was a piece by Günther Uecker called New York Dancer IV -- also from bloody 1965. A human-sized shroud of canvas pierced through all over with various sizes of iron nails. It just hung around until 4pm, when I was lucky enough to stumble into the room in time for its daily demo. A stunningly gorgeous young woman came darting into the space, uncovered a red switch-box in the corner, meticulously donned cotton curatorial gloves, and with a fairly bored expression pushed the button to make the thing slowly spin. As it got up to speed the fabric billowed out and the nails flailed around in pleasant wavy ways. Then some bit would get bound up in the mechanism, causing the whole thing to slam crashing around, at which point our Muse let off the gas for a bit to slow it down. And... Repeat.

<sorry...no photo>

Grasping for a conversational gambit afterwards, I asked her if it had been the artist's intention to have those transitions from simple waving to complex crashing. She said he had actually demonstrated it being operated as such, although she described it as, "Really stomping on the switch..." So it's not entirely clear if complex behavioral transitions were a consciously desired result or just serendipity.

The rest of the show was a mixed bag of mechanical objects and mechanical drawings, all owing their raison d'être to Duchamp's Bachelors -- if one is to believe the curatorial introduction in the catalog. Much of it was fascinating historically and some was still engaging. However, while there was a bit of post-1970's work, it stopped short of the Software show debacle and Burnham's subsequent Panacea that Failed analysis of the whole ArtTech scene. This may, by its absence, provide another leg for my hypothesis -- that all we got out of the era was MTV -- to stand upon.

Anyway....I get the Axle truck in about an hour to start the actual installation...

Tuesday, August 21, 2012

Turbulent Software

The control system and software work about as well as they are going to. I get the truck on Sept 12 to see just how good that is:
System Block Diagram

We hope to go live before the Sept 16 After Dark party in Santa Fe's Railyard. I will also have two "studies" in the ISEA Residencies show at the UNM Architecture Gallery. The opening is Sept 22 (I think). Schedule to follow.

Sunday, June 12, 2011

Media Art was the Booby Prize

For me, most techno-art devolves into a catalog of effects that someone found they could generate with their synthesizer, video camera, or software package. It's a lack of constraint, in any number of senses. I think this is the result of the 1970's commercialization of the efforts of earlier electronic art hackers. What could have been a new aesthetic sensibility became Digital Media Arts. More. Faster. Easier. But not necessarily: Better.

Two years ago I discovered, mostly by accident, that there was an alternative. There were a number of efforts to integrate Cybernetics and the nascent field of Artificial Intelligence into the Arts. Jack Burnham's Systems Art was the most formalized of these. It all happened just before I started studying this stuff in college and, by virtue of being in the California Livin' backwater of Santa Cruz, I was never notified of its death. I dug in a bit and found a plethora of material: Schip's Systems Art Area

In a strange coincidence, the Systems approach to art faltered at almost the same time that AI took a turn to the symbolic top-down approach, which gave us Expert Systems but nothing like a Human Intelligence. It was another 15 years before AI restarted from the behavior-based bottom-up, and -- coincidence again? -- Artificial Life research took off on its short flight of fancy. A-Life also faltered due to, among other things, making too many un-supportable claims, which is really -- in another strange coincidence -- the job of Art, not Science.

For a somewhat different take on where we are and what we need to do I've written YAM (Yet Another Manifesto)... I just can't figure out how to implement any of it. Yet.