Re: [-empyre-] next (robotic) steps
jumping in late with just a few notes/observations from a world away:
In my theatrical practice the mimetic versus 'dumb' debate is central,
simply because in asking an actor to play a role, a character - to act 'as
if' human - recourse to mimesis serves as an obstacle to producing a
convincing, compelling performance. 'Dumb' here relates to task-based
actions: enter the stage, sit on a chair, stand on the chair, say the
lines. Neither is on its own satisfactory.
In the first case, the actor resorts to images of a type, or the type of
behaviours - internal and external - we call human. These images are
provided to a large degree by popular psychology - or at least the
psychology popular within the profession. 'What is my motivation?'
Cause-and-effect mechanisms or considerations of 'super-objective'
desire-economy reflections ensue. It is very difficult to rid an actor of
an image of, or a habit of, thinking about thinking. It is especially
difficult when the assumption underlying that image or habit is that
action is contingent upon thought.
Action-based 'dumb' presentation of character, role, mask and finally actor
(which exist in a sort of circular relation and cycle through in rehearsal -
meaning repetition) - as opposed to 'representation' of the culturally
informed idea or image - hits not only against the wall of the actor's
initial resistance to objectification in abrogating accepted 'human' thought
processes, but already raises interesting questions about possible in- or
anti-human becomings: e.g. ask someone to be a tree and they'll do it; but
ask someone to be a man or a woman being a tree and they'll ask why. (Even if
they don't, an internal dialogue will have been set up involving the
non-existent tree as a thing, the non-existent person as an image, and the
actor or actress: generally it is the prejudgement as to what constitutes the
human that wins out - the cultural image.)
The more tasks an actor is asked to do, the more complex the performance.
But the leap to 'life' or 'the thought' or consciousness is unmistakeable
and cannot be substituted. Time and repetition have a place in breaking down
habitual patterns of thought - along with more extreme measures - but in
order to build and present complex human behaviour on the stage, it
seems to me, the best available route is the in- or anti-human. The
anti-psychological, anyway, has as long a history here as does the
psychological.
In looking at - to relativise - artificial life, is there not already in the
onlooker a change, however slight, an internal shift of perspective, not at
all anthropomorphic - except perhaps vis-a-vis the observer - whereby that
'thing', in its in- or anti-humanity, projects itself upon the
consciousness as a possible humanity or a human possibility?
----- Original Message -----
To: "soft_skinned_space" <email@example.com>
Sent: Sunday, December 19, 2004 5:14 PM
Subject: RE: [-empyre-] next (robotic) steps
> Complexity is whole and part, the difficulty is whose "whole" is it?
> Personally I see the notion of complexity as an engagement of ongoing
> interrelations.
> Mapping the brain is only another action of these interrelations.
> How important is it to do that? To locate or pin where these
> occur, to further the "knowledge" format in which it, the brain, exists?
> Moving from "world views" to interrelations, one can allow a robot to do
> whatever it is required to overcome the limitations of humans. In this
> we find what robots do that humans can't, but base them not on what we know
> or have retained in "knowledge" as human limits, but allow the notion of
> to range within interrelations that are pleasant for our interrelations or
> The concern that robots may overstep the bounds of assistance, or
> say it is for art, doesn't exist if you insist on interrelations or
> Philip Agre did some work on this, and in his early work stood against world
> views, as directorial, and for improvisation, and he later used the notion of
> coordinates in dealing with robotics. If we all exist in these coordinates,
> robots work within the parameters of each, rather than with conflicting "world
> views", and their construction would eventually reflect this.
> Importantly, also, if the robots work of each, then it also recognizes that
> interrelations are active, not existent in some one possessed world view.
> As regards mimesis, it is of course a "copy" of something. Technology offers
> a potentially useful and synchronized action, and if we use that notion
> of actions, rather than as a copy, then we are moving toward clarity in its
> Quoting Jim Andrews <firstname.lastname@example.org>:
> > > -----Original Message-----
> > > From: email@example.com
> > > [mailto:firstname.lastname@example.org]On Behalf Of Nicholas
> > > Stedman
> > > Sent: Friday, December 17, 2004 7:31 PM
> > > To: soft_skinned_space
> > > Subject: Re: [-empyre-] next (robotic) steps
> > >
> > >
> > > >what is the next most essential 'human' sense for a robot,
> > > >in your opinion - artificial life - artificial intelligence
> > > >- ethical robots - what would *your* priorities be?
> > >
> > >
> > > Hello,
> > > This is my first post to empyre.
> > > In the last few years the idea has become quite popular that
> > > intelligence is a complex system that arises from the interaction of
> > > small, discrete particles performing simple tasks in relation to one
> > > another. Very recently I've been hearing about 'dumb' machines. This
> > > seems to remove the overarching goal of emulating complex behaviour and
> > > refocuses the energy on just exploring simple, 'dumb' tasks.
> > > 'Intelligence, ethics and human priorities', it is said, are far too
> > > complex to represent through current technology, let alone understand
> > > for ourselves. The New York Times Magazine has a small blurb on it this
> > > week (you can read it online but have to subscribe). If anyone knows
> > > about this topic it would be good to hear more.
> > >
> > > My questions to you then are what do you think of approaching machines
> > > as 'dumb', as unlike us? Why use mimesis as a guiding principle in
> > > robotic design instead of other principles and references, or are they
> > > by definition mimetic? I ask because I'm currently confronting this in
> > > my own work. Some of my machines to date have been explorations of the
> > > boundary between things that are like us and things that aren't, by
> > > rolling them into the same object. Lately, I've been more curious about
> > > machines that are complementary instead of similar to us, but these are
> > > not necessarily mutually exclusive ideas.
> > >
> > > Best,
> > > Nicholas Stedman
> > > http://nickstedman.banff.org
> > I wonder how this relates to the earlier observation about the 'animal'? As
> > a species, there was certainly a time when our language capabilities did
> > not
> > include speech, or very much at all of conscious symbolic understanding,
> > and
> > that is recapitulated, to some extent, as we grow from babes.
> > How important is the notion of a 'world view' to the 'animal' or the
> > 'dumb'?
> > Not that they need be the same thing, of course.
> > "...a world view...is essentially a model of the world as the person
> > perceives it. The model contains information about all objects which are
> > known to exist, the attributes of those objects and the relationships
> > between them....To endow machines with similarly huge and intricate models
> > of the world appears very difficult, and is certainly well beyond our
> > current capabilities (1988). One reason for this, which may be short-lived,
> > is that the capacity of the largest computer memories is far less than that
> > of the human brain. A more significant reason is that no generally convenient
> > method of representing knowledge in a computer has yet been discovered. The
> > major problem is not in storing knowledge but in recognizing when particular
> > items are relevant, and in retrieving them as required. A simple list of
> > facts is inadequate: the relationships between facts are of crucial
> > importance. Attempts to represent such relationships by complex data
> > structures always seem to founder on the same two difficulties. The first
> > is to know what relationships are relevant. The second is the time taken to
> > locate and retrieve all information relevant to the problem at hand. This
> > time rapidly becomes infeasible as the body of knowledge increases."
> > from 'Computer Science--A Modern Introduction' by Goldschlager and
> > Lister
> > ja
> > http://vispo.com
> > _______________________________________________
> > empyre forum
> > email@example.com
> > http://www.subtle.net/empyre
> Susan Aaron, MA
> 80 Montclair Ave, #303
> Toronto, ON M5P 1P8