Re: [-empyre-] next (robotic) steps
 
> what is the next most essential 'human' sense for a robot,
> in your opinion - artificial life - artificial intelligence
> - ethical robots - what would *your* priorities be?
Hello,
This is my first post to empyre.
In the last few years the idea has become quite popular that 
intelligence is a complex system that arises from the interaction of 
small, discrete particles performing simple tasks in relation to one 
another. Very recently I've been hearing about 'dumb' machines. This 
approach seems to drop the overarching goal of emulating complex 
behaviour and refocus the energy on just exploring simple, 'dumb' 
tasks. 'Intelligence, ethics and human priorities,' it is said, are 
far too complex to represent through current technology, let alone to 
understand for ourselves. The New York Times Magazine has a small 
blurb on it this week (you can read it online, but you have to 
subscribe). If anyone knows more about this topic, it would be good 
to hear from you.
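(As a toy illustration of that first idea - complex behaviour arising from many simple, 'dumb' parts following local rules - Conway's Game of Life is the classic example; the sketch below is my own, not something from the article.)

```python
# Conway's Game of Life: each cell obeys one dumb local rule, yet
# structured, "purposeful"-looking patterns such as gliders emerge.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) cell coords."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next turn if it has exactly 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "glider": five cells that collectively crawl across the grid,
# even though no individual cell knows anything about movement.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
world = glider
for _ in range(4):  # after 4 generations a glider shifts by (1, 1)
    world = step(world)
print(world == {(x + 1, y + 1) for (x, y) in glider})  # True
```

No cell "intends" to travel, yet the pattern as a whole does - which is roughly the sense in which emergence is offered as an account of intelligence.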
My questions to you, then: what do you think of approaching machines 
as 'dumb', as unlike us? Why use mimesis as a guiding principle in 
robotic design instead of other principles and references - or are 
robots by definition mimetic? I ask because I'm currently confronting 
this in my own work. Some of my machines to date have explored the 
boundary between things that are like us and things that aren't by 
rolling both into the same object. Lately, I've been more curious about 
machines that are complementary to us rather than similar, though these 
are not necessarily mutually exclusive ideas.
Best,
Nicholas Stedman
http://nickstedman.banff.org
     This archive was generated by a fusion of 
     Pipermail 0.09 (Mailman edition) and 
     MHonArc 2.6.8.