21.393 machines don't care

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty_at_kcl.ac.uk>)
Date: Wed, 5 Dec 2007 08:38:07 +0000

               Humanist Discussion Group, Vol. 21, No. 393.
       Centre for Computing in the Humanities, King's College London
                     Submit to: humanist_at_princeton.edu

   [1] From: Willard McCarty <willard.mccarty_at_kcl.ac.uk> (169)
         Subject: Fwd: 21.390 machines don't care

   [2] From: abrook <abrook_at_ccs.carleton.ca> (22)
         Subject: Re: 21.390 machines don't care

   [3] From: Geoffrey Rockwell <georock_at_mcmaster.ca> (34)
         Subject: Re: 21.387 machines don't care

         Date: Tue, 04 Dec 2007 08:54:24 +0000
         From: Willard McCarty <willard.mccarty_at_kcl.ac.uk>
         Subject: Fwd: 21.390 machines don't care

In the best spirit of Humanist, Eric's refining of my musings on
"machines don't care" takes the discussion forward by pointing out
the obvious, that we can as a matter of perceived necessity or
otherwise steel ourselves against feeling. We can become in effect
less complex beings, even verge on a robotic state of purely
bio-mechanical action, as soldiers do on parade. When, say, we act as
a surgeon, or a policeman, or a bank teller, or indeed as a
university teacher (who officially should not care about a student's
personal difficulties, at least up to a point), what distinguishes us
from the machine that can now, or perhaps someday will, replace us?
Isn't it the point at which we step out of the role Eric speaks of,
the point at which the policeman steps outside of his or her official
set of constraints and helps the person in trouble directly?

When Western Europeans (I assume sometime in the 17C) first
encountered the great apes, a fair amount of anxiety was generated
because of the obvious similarities, raising the obvious question of
what the differences were. (See Swift's Gulliver's Travels.) Now
primatologists study how much we are like the apes, and some people
argue that we'd be much better off if we were more like them. Still
it's the difference that is of interest. When humans do something
bestial, what makes it so? What makes the bestial action so much
worse than the same action performed by beasts?

I come back to my revised Turing Test: can the unknown actor step
outside the robotic role and display empathy? This has to be a rather
subtle test, as we know from Joseph Weizenbaum's Eliza. How would we know?


         Date: Wed, 05 Dec 2007 07:43:15 +0000
         From: abrook <abrook_at_ccs.carleton.ca>
         Subject: Re: 21.390 machines don't care

Dear Eric and Willard,

There are levels and levels of not caring. The problem with machines
is that the machine itself contains no restrictions of any kind on
how it is used. Exactly the same technology is used to administer
life-saving transfusions and (in the US) lethal injections. For the
machine, it is all the same. This, if I remember correctly, was a
main reason why Heidegger was so hostile to the increasing
enslavement of industrial countries by their technologies -- computer
technologies and the internet, for example. It is the substitution of
purposelessness for purpose, emotionlessness for emotion, utter lack
of sociality for social connectedness, and so on.


Andrew Brook
Chancellor's Professor of Philosophy
Director, Institute of Cognitive Science
Member, Canadian Psychoanalytic Society
2217 Dunton Tower, Carleton University
Ottawa ON, Canada   K1S 5B6
Ph:  613 520-3597
Fax: 613 520-3985
Web: www.carleton.ca/~abrook


         Date: Wed, 05 Dec 2007 07:43:46 +0000
         From: Geoffrey Rockwell <georock_at_mcmaster.ca>
         Subject: Re: 21.387 machines don't care

Dear Willard,

Has someone pointed you to Richard Powers's excellent novel about AI,
"Galatea 2.2"? The narrator, Powers as his unreliable self, and others
accept a challenge to develop an AI that can pass an English MA
comprehensive. Along the way Powers looks at the Turing Test and its
ethical dimension - would we deny humanity to someone whose
disabilities made them unable to pass a Turing test? I won't spoil
the end, but the caring and love of an AI is the twist that animates
the ending. If you could create such an AI so it cared, what would it
think about itself and its inability to physically express its love?
What makes the novel particularly interesting is its realism - Powers
studied at the University of Illinois, was a programmer, and returned
to U of Illinois as a writer-in-residence - an experience that he
plays with for Galatea 2.2 to the point that you can almost believe
it happened. That this account reads so authentically about academic
life and research makes it all the more compelling as an essay on
the ethics of AI. It is about what some of us may be doing already.
I should also mention a short story by Powers called "Literary
Devices," published in Zoetrope: All-Story (Winter 2002, Vol. 6,
No. 4). The blurb
on the Zoetrope site (where you can order it) reads:
"Richard Powers speculates about the narrative complications of the
digital age. By invitation of an anonymous e-mail, his narrator tests
a computer program, DIALOGOS, which weaves its stories from bits of
data around the web, without any central authorship. He sends out
e-mails into the DIALOGOS void -- to Emma Thompson, to an estranged
friend, to Goethe, to Emily Dickinson. Each missive is returned with
implausible accuracy. In time, the narrator gets lost in DIALOGOS,
and wonders if he has lost touch with reality or if the reality of
narrative has in fact been changed."
I haven't been able to find DIALOGOS, but not for lack of trying.
Perhaps I have lost touch with reality.
Geoffrey Rockwell
Received on Wed Dec 05 2007 - 03:54:08 EST

This archive was generated by hypermail 2.2.0 : Wed Dec 05 2007 - 03:54:08 EST