
Humanist Discussion Group



Humanist Archives: May 23, 2019, 6:52 a.m. Humanist 33.36 - imitative vs non-imitative AI?

                  Humanist Discussion Group, Vol. 33, No. 36.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2019-05-23 05:43:30+00:00
        From: Willard McCarty 
        Subject: imitative vs non-imitative AI?

It strikes me that nearly all of what is written about artificial
intelligence is based on the assumption that the goal is to get beyond
Masahiro Mori's "uncanny valley" -- the phase in which the scary bits
force us to confront the differences, and so ourselves -- to machines
not just "like me" but indistinguishable. There have been strong
arguments for a fundamental, ineradicable difference for a long time, e.g.
cognitive psychologist Ulric Neisser's, in "The Imitation of Man by
Machine", Science NS 139.3551 (1963): 193-7. In a sense, the idea of an
artificially intelligent creature has been colonised in a remarkably
similar fashion to the colonisation of non-Western peoples: like
Westerners but not quite with it, in need of a clearer, 'scientific'
vision of things -- in the case of an AI, in need of better computer
science.

What we need, I'd like to suggest, is something like an anthropology of
the artificially intelligent, respectful of that persistent difference.

Neisser again (in the idiom of his time): "The deep difference between
the thinking of men and machines has been intuitively recognized by
those who fear that machines may somehow come to regulate our society...
But computer intelligence is indeed 'inhuman': it does not grow, has no
emotional basis, and is shallowly motivated... The very concept of
'artificial intelligence' suggests the rationalist's ancient assumption
that man's intelligence is a faculty independent of the rest of human
life. Happily it is not." (p. 197)

There is very good, exciting work in the cognitive sciences following in
the path suggested by James Gibson's "The Theory of Affordances" (1979)
and developed e.g. by Edwin Hutchins in Cognition in the Wild (1995)
that explores a much bigger idea of mind than the computational one
responsible for the latest version of the assumption Neisser fingers.
But who in the last decade or so has pointed this out?

Suggestions of where to go with this would be most welcome, especially
if backtracking and rethinking are recommended.

Yours,
WM
--
Willard McCarty (www.mccarty.org.uk/),
Professor emeritus, Department of Digital Humanities, King's College
London; Editor, Interdisciplinary Science Reviews
(www.tandfonline.com/loi/yisr20) and Humanist (www.dhhumanist.org)



