
Humanist Discussion Group


Humanist Archives: June 13, 2019, 6:29 a.m. Humanist 33.82 - imitative vs non-imitative AI

                  Humanist Discussion Group, Vol. 33, No. 82.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                Submit to: humanist@dhhumanist.org

        Date: 2019-06-12 19:32:37+00:00
        From: Bill Benzon 
        Subject: Re: [Humanist] 33.36: imitative vs non-imitative AI?

Willard, comments below.

>        Date: 2019-05-23 05:43:30+00:00
>        From: Willard McCarty 
>        Subject: imitative vs non-imitative AI?
> It strikes me that nearly all of what is written about artificial
> intelligence is based on the assumption that the goal is to get beyond
> Masahiro Mori's "uncanny valley" -- the phase in which the scary bits
> force us to confront the differences, and so ourselves -- to machines
> not just "like me" but indistinguishable. There have been strong
> arguments of fundamental, ineradicable difference for a long time, e.g.
> cognitive psychologist Ulric Neisser's, in "The Imitation of Man by
> Machine", Science NS 139.3551 (1963): 193-7. In a sense, the idea of an
> artificially intelligent creature has been colonised in a remarkably
> similar fashion to the colonisation of non-Western peoples: like
> Westerners but not quite with it, in need of a clearer, 'scientific'
> vision of things -- in the case of an AI, in need of better computer
> science.
> What we need, I'd like to suggest, is something like an anthropology of
> the artificially intelligent, respectful of that persistent difference.


> Suggestions of where to go with this would be most welcome, esp if
> backtracking and rethinking are recommended.

I'm not at all sure what you have in mind, but I suggest you look at Michael
Jordan, "Artificial Intelligence - The Revolution Hasn't Happened Yet":
happened-yet-5e1d5812e1e7.

He argues, for example:

> Since the 1960s much progress has been made, but it has arguably not
> come about from the pursuit of human-imitative AI. Rather, as in the
> case of the Apollo spaceships, these ideas have often been hidden
> behind the scenes, and have been the handiwork of researchers focused
> on specific engineering challenges. Although not visible to the
> general public, research and systems-building in areas such as
> document retrieval, text classification, fraud detection,
> recommendation systems, personalized search, social network analysis,
> planning, diagnostics and A/B testing have been a major success —
> these are the advances that have powered companies such as Google,
> Netflix, Facebook and Amazon.
> One could simply agree to refer to all of this as 'AI', and
> indeed that is what appears to have happened. Such labeling may come
> as a surprise to optimization or statistics researchers, who wake up
> to find themselves suddenly referred to as 'AI researchers'. But
> labeling of researchers aside, the bigger problem is that the use of
> this single, ill-defined acronym prevents a clear understanding of the
> range of intellectual and commercial issues at play.

Much of that work has roots earlier than classical AI, in statistics, operations
research, cybernetics, and control theory. So:

> It was John McCarthy (while a professor at Dartmouth, and soon to
> take a position at MIT) who coined the term 'AI', apparently to
> distinguish his budding research agenda from that of Norbert Wiener
> (then an older professor at MIT). Wiener had coined 'cybernetics' to
> refer to his own vision of intelligent systems - a vision that was
> closely tied to operations research, statistics, pattern recognition,
> information theory and control theory. McCarthy, on the other hand,
> emphasized the ties to logic. In an interesting reversal, it is
> Wiener's intellectual agenda that has come to dominate in the current
> era, under the banner of McCarthy's terminology. (This state of
> affairs is surely, however, only temporary; the pendulum swings more
> in AI than in most fields.)
> But we need to move beyond the particular historical perspectives of
> McCarthy and Wiener.

Beyond human-imitative AI, Jordan argues for:

> The past two decades have seen major progress - in industry and
> academia - in a complementary aspiration to human-imitative AI that
> is often referred to as 'Intelligence Augmentation' (IA). Here
> computation and data are used to create services that augment human
> intelligence and creativity. A search engine can be viewed as an
> example of IA (it augments human memory and factual knowledge), as
> can natural language translation (it augments the ability of a human
> to communicate). Computing-based generation of sounds and images
> serves as a palette and creativity enhancer for artists.

And then for:

> Hoping that the reader will tolerate one last acronym, let us
> conceive broadly of a discipline of 'Intelligent Infrastructure'
> (II), whereby a web of computation, data and physical entities exists
> that makes human environments more supportive, interesting and safe.
> Such infrastructure is beginning to make its appearance in domains
> such as transportation, medicine, commerce and finance, with vast
> implications for individual humans and societies.

Jordan argues that IA and II both raise issues that do not arise in human-imitative AI. Thus:

> A related argument is that human intelligence is the only kind of
> intelligence that we know, and that we should aim to mimic it as a
> first step. But humans are in fact not very good at some kinds of
> reasoning -- we have our lapses, biases and limitations. Moreover,
> critically, we did not evolve to perform the kinds of large-scale
> decision-making that modern II systems must face, nor to cope with
> the kinds of uncertainty that arise in II contexts.

He concludes:

> Moreover, we should embrace the fact that what we are witnessing is
> the creation of a new branch of engineering. The term 'engineering'
> is often invoked in a narrow sense - in academia and beyond - with
> overtones of cold, affectless machinery, and negative connotations of
> loss of control by humans. But an engineering discipline can be what
> we want it to be.
> In the current era, we have a real opportunity to conceive of
> something historically new - a human-centric engineering discipline.
> I will resist giving this emerging discipline a name, but if the
> acronym 'AI' continues to be used as placeholder nomenclature going
> forward, let's be aware of the very real limitations of this
> placeholder. Let's broaden our scope, tone down the hype and
> recognize the serious challenges ahead.

Bill Benzon





Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.