
Humanist Discussion Group



Humanist Archives: Aug. 11, 2020, 10:08 a.m. Humanist 34.227 - on GPT-3 and imitation

                  Humanist Discussion Group, Vol. 34, No. 227.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Bill Benzon 
           Subject: Re: [Humanist] 34.222: on GPT-3 and imitation (117)

    [2]    From: Willard McCarty 
           Subject: more on imitation (50)


--[1]------------------------------------------------------------------------
        Date: 2020-08-10 11:17:03+00:00
        From: Bill Benzon 
        Subject: Re: [Humanist] 34.222: on GPT-3 and imitation

Might I hazard a somewhat contrary response to François Lachance [Humanist
34.222]?

> On Aug 10, 2020, at 1:36 AM, Humanist  wrote:
>
> This thread on machines and imitation makes me wonder what is being imitated.
> I know the focus has been on intelligence. But is there not a gamut: reflex,
> tropism, instinct, intelligence, personhood (as in agent for speech act)? What
> value might there be in approaching the problematic from alternative
> categories: to what extent might we speak of a machine-reflex?


It's not always about imitation. Take chess, for example. Chess is a classic
domain for AI investigation and, as far as I know, the primary objective has
almost never been to play chess like a human. Oh, psychologists have
investigated how humans play chess, and that work may well have been employed
in programming chess into computers, but the objective was not to play chess
like humans; it was to beat humans at chess. Determining whether or not a
player has won a game, or at least forced a draw, is easily done, so the
question of whether or not humans are being imitated is at best a secondary
issue. In the case of the more recent AlphaGo, I believe it has been remarked
that it exhibits stylistic traits unlike any exhibited by human players, and
yet it beats the best of them.

I presume that AI investigators settled on chess in part because it is a game
known to enable displays of great intelligence (for want of a better term). So,
the reasoning seems to go, if a computer can beat a human at chess, that
computer must be intelligent. Does anyone think of these chess-playing
computers as being intelligent? Does anyone care? I'm told, by the way, that
the most exciting chess in the world these days is played by teams of humans
and computers. Do the human members of these teams regard their computers as
highly intelligent, or do they think of them as very useful, nay indispensable,
aids? [BTW, John McCarthy is said to have remarked that if geneticists had
treated drosophila the way AI has treated chess, we'd have a lot of very fast
fruit flies but scant knowledge of genetics.]

Now, Frederik Schodt has remarked, in Inside the Robot Kingdom: Japan,
Mechatronics, and the Coming Robotopia (1988), that back in the early days of
industrial robotics a Shinto ceremony would be performed to welcome a new robot
to the assembly line. Industrial robots look nothing like humans, nor do they
behave like humans. They perform narrowly defined tasks with great precision,
tirelessly, time after time. How did the human workers, and the Shinto priest,
think of their robot compatriots? One of Schodt's themes in that book is that
the Japanese conceive of robots differently from Westerners. Why? Is it, for
example, the influence of Buddhism?

It was the Japanese who introduced the Tamagotchi back in 1996. As you may
recall, these are small hand-held devices that function as virtual pets.
Wikipedia says (https://en.wikipedia.org/wiki/Tamagotchi):

> Upon activating the toy, an egg appears on the screen. After setting the
> clock on the device, the egg will wiggle for several minutes, and then hatch
> into a small pet. In later versions, inputting the player's name and birthday
> is also required when setting the clock, and at birth, the player can name
> the pet and learn of its family group and/or gender. The player can care for
> the pet as much or as little as they choose, and the outcome depends on the
> player's actions. The first Tamagotchis could only be paused by going to set
> the clock, effectively stopping the passage of time in the game, but in later
> models a pause function was included.
>
> Pets have a Hunger meter, Happy meter, Bracelet meter, and Discipline meter
> to determine how healthy and well behaved the pet is. There is also an age
> and weight check function for the current age and weight of the pet. […] The
> pet goes through several distinct stages of development throughout its life
> cycle. Each stage lasts a set amount of time, depending on the model of the
> toy, and when it reaches a new stage, the toy plays a jingle, and the pet's
> appearance changes. The pet can "die" due to poor care, old age, sickness,
> and in a few versions, predators. The pet's life cycle stages are Baby,
> Child, Teenager, and Adult. Later Tamagotchi models have added a Senior
> stage. Usually the pet's age will increase once it has awakened from its
> sleep time.

And so forth.
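Purely as an illustration -- none of this comes from the toy's actual firmware,
and the names and numbers are invented -- the care loop described above can be
sketched as a small state machine in Python: a few meters that decay over time,
life-cycle stages that advance on a timer, and 'death' when care lapses.

    from dataclasses import dataclass

    STAGES = ["Baby", "Child", "Teenager", "Adult"]  # later models add a Senior stage

    @dataclass
    class Pet:
        hunger: int = 5      # 0 = starving, 5 = full (invented scale)
        happy: int = 5
        age_ticks: int = 0
        alive: bool = True

        def tick(self) -> None:
            """One unit of game time: meters decay, age advances, neglect kills."""
            if not self.alive:
                return
            self.hunger = max(self.hunger - 1, 0)
            self.happy = max(self.happy - 1, 0)
            self.age_ticks += 1
            if self.hunger == 0 and self.happy == 0:  # poor care
                self.alive = False

        @property
        def stage(self) -> str:
            # Each stage lasts a fixed number of ticks (here, arbitrarily, ten).
            return STAGES[min(self.age_ticks // 10, len(STAGES) - 1)]

    pet = Pet()
    for _ in range(12):
        pet.tick()
        if pet.hunger < 3:   # the player 'feeds' the pet...
            pet.hunger = 5
        if pet.happy < 3:    # ...and 'plays' with it
            pet.happy = 5
    print(pet.stage, pet.alive)   # with regular care the pet reaches "Child"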

The Tamagotchi doesn't look like a dog, a cat, a hamster, a turtle, a goldfish,
or any other animal people keep as pets, and yet it serves as a pet for those
who have one. Is there any imitation involved?

It's not only that imitation would seem to require conscious intention, as Jim
Rovira has argued (Humanist 34.218), but that the terms of imitation must be
defined as well. In the visual arts, caricature is a kind of imitation; the
subject of the caricature must be recognizable in the image. At the same time,
the image is never a realistic portrayal; there is always an element of
exaggeration. What are the terms of correspondence between the caricature and
its subject? The same can be asked of those comedians who do impressions of
others: What are the terms of correspondence between the impression and the
subject?

And so we can ask of the Tamagotchi, what are the terms of correspondence
between the Tamagotchi and a real (live) pet? What affordances must the
Tamagotchi exhibit in order to be taken for a pet?

It's but a step from Tamagotchis to 'companion robots' (do a search on the
term): robots designed as companions for humans, particularly the elderly and
young children. The Japanese have pioneered here as well and are already using
companion robots. Some of these companions are cute and fuzzy, but they are not
designed to imitate real cute and fuzzy animals (thus avoiding the uncanny
valley? – a Japanese conception, by the way). Others look more sleekly robotic
but are still somehow companionable.

As for machine-reflex, what about the Roomba, the autonomous robot vacuum
cleaner? When it bumps into a wall it backs away and changes direction. Is that
a reflex? By what criteria? What about phototropism in flowers, flowers
changing their orientation during the day so as to follow the sun? Reflex or
not? What are the criteria?
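
One way to make that question concrete: in the barest sense a 'machine-reflex'
is a fixed mapping from stimulus to response, with no deliberation in between.
Here is a minimal sketch in Python of a Roomba-style bump-and-turn rule (the
names and numbers are my own, purely illustrative, not anything from iRobot):

    import random

    def reflex_step(bumped: bool, heading: float) -> tuple[float, float]:
        """A fixed stimulus-response rule: back up and turn when the bumper
        is pressed, otherwise keep moving. Returns (velocity, new heading)."""
        if bumped:
            # The 'reflex': reverse briefly and rotate by a random angle.
            return -0.2, (heading + random.uniform(90.0, 270.0)) % 360.0
        # No stimulus: continue straight ahead at cruising speed.
        return 0.5, heading

    # A few control ticks, with one simulated bump.
    heading = 0.0
    for bumped in (False, False, True, False):
        velocity, heading = reflex_step(bumped, heading)
        print(f"bumped={bumped!s:5}  velocity={velocity:+.1f}  heading={heading:.1f}")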

Bill Benzon
bbenzon@mindspring.com

917-717-9841

http://new-savanna.blogspot.com/ 
http://www.facebook.com/bill.benzon 
http://www.flickr.com/photos/stc4blues/
https://independent.academia.edu/BillBenzon
http://www.bergenarches.com 

--[2]------------------------------------------------------------------------
        Date: 2020-08-10 08:44:15+00:00
        From: Willard McCarty 
        Subject: more on imitation

I'd like to invite comments on a passage in the Preface of Kersten
Dautenhahn and Chrystopher L. Nehaniv's Imitation in Animals and
Artifacts (MIT Press, 2002):

> Imitation is often considered an important mechanism whereby
> knowledge can be transferred between agents (biological,
> computational, or robotic autonomous systems). Over the last decade,
> the topic of imitation has emerged in various areas close to
> artificial intelligence including cognitive and social sciences,
> developmental psychology, animal behavior, robotics, programming by
> demonstration, machine learning, and user-interface design. The
> importance of imitation has grown increasingly apparent to
> psychologists, ethologists, philosophers, linguists, cognitive
> scientists, computer scientists, mathematicians, biologists,
> anthropologists, and roboticists. Yet the workers confronted with the
> need to understand imitation are often unaware of relevant research
> in other disciplines. Until now, the study of imitation has lacked a
> rigorous foundation, with no major interdisciplinary framework
> available on the subject for workers in computer science, robotics,
> or the biological sciences. This volume is aimed toward remedying
> this. (p. xi)

The intention I find admirable, but what strikes me as curious at this
very early stage in the book are the implicit steps by which the
interdisciplinary ground is established for the benefit of those with
technical concerns. Note in particular the following:

'mechanism'
'knowledge ... transferred'
'agents'
'systems'
'rigorous foundation'
'framework'

To an ear trained in the reading of poetry, questions are quietly being
put to one side while features amenable to computational formalisation
are foregrounded. There's nothing wrong with doing this, but (if I am
right) work remains to be done -- by us, and likely no one else?

Comments?

Yours,
WM
--
Willard McCarty (www.mccarty.org.uk/),
Professor emeritus, Department of Digital Humanities, King's College
London; Editor, Interdisciplinary Science Reviews
(www.tandfonline.com/loi/yisr20) and Humanist (www.dhhumanist.org)




_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php


Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.