
Humanist Discussion Group


Humanist Archives: Aug. 4, 2020, 7:35 a.m. Humanist 34.209 - on GPT-3

                  Humanist Discussion Group, Vol. 34, No. 209.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                Submit to: humanist@dhhumanist.org

    [1]    From: Tim Smithers 
           Subject: Re: [Humanist] 34.203: on GPT-3 (189)

    [2]    From: Henry Schaffer 
           Subject: Re: [Humanist] 34.207: on GPT-3 (15)

    [3]    From: Henry Schaffer 
           Subject: Re: [Humanist] 34.207: on GPT-3 (15)

    [4]    From: Bill Benzon 
           Subject: Re: [Humanist] 34.207: on GPT-3 (62)

        Date: 2020-08-03 10:54:25+00:00
        From: Tim Smithers 
        Subject: Re: [Humanist] 34.203: on GPT-3

Dear Brigitte, Bill, Willard, and Gabriel,

Some responses.  Brief though I'd like this to be, short it
is not.  (Perhaps GPT-3 could re-do it in fewer words?)

Bill: I apologise for misrepresenting your position.  Perhaps
I was somehow mistaken in how I understood your words "GPT-3
represents an achievement of a high order."  And perhaps I
somehow didn't understand well your apparent swift dismissal
of Hollis Robbins's judgement of what you called a sonnet by
GPT-3.  I'm sorry.

Brigitte: I welcome your remarks on GPT-3.  The new questions
it provokes are, I think, the main value of what GPT-3 does,
and how it does it.  You have the knowledge and understanding
to tell us about some of these new questions, while I do not,
though perhaps some of what follows responds to your
questions.

Willard: As I see it, 'intelligence' is a label we have for
something we don't yet know and understand what it is a label
for.

This does not mean we cannot, or should not, enquire after
this something, but it does, I think, suggest care is needed
when trying to understand the outcomes of our enquiries.

Gabriel (I hope you won't mind me putting you last, despite
you being the first to ask).

I know -- rather well, after many failed attempts to argue
this -- that what follows looks like "mere semantics," but,
when it comes to humans playing two-player games like chess,
there is a middle ground between losing and winning.  Just
because you, a human, lost doesn't necessarily mean the other
won.  If the other is a human chess player -- someone we can
know as a human chess player -- I think it does mean they won.
If the other is a machine, or a non-communicating alien, I do
not want to say it won the game (or games) of chess.  I want
to only say the human chess player lost the game (or games) of
chess.  Chess playing we know of as a human activity.  Playing
chess well, all the way up to being the best in the world,
involves passion, dedication, considerable and deep knowledge
and understanding of chess in all its aspects as a game humans
play, together with rare skills, aptitudes, and attitudes.
This, to me, is what the term "playing chess" means, and what
it brings into a conversation that uses this term.  Just
moving the chess pieces in proper turn, and in proper ways
according to the rules of the game, in such a way that the
best human chess play cannot win the game, does not, in my
view warrant the label "chess player" because all the rest of
what it is to play chess is not there.

If we apply the term 'chess player' to anything that wins at
chess -- whether humans who have dedicated much of their lives
to learning, studying, practicing, and playing serious chess,
or machines, or non-communicating aliens that the best human
chess-players cannot win against -- then, it seems to me, we
either empty out
almost all of the meaning and use we (humans) have of the term
"chess playing," or we accept that it is reasonable to
attribute the full richness of what this term means to us
humans to these other things: machines and non-communicating
aliens.  But what do we gain from doing either of these
things?  Emptying out the rich meanings of terms or names we
have for kinds of rich human activities just makes it harder
to talk about these human activities, and harder to understand
them.  Attributing to machines what we know of (good) human
chess-players is to give away important qualities of being
human to things that do not have these qualities, and cannot
have these qualities, not at least without first showing that
they too, as machines and aliens, can take part in the
world of chess playing as we know it, and in ways that we
(humans) and they can share and appreciate.

(There is a third option on how to use the term 'chess-player,'
but I'll not try to deal with this here.)

Your questions are made up (hypothetical would be the posh
term), and, as such, should, in my view, be prefaced by a
phrase saying something like "Presuming all to be possible and
true that needs to be possible and true for the following to
make any sense ..."  This preface on the front of your two
questions, about non-communicating aliens and a deaf, mute,
illiterate human, would lead me to respond: (1) I'll tell you
about the alien when we see this happen, because until then
it's not useful to try; and (2) you tell me how the deaf,
mute, illiterate person learned how to play chess, and I will
tell you if I think it is reasonable to call this person a
chess-player.  (Actually, I don't think we'll ever see a
non-communicating alien move chess pieces according to the
rules of the game, and I also don't think a deaf, mute,
illiterate person is likely to become seriously good at
playing chess.  But, I could of course be mistaken.)

It's not asking questions I'm worrying about.  It's what kinds
of questions are useful questions.  I think Brigitte's
questions are useful questions.  I think the performance of
Deep Blue II raised useful questions in a similar way.  Why is
it possible to replicate the surface performance of certain
kinds of human activities that we know require unusual levels
and kinds of knowledge, understanding, skills, and abilities,
in humans?  Good storytelling, or good chess-playing, for
example.  What does this tell us about these kinds of human
intelligent behaviour?  Kinds of human behaviour we, as
humans, value, and describe as high achievements.  Wherein
lies the basis for this value and measure of achievement if the
surface performance can be replicated by machines that are
nothing like humans?

Work in AI has been raising these kinds of questions since its
beginnings, I think.  But we seem to get somewhat blinded to
them by falling into the Eliza Trap.  For me, AI, as a
research field, seeks to investigate intelligent behaviour by
trying to replicate it in the artificial.  Since human
behaviour, in all its rich variations and variety, is the
almost exclusive example base of intelligent behaviour we have
(until the aliens come along), we should expect outcomes of
AI research to help us understand more about our own
intelligent behaviour, if the research goes well, and its
outcomes are properly understood.

There is, however, a hazard to doing AI research, I think, and
I pointed to this above in my reply to Willard.  Care is
needed in understanding what the outcomes of AI research might
tell us about human intelligent behaviour.  Likeness and close
similarity, including superior likeness, are not necessarily
the result of a replication of the intelligent behaviour being
studied.  We need to apply what I call the Flower Test, and do
this rather carefully.

The Flower Test is supposed to distinguish what kind of
artificial we have: an Artificial Flower kind of artificial,
or an Artificial Light kind of artificial.  The fact that we
-- well some knowledgeable and skilled people -- can make
artificial flowers that most of the rest of us have a hard
time not seeing as real flowers, until we are able to inspect
them very closely, tells us something about how we (visually)
perceive things in the world.  It is not mere trickery.  But,
the making of very convincing artificial flowers doesn't tell
us very much about the how and why of real flowers.  Indeed,
making good artificial flowers depends upon a good knowledge
and understanding of real flowers that has been otherwise
acquired.

The fact that we can make artificial light that is the same as
natural light does help us understand and know things about
real light, because, of course, it is the same thing that is
made in each case.  And, it turns out, we cannot make
artificial light that is an artificial flower kind of
artificial light.  That too, tell us something interesting.

To finish.  I don't find the various outputs of GPT-3 very
interesting.  It doesn't surprise me that, given sufficient
examples of human texts (in English), together with sufficient
computation, a machine of the kind GPT-3 is, would reproduce
examples of texts that look very like those written by humans.
Why would we expect anything else?  What else could GPT-3 do?
Why, exactly, is anybody surprised by what it does?  There
must be some statistical structure and regularity in all this
text, else it wouldn't be text written by humans for other
humans to read and understand, and sometimes enjoy and marvel
at.  So, if GPT-3 is well built and works properly, why would
it fail to capture these structures and regularities and then
use these to produce human-like texts?  But, just because it
does, why do we think this has anything to do with human
intelligence, or any kind of intelligence at all?
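
Tim's point that human text must contain statistical structure,
else a machine could not reproduce it, can be seen in miniature
with a toy bigram model.  This is a hypothetical sketch, not how
GPT-3 is built -- GPT-3 learns sub-word patterns across billions
of parameters -- but the extract-regularities-then-generate
principle is the same:

```python
import random
from collections import defaultdict

# Toy corpus; any human-written text has regularities like these.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog on the mat").split()

# Extract the regularities: for each word, record the words
# observed to follow it (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n, seed=0):
    """Generate n words by sampling each next word from
    what was observed to follow the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

Every adjacent word pair in the output was seen in the corpus, so
the result looks locally "human-like" -- without the model knowing
anything about cats, dogs, or mats.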

What I think GPT-3 is showing us, like other AI constructions
before it, is that there is a whole lot more to human
intelligent behaviour than statistical pattern extraction and
pattern generation mechanisms.  I would say we kind of knew
this already, but, as Brigitte showed us, each time we get a
new AI doing something that looks like human level surface
performance, we can see new questions to ask about it, and
thus perhaps see more of how what underlies human intelligent
behaviour is not like this, and still not yet well understood.

Best regards,

Tim Smithers

PS: Does anybody here know whether the people who built GPT-3
    have shown that its outputs are not in any way influenced by
    artifacts of the encoding used for the texts it has been
    trained with?
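
On the PS: the GPT family is trained on sub-word units produced by
byte-pair encoding (BPE).  A toy sketch of a single BPE merge step
(hypothetical word list, illustrative only) shows why the question
is a fair one -- the units the model actually sees are artifacts of
the encoding procedure and its corpus statistics, not of the text
alone:

```python
from collections import Counter

# Hypothetical mini-corpus; real BPE runs over byte sequences.
words = ["lower", "lowest", "low", "newer", "newest"]

# Represent each word as a sequence of symbols (initially characters).
seqs = [list(w) for w in words]

def most_frequent_pair(seqs):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter()
    for s in seqs:
        for a, b in zip(s, s[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(seqs, pair):
    """Replace every occurrence of `pair` with one fused symbol."""
    merged = []
    for s in seqs:
        out, i = [], 0
        while i < len(s):
            if i + 1 < len(s) and (s[i], s[i + 1]) == pair:
                out.append(s[i] + s[i + 1])
                i += 2
            else:
                out.append(s[i])
                i += 1
        merged.append(out)
    return merged

pair = most_frequent_pair(seqs)  # ('w', 'e') in this toy corpus
seqs = merge_pair(seqs, pair)
print(pair, seqs[0])
```

A different corpus, or a different tie-breaking rule, yields a
different vocabulary of units -- which is exactly the kind of
encoding artifact the PS asks about.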

        Date: 2020-08-03 23:29:37+00:00
        From: Henry Schaffer 
        Subject: Re: [Humanist] 34.207: on GPT-3

More reading - emphasizing the technology, but with examples of text
generation - HT oreilly.com

        Date: 2020-08-03 23:48:33+00:00
        From: Henry Schaffer 
        Subject: Re: [Humanist] 34.207: on GPT-3

And then some more reading suggestions from my colleagues - not on the

The last one is of particular interest to me as I've long had an interest
in plagiarism - e.g. see
https://projects.ncsu.edu/it/open_source/howtoplagiarize.html  Is this
"algorithmically generated" product a work of someone else? Or is it your
product since you clicked?


        Date: 2020-08-03 19:49:42+00:00
        From: Bill Benzon 
        Subject: Re: [Humanist] 34.207: on GPT-3

In his remarks on GPT-3 Jim Rovira noted: "I think the machine as a writer
demonstrates the sophistication of its programming only" [Humanist 34.207].

Engines like GPT-3 are NOT programmed in any ordinary sense of the word. You
don't program in the way you would write a script to conduct a stylometrics
analysis of a text, or generate a concordance, nor, for that matter, code up a
general-purpose word processor. That's not how they work.

We can think of GPT-3 (GPT = Generative Pre-trained Transformer) as consisting
of three components:

1.      encoder-decoder (the engine, if you will)
2.      language model (175 billion parameters)
3.      user interface (how one prompts the engine)

In addition to that we need a corpus of text to train 1.

1 and 3 are programmed in the ordinary sense of the word. The language model is
not. Rather, the language model is created when 1 is run against the training
corpus. The language model, the weightings on 175 billion parameters (don't
ask me what they are because I don't know), is opaque to the human programmer.
They don't know how it does what it does and they have no way of manipulating
it directly. That opaque and inaccessible language model is what makes the thing
interesting.
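
Bill's distinction between what is programmed and what is learned
can be made concrete in miniature.  In this hypothetical sketch the
training loop is programmed in the ordinary sense, but the final
values of the parameters w and b appear nowhere in the source code;
they emerge from running the programmed procedure against data, as
GPT-3's 175 billion weights do:

```python
# Training data: exact samples of the target relation y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # parameters: learned, never hand-written
lr = 0.01         # learning rate

# The "engine": ordinary, hand-written code (Bill's component 1).
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        # gradient step on squared error
        w -= lr * err * x
        b -= lr * err

# The "model": numbers produced by training (Bill's component 2).
print(round(w, 3), round(b, 3))
```

The programmer can read and debug the loop, but can only inspect,
not directly author, the resulting parameter values -- and at
GPT-3's scale even inspection tells them very little.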

What does GPT-3 tell us about human 'intelligence'? I don't know and, to
wax polemical, I don't really care. What does the performance of an automobile
tell us about how humans or horses run races? Nothing. They're different
things.

At the moment we're pretty much limited to comparing GPT-3's output with our
own. That's a tricky business. I'm not sure what it tells us. To the extent
that GPT-3 produces convincing simulacra of human writing, maybe it tells us
that an awful lot of what we write is based on mixing and remixing boilerplate.

In neither case, humans or GPT-3, can we 'look under the hood' and observe
the process at work. In both cases the process is opaque. I can't help but
wonder what Giambattista Vico would think of it all.

As you know, he believed that to understand something, you have to be able to
construct it (verum factum). Thus we are at a disadvantage in understanding the
natural world as we did not create it; God did. But the human world we
have created, and so we should be able to understand it. As things have since
unfolded, though, we've done quite well in understanding the natural world,
but not so well in understanding our own world. And now we have this latest crop
of AI devices, devices created through some kind of learning. We've created
them but, guess what? they're opaque to us.


Bill Benzon



Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.