
Humanist Archives: Aug. 2, 2020, 7:58 a.m. Humanist 34.203 - on GPT-3

                  Humanist Discussion Group, Vol. 34, No. 203.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Brigitte Rath 
           Subject: Re: [Humanist] 34.202: on GPT-3 (109)

    [2]    From: Bill Benzon 
           Subject: Re: [Humanist] 34.199: on GPT-3 (148)

    [3]    From: Willard McCarty 
           Subject: GPT-3 and me (62)


--[1]------------------------------------------------------------------------
        Date: 2020-08-01 18:51:50+00:00
        From: Brigitte Rath 
        Subject: Re: [Humanist] 34.202: on GPT-3

I'm a literary scholar, and I am utterly fascinated by GPT-3, the new machine
learning language model with an unimaginable 175 billion parameters. GPT-3 makes
me ask new questions, among them what we can infer from a coherent text about
the faculties of its creator.

I'll start with a brief example. Given only the prompt

Here is an award winning short story:
They Come From The Earth
By John Vickersonik

GPT-3 offered several beginnings for different stories that could conceivably
follow that brief introduction. These are the first two:

1

They come from the earth. They crawl through cracks in the sidewalks and find
their way into basements, between walls and into all manner of dark places. They
come from the earth when we open up the ground to dig a new foundation or plant
a tree. They come from the earth when we disturb the dirt. They can’t wait to
get out.


2

THE TALL SHIP SAT AT THE END OF THE DOCK. We gathered to see the ship and all
its splendor, like pilgrims at an altar, as a swarm of ancient buses hissed up
the coast. I stood among the crowd watching each bus disgorge passengers onto
wooden planks laid over mudflats. The guests swarmed into town for their free
visit to another world: our island on Earth where strange new gods were
worshipped; here they could gather at some primitive shrine from which they
could send offerings back home or sell out-of-date clothes in pawnshops full of
old junk salvaged from forgotten times.
[1]


Both of these opening paragraphs are written in correct and coherent English.
Both develop their own style: the second uses strong verbs -- "buses hissed,"
"bus disgorge," "guests swarmed" -- which fit together to create an impression
of animality, framed in a series of similes and metaphors evoking worship. The
effect of the first, with its short sentences and more typical collocations,
rests on parallelisms and suggests a possible opposition of us vs. them. "They,"
specified only by their acts but otherwise undescribed, are moving and
encroaching on "our" space now, in the present tense, creating a subtle sense of
dread. Both texts clearly establish syntactic and semantic patterns, each in a
different, individual way.

Both of these opening paragraphs make narrative sense because they allow for
what Peter Brooks calls "anticipation of retrospection": we can easily imagine
that the end to these stories already exists, that, when reading the subsequent
paragraphs and pages, the hints and information we are just now gathering will
fit into a larger, more complex and ultimately satisfying picture. We imagine
that these paragraphs were written by someone with an idea for a whole story,
that these paragraphs are shaped by a larger idea.

They are not, though. No one had anything bigger in mind of which these
sentences are just the beginning. Although GPT-3 could doubtless create more,
this is all there is of these two stories with the same title; nothing further
is necessary for them to exist.

This is all there is: this seems to me the most noteworthy aspect of GPT-3. All
GPT-3 is is a prediction model for text. All it does is run a statistical
calculation over an immense model to predict which words are most likely to come
next. GPT-3 does not have sensory organs, mental models [2], a theory of mind
[3], or any of the kind of interaction that teaches a child that this here is a
ball and that over there a doggie. For GPT-3, the whole universe is BPE-encoded
(byte-pair-encoded) text.
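
To make that concrete, here is a toy sketch in Python -- my own illustration,
nothing like GPT-3's actual implementation -- of a model that "predicts" the
next word purely from counts over a miniature corpus. GPT-3 replaces the count
table with a 175-billion-parameter neural network over BPE tokens, but the task
it is trained on is the same: given the text so far, predict what comes next.

    # A deliberately tiny illustration of next-word prediction.
    # GPT-3 is a huge transformer over BPE tokens, not a bigram
    # count table like this one, but the objective is the same.
    from collections import Counter, defaultdict
    import random

    corpus = ("they come from the earth . they come from the earth "
              "when we disturb the dirt .").split()

    # For each word, count which words follow it and how often.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = following[word]
        return random.choices(list(counts), weights=counts.values())[0]

    # "Prompt" the model with one word and follow the drift.
    word = "they"
    generated = [word]
    for _ in range(12):
        word = predict_next(word)
        generated.append(word)
    print(" ".join(generated))

Even this crude model produces locally plausible continuations; scale the same
idea up enormously and the continuations stay plausible for whole paragraphs.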

And yet, GPT-3 can produce not only creative texts like these opening paragraphs
of short stories, but also blog posts [4], newspaper articles [5], interviews
with famous people [6] and whatever else it is prompted to do within the realm
of text [7].
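
For readers curious what "prompting" looks like in practice: access is through
OpenAI's beta API. Here is a rough sketch using their Python client as of the
mid-2020 beta; the parameter values are illustrative, not the settings used for
the stories above.

    import openai  # OpenAI's Python client for the GPT-3 beta

    openai.api_key = "..."  # beta access key

    response = openai.Completion.create(
        engine="davinci",      # the largest GPT-3 model
        prompt=("Here is an award winning short story:\n"
                "They Come From The Earth\n"
                "By John Vickersonik\n"),
        max_tokens=200,        # length of each continuation
        temperature=0.7,       # > 0, so repeated runs drift differently
        n=2,                   # ask for two continuations at once
    )
    for choice in response.choices:
        print(choice.text)

Everything the model returns is produced the same way: one predicted token at a
time, conditioned on the prompt and on what it has generated so far.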

GPT-3 makes me ask what knowing a language entails, or, more precisely: to what
extent does "just" being able to produce an appropriate next sentence imitate
knowledge about the world, about conversational norms, about text genres?

And as someone who tends to plan excessively before writing (sometimes with the
result that no writing ever happens), I am amazed how far one seems to get by
just starting to write and then following the drift.

What kinds of questions does GPT-3 raise for you?



[1] https://arr.am/2020/07/31/gpt-3-using-fiction-to-demonstrate-how-prompts-impact-output-quality/

[2] When asked to keep track of objects in a box, GPT-3 seems to struggle:
https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html (Though the
quality of prompts seems to play a huge role in how well GPT-3 performs, see
https://arr.am/2020/07/25/gpt-3-uncertainty-prompts/)

[3] https://andrewmayneblog.wordpress.com/2020/06/11/the-aichannels-project/
While "It hasn't happened for me." is a likely phrase in a story about time
travel, it does not make a lot of sense that the time traveller says it here.
GPT-3 seems to have no model of the knowledge or intention of the individual
characters in a story.

[4] https://perceptions.substack.com/p/immortality-and-its-consequences

[5] https://arxiv.org/pdf/2005.14165.pdf, p. 28. (This is the official paper on
GPT-3.)

[6] https://jackmgaller.com/blog/interviews-with-my-favorite-people-using-gpt-3

[7] https://github.com/sw-yx/gpt3-list



--[2]------------------------------------------------------------------------
        Date: 2020-08-01 08:48:22+00:00
        From: Bill Benzon 
        Subject: Re: [Humanist] 34.199: on GPT-3

Dear Tim,

While I appreciate the fact that you took the time to respond at some length, I
fear that your response does not represent my position accurately. I endeavor to
correct the record in my response below.

> On Jul 31, 2020, at 3:49 AM, Humanist wrote:
>
> -[2]------------------------------------------------------------------------
>        Date: 2020-07-30 10:36:52+00:00
>        From: Tim Smithers
>        Subject: Re: [Humanist] 34.193: on GPT-3 & beyond
>
> Dear Bill,
>
> You say
>
>  "GPT-3 represents an achievement of a high order:"

That was from the opening line of the third paragraph in my second post in the
series: "1. No meaning, no how: GPT-3 as Rubicon and Waterloo, a personal
view". I then go on at some length, 2800 words or so, to present what are by
now fairly standard arguments to the effect that GPT-3 is just a dumb machine
and can't possibly understand anything. I believe those arguments, otherwise I
wouldn't have bothered to make them. In the course of that post I quote from
Martin Kay at considerable length. Kay is one of the Grand Old Men of
computational linguistics. He studied under Margaret Masterman at Cambridge and
then, in the mid-to-late 1950s, went off to the RAND Corporation's program in
machine translation. There he worked with the late David Hays, with whom I
studied computational semantics in the 1970s. Here's what Kay had to say about
statistical models of language:

> "What we are doing is to allow statistics over words that occur very close to
one another in a string to stand in for the world construed widely, so as to
include myths, and beliefs, and cultures, and truths and lies and so forth. As a
stop-gap for the time being, this may be as good as we can do, but we should
clearly have only the most limited expectations of it because, for the purpose
it is intended to serve, it is clearly pathetically inadequate. The statistics
are standing in for a vast number of things for which we have no computer model.
They are therefore what I call an 'ignorance model.'"


This was in the mid-2000s, well before GPT-3, but it applies to GPT-3 and every
other statistical model. They are all ignorance models, to use Kay's term, every
one of them.

> Whose "high order" is that, may I ask?  And how, in your view,
> does GPT-3 achieve this, exactly?

I'm judging GPT-3 against previous AI devices going back to Eliza. I was in my
20s when Eliza entered the hype stream, so that is in my direct experience. I
know about the collapse of machine translation in the 1960s, but was only in
high school at the time and didn't even know about that enterprise. I learned
about the collapse of MT from my teacher, Dave Hays, who was there when it
happened and did his best to keep the baby from being thrown out with the bath
water.

What I am trying to understand is how, given that GPT-3 is a dumb machine
working from an ignorance model, it is able to do anything at all. That strikes
me as a worthwhile enterprise. That's what I'm up to in the third post in the
series, the one Willard referenced at the top of Humanist 34.199: "2. The brain,
the mind, and GPT-3, Part 1: Dimensions and conceptual spaces."

As for sonnets, GPT-3 was not created to generate sonnets, something you just
barely acknowledge at the end of your post, and it is disingenuous to evaluate
it on a single attempt at a sonnet.

> In your "Electric Conversation" with Hollis Robbins (see HDG
> Vol 34, No 178) you asked her to give her view on a sonnet
> generated using GPT-3, using the first three lines of Marcus
> Christian's sonnet "The Craftsman" [1] as input.  This is how
> Hollis Robbins responded [2]

You missed a small detail. It was the first two lines of the sonnet plus the
phrase "A sonnet by Marcus Christian:". That phrase was added by Phil Mohun, the
man who interacted with GPT-3 on my behalf. I have no objection to his addition;
he knows how to interact with GPT-3.

[snip]

>   But what's got my attention is the words "drum" and "bell"
>   in those last two lines. ...
>
> Really?  But these aren't the last two lines.  As Robbins
> explained, GPT-3 didn't finish the sonnet.  It needs another
> six lines (see [2]).  And "drum" and "bell" are what you see
> as a highlight here?  This is "high order achievement"?  Take
> a look at Marcus Christian's real version.  GPT-3, in my
> (uneducated) reading, gets nowhere near to Christian's words
> here.  Fixing on "drum" and "bell" looks to me like fishing
> for something that can be remarked upon as surprising in the
> GPT-3 output.

Not surprising so much as interesting. And in doing that I am doing what every
literary critic does in the face of a text of interest. The question that's been
hanging over the academic critical enterprise since the 1960s is: when we
interpret a text, are we pointing out what's actually there in the text, or
projecting our own meaning onto it? That question has been endlessly debated to
no satisfactory conclusion.

I'm still interested in just how those two words appeared in GPT-3's poem-that-
is-not-a-sonnet when and where they did. Unfortunately, we cannot open GPT-3 up
and observe how that happened. That's the very peculiar thing about these
devices, these massive statistical language models. GPT-3 was trained on some
300 billion tokens and the resulting model has 175 billion parameters. But we do
not know how it operates. We create these things in our own image, if you will,
and it turns out that they are as opaque to us as we are to ourselves.

Let me conclude by presenting the full paragraph from which you extracted the
opening line:
> "GPT-3 represents an achievement of a high order; it deserves the attention it
has received, if not the hype. We are now deep in "here be dragons"
territory and we cannot go back. And yet, if we are not careful, we'll never
leave the dragons, we'll always be wild and undisciplined. We will never
actually advance; we'll just spin faster and faster. Hence GPT-3 is both a
Rubicon, the crossing of a threshold, and a potential Waterloo, a battle we
cannot win."
>

By "we" in that last line I mean the community that created GPT-3. If I had
to guess, I'd say that that community will look at its achievements and keep
right on doing what got it to this point. And then it will proceed to walk over
the edge of a cliff, like a character in a cartoon, look down, and crash and
burn. It happened in the mid-1960s with machine translation and it happened in
the mid-1980s with a previous generation of AI technology (known generally as
symbolic computing).

I hope that doesn't happen, but I fear that it will. Meanwhile I continue to
find such things fascinating.

Still, you're right about that non-sonnet. It is not a very good poem. I
marvel that it exists at all. That's what I want to understand.

Regards,

Bill Benzon
bbenzon@mindspring.com 

917-717-9841

http://new-savanna.blogspot.com/ 
http://www.facebook.com/bill.benzon 
http://www.flickr.com/photos/stc4blues/
https://independent.academia.edu/BillBenzon
http://www.bergenarches.com 


--[3]------------------------------------------------------------------------
        Date: 2020-08-01 08:12:08+00:00
        From: Willard McCarty 
        Subject: GPT-3 and me

My response to GPT-3 and all such Turing-Testy performances is to look
into the gap between them at their best and human performance at its
best. I want to know, what do we learn about our own abilities, and what
are the differences between the machine's offering and our own -- and 
again, by "our own" I mean the very best we humans can produce? Is the
attempt at seamless imitation of garden-variety prose really the right 
way to go? (See e.g. Tolentino's review of Archer and Jockers, The Bestseller
Code, in The New Yorker for 23 September 2016.[*])

In his paper "Human versus Mechanical Intelligence"[**], Turing's friend,
mathematician Robin Gandy, wrote after quoting from "Computing Machinery 
and Intelligence":

> The 1950 paper was intended not so much as a penetrating
> contribution to philosophy but as propaganda. Turing thought the
> time had come for philosophers and mathematicians and scientists to
> take seriously the fact that computers were not merely calculating
> engines but were capable of behaviour which must be accounted as
> intelligent; he sought to persuade people that this was so. He wrote
> this paper - unlike his mathematical papers - quickly and with enjoyment.
> I can remember him reading aloud to me some of the passages - always
> with a smile, sometimes with a giggle. Some of the discussions of the
> paper I have read load it with more significance than it was intended
> to bear.

To my mind (to rephrase and expand), three questions arise:

(1) What is 'intelligence'? Should we not be talking in terms of
different intelligences? (See the recent research of the ethologists.)

(2) How do we develop the artificial kind(s) according to its (their)
own particular characteristics and constraints? From the evidence we
currently have, what is utterly new and strange but somehow teases us
intellectually, perceptually?

and finally, to quote Marilyn Strathern from her discussion of Donna
Haraway on cyborgs,[***]

(3) "The question is the kind of connection one might conceive between
entities that are made and reproduced in different ways - have different
origins in that sense - but which work together."

Comments?

Yours,
WM

---
*https://www.newyorker.com/books/page-turner/the-bestseller-code-tells-us-what-we-already-know
**In Machines and Thought: The Legacy of Alan Turing. Ed. P. J. R.
Millican and A. Clark. Oxford: Clarendon Press, 1996, p. 126.
***Partial Connections. Updated edn. Walnut Creek CA: AltaMira Press,
2005, p. 37.


--
Willard McCarty (www.mccarty.org.uk/),
Professor emeritus, Department of Digital Humanities, King's College
London; Editor, Interdisciplinary Science Reviews
(www.tandfonline.com/loi/yisr20) and Humanist (www.dhhumanist.org)




_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php


Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.