
Humanist Discussion Group



Humanist Archives: July 31, 2020, 8:49 a.m. Humanist 34.199 - on GPT-3

                  Humanist Discussion Group, Vol. 34, No. 199.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Bill Benzon 
           Subject: NEW SAVANNA: 2. The brain, the mind, and GPT-3, Part 1: Dimensions and conceptual spaces (56)

    [2]    From: Tim Smithers 
           Subject: Re: [Humanist] 34.193: on GPT-3 & beyond (269)


--[1]------------------------------------------------------------------------
        Date: 2020-07-30 14:05:37+00:00
        From: Bill Benzon 
        Subject: NEW SAVANNA: 2. The brain, the mind, and GPT-3, Part 1: Dimensions and conceptual spaces

Dear Willard and Humanists,

Here's the third post in my series on GPT-3, now projected to have six posts.
This is the key post.

As you know, much NLP (natural language processing) work is accomplished through
'blind' empiricism: it works, and that's all that matters; just how and why
it works is for someone else to figure out. In this case we've got a double
blindness. On the one hand, such language models are opaque. You can't open
them up and observe how they operate. In any case, just where would you look in
a model that has 175 billion parameters? That's one thing.

The other thing is that these models do not follow from a coherent theory of
language and mind. Rather, they follow from an empirical tradition stretching
back to Gerard Salton's work on document retrieval in the 1960s and 70s. Thus
there has been considerable discussion among linguists on just what's going on
with such models. I gave my own version of that discussion in the previous post
in the series and I cite one such discussion, a very recent one, in this post.
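
To make that Salton lineage concrete, here is a toy sketch of his
vector-space model (my own illustration; the three 'documents' and the
query are invented): texts become term-count vectors, and relevance is
just the cosine of the angle between the query vector and each document
vector. Nothing but relations among bare signifiers, yet retrieval works.

    import math
    from collections import Counter

    def vectorize(text):
        # Bag of words: a text becomes a vector of term counts.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Relevance as the cosine of the angle between two vectors.
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    docs = ["the craftsman plies his art with consummate care",
            "chess machines play without understanding",
            "art and beauty outlive the artist"]
    query = vectorize("art and care")
    for d in docs:
        print(round(cosine(vectorize(d), query), 3), "|", d)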

What I do in this post is sketch out the beginnings of a coherent account of
why these models work. Here are the first two paragraphs of the post:

> The purpose of this post is to sketch a conceptual framework in which we can
> understand the success of language models such as GPT-3 despite the fact that
> they are based on nothing more than massive collections of bare naked
> signifiers. There's not a signified in sight, much less any referents. I have
> no intention of even attempting to explain how GPT-3 works. That it does work,
> in an astonishing variety of cases if (certainly) not universally, is
> sufficient for my purposes.
>
> First of all I present the insight that sent me down this path, a comment by
> Graham Neubig in an online conversation that I was not a part of. Then I set
> that insight in the context of an insight by Sydney Lamb (meaning resides in
> relations), a first-generation researcher in machine translation and
> computational linguistics. I then take a grounding case from Julian Michael,
> that of color, and suggest that it can be extended by the work of Peter
> Gärdenfors on conceptual spaces.
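
To give a feel for what that comes to, here is a toy illustration of a
Gärdenfors-style conceptual space (my own; the coordinates are invented,
not from the post): colour terms become points in a three-dimensional
quality space, and judged similarity falls out as simple geometric
distance, with no signifieds in sight.

    import math

    # A toy conceptual space for colour: each term is a point on the
    # dimensions (hue, saturation, brightness); all values invented.
    colours = {
        "red":    (0.00, 0.90, 0.60),
        "orange": (0.08, 0.90, 0.60),
        "blue":   (0.60, 0.90, 0.50),
        "navy":   (0.63, 0.90, 0.30),
    }

    def similarity_distance(a, b):
        # In a conceptual space, similarity is geometric proximity.
        return math.dist(colours[a], colours[b])

    print(similarity_distance("red", "orange"))  # small: near neighbours
    print(similarity_distance("red", "blue"))    # large: distant hues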


With regards to all,

Bill Benzon

https://new-savanna.blogspot.com/2020/07/2-brain-and-gpt-3-part-1-dimensions-and.html

Bill Benzon
bbenzon@mindspring.com

917-717-9841

http://new-savanna.blogspot.com/ 
http://www.facebook.com/bill.benzon 
http://www.flickr.com/photos/stc4blues/
https://independent.academia.edu/BillBenzon
http://www.bergenarches.com

--[2]------------------------------------------------------------------------
        Date: 2020-07-30 10:36:52+00:00
        From: Tim Smithers 
        Subject: Re: [Humanist] 34.193: on GPT-3 & beyond

Dear Bill,

You say

  "GPT-3 represents an achievement of a high order: ..."

Whose "high order" is that, may I ask?  And how, in your view,
does GPT-3 achieve this, exactly?

In your "Electric Conversation" with Hollis Robbins (see HDG
Vol 34, No 178) you asked her to give her view on a sonnet
generated using GPT-3, using the first three lines of Marcus
Christian's sonnet "The Craftsman" [1] as input.  This is how
Hollis Robbins responded [2]

   HR: Well it's not finished as a sonnet of course — it's the
   first part of a sonnet, at eight lines.  The rhyme scheme
   is unclear -- on the one hand skill/toil is a nice
   para-rhyme but 'care' and 'up' don't seem to have rhymes
   and that's a problem in a poem that is explicitly about
   creating art "with consummate care."  GPT-3's six lines are
   careless.  Moreover the words chosen have no coherent
   meaning.  Why use a word like score, with multiple
   meanings, if you’re not going to lean on the multiple
   meanings?  Why a million?  Why the strange archaic syncope,
   "howe’er"?  Who puts up a drum?  Why the wasted word
   phrase, "Can never be made into"?

This, to me, seems quite damning, and certainly not a sign
of "achievement of a high order."

In response, you dismissed Hollis Robbins's worries, and said

   BB: That's all well and good.  I certainly don’t think
   GPT-3 is going to put poets out of work.  Nor, for that
   matter, GPT-6, GPT-13, or even GPT-42.  I don’t think
   that’s how this is going to evolve.

   But what's got my attention is the words "drum" and "bell"
   in those last two lines. ...

Really?  But these aren't the last two lines.  As Robbins
explained, GPT-3 didn't finish the sonnet.  It needs another
six lines (see [2]).  And "drum" and "bell" are what you see
as a highlight here?  This is "high order achievement"?  Take
a look at Marcus Christian's real version.  GPT-3, in my
(uneducated) reading, gets nowhere near to Christian's words
here.  Fixing on "drum" and "bell" looks to me like fishing
for something that can be remarked upon as surprising in the
GPT-3 output.  But it's not surprise that makes Marcus
Christian's sonnet amazing, in my view.  It's a deep
understanding of the human condition, and a beauty in
expressing this.

But this gross overrating of an AI program has all happened
before, starting way back with Joseph Weizenbaum's Eliza
program of 1964 to 1966.  This too surprised people, and this
surprise worried Weizenbaum.

Another time: Deep Blue I and II, IBM's chess-playing machines.
When Deep Blue II beat Garry Kasparov -- thought by some in
the game as perhaps the best human chess player ever -- we
heard doom and gloom predictions about the future of
humankind, and of chess as a game people play.  What happened?
Humans are still here, still messing up the planet.  And more
of them play chess now, some teaming up with computer chess
programs to do this.

Another one: in 1991 David Cope, Professor of Music at the
University of California at Santa Cruz, published his book
"Computers and Musical Style," about his computer program
called Experiments in Musical Intelligence (EMI), nicknamed
Emmy.  (Emily Howell was the name of Cope's later program.)
This, when suitably set up and configured with plenty of
carefully prepared data, could convincingly produce music in
the style of Mozart, for example.  EMI became famous because
it shocked, and somewhat outraged, Douglas Hofstadter (he of
Gödel, Escher, Bach fame).  Hofstadter expressed deep surprise
and disbelief on first hearing what EMI could produce, and
responded in an uncharacteristically dismal way, see

   3 Pessimistic Possibilities
    clip from a 1997 Douglas Hofstadter talk about David
    Cope's EMI and algorithmic music [1m31s]
  

He wrote more on this here

   Essay in the style of Douglas Hofstadter
   by Douglas Hofstadter
   AI Magazine, Vol 30, No 3, Fall 2009
   
   (Full article)

And he organised a series of seminars with David Cope that
resulted in a book [3].

But, again, humans did not stop composing music, far from it.
We have not come to think of Chopin's music as shallow, nor
of music in general, nor that "human souls and minds are a lot
simpler than we thought" (though perhaps we should for other
more contemporary reasons).

Why does this keep happening?  Why do we keep saying "with
this new AI the end is nigh?"  Because, I think, we keep
calling things what they are not.

Eliza was not able to hold a conversation with a person.  It
just sort of looked like it did, or sort of felt like it did,
to some people, at least.
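
For anyone who has never looked inside it, Eliza's trick was little
more than keyword matching and canned reassembly rules.  A toy
reconstruction (mine; the patterns and responses are invented, not
Weizenbaum's actual DOCTOR script) makes the point:

    import re

    # A few Rogerian reassembly rules in the spirit of Eliza; the
    # patterns and templates here are invented for illustration.
    rules = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"my (.*)",     "Tell me more about your {0}."),
        (r".*",          "Please go on."),
    ]

    def eliza(utterance):
        for pattern, template in rules:
            m = re.fullmatch(pattern, utterance.lower().strip(" .!?"))
            if m:
                return template.format(*m.groups())

    print(eliza("I feel ignored by machines."))
    # -> Why do you feel ignored by machines?

No model of the conversation, no memory, no meaning; just surface
patterns, which was exactly Weizenbaum's point.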

Garry Kasparov lost his second chess match against IBM's Deep
Blue II, but that doesn't mean Deep Blue II plays chess.  It
means Garry Kasparov could not win enough times when he played
chess against Deep Blue II. Deep Blue II, the machine, could
not say anything about its "chess playing;" it couldn't
explain what it did; it couldn't tell you anything about
chess, its history, cultural significance, about other
important players, what the rules of the game are ...  all
things people who play chess well can tell you about, and, in
my view, are essential and integral to a proper understanding
of what we mean by "plays chess."

In the case of David Cope's EMI, the reality was, as Cope
freely admitted, rather more modest than Hofstadter's response
suggested, and in fact the music in the style of Mozart that
it produced was noticeably not as good as original Mozart.
And, a program like EMI was never going to be able to get near
to the music of people like John Cage, or Iannis Xenakis, for
example, to name just two composers.

Developing EMI did teach David Cope a lot about music and how
to understand it.  Playing against Deep Blue also taught Garry
Kasparov more about human chess playing.  And, though we seem
to have forgotten it, Weizenbaum's Eliza showed us just how
easily we humans can be fooled into saying machines can do
things we can do when they don't.

A good use of GPT-3, it seems to me, could be to use it to
massively flood Twitter, Facebook, Instagram, and all the other
(so called) Social Media platforms, with empty words, so that
we can all forget about trying to find the "true" and
"factually correct" words, while avoiding and ignoring the
false, insulting, and hateful words, in these places.

I'm happy staying with known-to-work-well filter-in places
like Humanist, rather than trying to use modern filter-out
places that will always fail to keep things civilised,
sometimes spectacularly, as with the current president of the
USA, I would say.

Best regards,

Tim


Notes

[1] The Craftsman, by Marcus B. Christian

    I ply with all the cunning of my art
    This little thing, and with consummate care
    I fashion it—so that when I depart,
    Those who come after me shall find it fair
    And beautiful. It must be free of flaws—
    Pointing no laborings of weary hands;
    And there must be no flouting of the laws
    Of beauty—as the artist understands.

    Through passion, yearnings infinite—yet dumb—
    I lift you from the depths of my own mind
    And gild you with my soul’s white heat to plumb
    The souls of future men. I leave behind
    This thing that in return this solace gives:
    "He who creates true beauty ever lives."


[2] An Electric Conversation with Hollis Robbins on the
    Black Sonnet Tradition, Progress, and AI, with Guest
    Appearances by Marcus Christian and GPT-3
    By Bill Benzon
    Posted on Monday, Jul 20, 2020
    


[3] David Cope, 2004.  Virtual Music: Computer Synthesis of
    Musical Style, MIT Press.  In particular, see chapter 2:
    Staring Emmy Straight in the Eye -- and Doing My Best Not
    to Flinch, by Hofstadter, pp. 33 ff.


> On 28 Jul 2020, at 08:25, Humanist  wrote:
>
>                  Humanist Discussion Group, Vol. 34, No. 193.
>            Department of Digital Humanities, King's College London
>                   Hosted by King's Digital Lab
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2020-07-28 04:40:25+00:00
>        From: Bill Benzon 
> Subject: NEW SAVANNA: 1. No meaning, no how: GPT-3 as Rubicon and Waterloo, a personal view
>
> [The following concerns an AI text-generator called GPT-3, for which e.g. see
> https://www.wired.co.uk/article/gpt-3-openai-examples --WM]
>
> Willard and fellow humanists -- The second installment (of 5 planned) in my
> series on GPT-3 and beyond.
>
> 1. No meaning, no how: GPT-3 as Rubicon and Waterloo, a personal view
>
> I say that not merely because I am a person and, as such, I have a point of
> view on GPT-3, and related matters. I say that because the discussion is
> informal, without journal-class discussion of this, that, and the others,
> along with the attendant burden of citation, though I will offer a few
> citations. Moreover, I'm pretty much making this up as I go along. That is to
> say, I am trying to figure out just what it is that I think, and see value in
> doing so in public.
>
> What value, you ask? It commits me to certain ideas, if only at a certain
> time. It lays out a set of priors and thus serves to sharpen my ideas as
> developments unfold and I, inevitably, reconsider.
>
> GPT-3 represents an achievement of a high order; it deserves the attention it
> has received, if not the hype. We are now deep in "here be dragons"
> territory and we cannot go back. And yet, if we are not careful, we'll never
> leave the dragons, we'll always be wild and undisciplined. We will never
> actually advance; we'll just spin faster and faster. Hence GPT-3 is both a
> Rubicon, the crossing of a threshold, and a potential Waterloo, a battle we
> cannot win.
>
> Here's my plan: First we take a look at history, at the origins of machine
> translation and symbolic AI. Then I develop a fairly standard critique of
> semantic models such as those used in GPT-3, which I follow with some remarks
> by Martin Kay, one of the Grand Old Men of computational linguistics. Then I
> look at the problem of common sense reasoning and conclude by looking ahead
> to the next post in this series, in which I offer some speculations on why
> (and perhaps even how) these models can succeed despite their severe and
> fundamental shortcomings.
>
> https://new-savanna.blogspot.com/2020/07/1-no-meaning-no-how-gpt-3-as-rubicon.html
>
> Bill Benzon
> bbenzon@mindspring.com
>
> 917-717-9841
>
> http://new-savanna.blogspot.com/ 
> http://www.facebook.com/bill.benzon 
> http://www.flickr.com/photos/stc4blues/
> https://independent.academia.edu/BillBenzon
> http://www.bergenarches.com 





_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php


Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.