
Humanist Discussion Group



Humanist Archives: Aug. 3, 2020, 8:38 a.m. Humanist 34.207 - on GPT-3

                  Humanist Discussion Group, Vol. 34, No. 207.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Jim Rovira 
           Subject: Re: [Humanist] 34.203: on GPT-3 (39)

    [2]    From: Mark Wolff 
           Subject: Re: [Humanist] 34.203: on GPT-3 (65)

    [3]    From: William L. Benzon 
           Subject: NEW SAVANNA: What Louis Milic saw back in 1966 [digital humanities] (21)


--[1]------------------------------------------------------------------------
        Date: 2020-08-02 16:41:42+00:00
        From: Jim Rovira 
        Subject: Re: [Humanist] 34.203: on GPT-3

Many thanks for these recent posts about GPT-3. I have to say those opening
sentences are remarkable. But since they don't "mean" anything to the
machine (and they don't), I don't think they teach us anything about
intelligence, at least not human intelligence, and probably not about
intelligence in any general sense. I think we need to remember one key
point:

- textual meaning exists for readers only.


I wanted to set that idea apart so that we can focus on it. It seems like
a remarkable thing to say, but textual meaning exists even for authors
only as readers of their own texts. We read our texts in advance, in our
heads, and then write them -- how far in advance is a matter of our
planning process, but the editing process we all go through tells us that
we're not perfect readers of our own texts during composition. Authors
only finish their works as readers of their own texts. That's when they
make their final decisions about "meaning," decisions embodied in the
editing choices they make. It's very difficult, if not impossible, to
write and interpret a sentence simultaneously, especially any sentence of
any complexity.

That's also why I reject the Turing test as telling us anything meaningful
about the machine. It's a test of human readers, not machine intelligence.

So the machine as a writer is not demonstrating any intelligence. I think
the machine as a writer demonstrates the sophistication of its programming
only. Now if the machine continued to write, -agonizing- over word choices
(can a machine agonize or experience emotion without an organic body? It
can simulate emotional reactions, but is that the same as -feeling- these
emotions?), writing complete, coherent stories, and then could have a
conversation with us about what the story -meant- after the fact -- best of
all, getting miffed if we misunderstood or criticized its story -- then it
would be demonstrating intelligence.

In other words, we only demonstrate human intelligence through stupidity.

Jim R


--[2]------------------------------------------------------------------------
        Date: 2020-08-02 14:02:30+00:00
        From: Mark Wolff 
        Subject: Re: [Humanist] 34.203: on GPT-3

On Aug 2, 2020, at 2:58 AM, Humanist  wrote:

> To my mind (to rephrase and expand), three questions arise:
>
> (1) What is 'intelligence'? Should we not be talking in terms of
> different intelligences? (See the recent research of the ethologists.)
>
> (2) How do we develop the artificial kind(s) according to its (their)
> own particular characteristics and constraints? From the evidence we
> currently have, what is utterly new, strange but somehow teases us
> intellectually, perceptually?
>
> and finally, to quote Marilyn Strathern from her discussion of Donna
> Haraway on cyborgs,[***]
>
> (3) "The question is the kind of connection one might conceive between
> entities that are made and reproduced in different ways - have different
> origins in that sense - but which work together."

I think we are running down a rabbit hole if we are trying to distinguish
artificial intelligence from human intelligence when it comes to writing.
Willard writes:

> I want to know, what do we learn about our own abilities, and what are the
> differences between the machine's offering and our own -- and again, by "our
> own" I mean the very best we humans can produce?

The problem with this formulation is that we don’t really know what “our own
abilities” are because humans always create within a world with things at hand.
When it comes to writing we can’t really isolate ourselves from the world and
determine what is human and what is not because we think in, through, and with
the world. We can historicize literary artefacts and explain their contexts, but
this in a way takes them out of the world.

Instead of trying to distinguish what a machine can do with language from what
humans can do, I think a more productive (and rather uncharted) line of inquiry
would be to see what humans can write with machines. The notion of distributed
cognition, where cognitive work is shared by humans and machines, is helpful
here. Does the way in which humans write physically (stylus, pen, typewriter,
keyboard) and the linguistic resources at their disposal (libraries,
dictionaries and thesauri, autocorrect apps) contribute to what they write? Of
course it does. And we can try to tease out how distributed cognition works when
it comes to writing, but a more humanist approach would be to explore what can
be written with various forms of technology as they constitute the “terroir” (to
borrow from Thomas Rickert) of the communicative act.

For humanists the question concerning intelligence is not epistemological. It is
ontological and rhetorical, in the sense that, ultimately, we use language in a
given situation to persuade each other about who we are. Who are we with
machines, and how can we express that in useful ways?

mw
--
Mark B. Wolff, Ph.D.
Professor of French
Chair, Modern Languages
One Hartwick Drive
Hartwick College
Oneonta, NY  13820
(607) 431-4615

http://markwolff.name/




--[3]------------------------------------------------------------------------
        Date: 2020-08-02 07:03:28+00:00
        From: William L. Benzon 
        Subject: NEW SAVANNA: What Louis Milic saw back in 1966 [digital humanities]

Willard – Here’s a post about Milic’s essay [-- which speaks to the 
question at hand. --WM].

BB

https://new-savanna.blogspot.com/2020/08/what-louis-milic-saw-back-in-1966.html


Bill Benzon
bbenzon@mindspring.com 

917-717-9841

http://new-savanna.blogspot.com/
http://www.facebook.com/bill.benzon
http://www.flickr.com/photos/stc4blues/
https://independent.academia.edu/BillBenzon
http://www.bergenarches.com





_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php


Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.