Humanist Archives: Oct. 25, 2019, 6:42 a.m. Humanist 33.343 - what we're not ready for

                  Humanist Discussion Group, Vol. 33, No. 343.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2019-10-25 05:39:20+00:00
        From: Tim Smithers 
        Subject: Re: [Humanist] 33.337: what we're not ready for

Dear Bill and Jim,

To me, the question "is languaging (writing, for example, but
not just writing) computational?" smells a lot like the
question "is intelligence computational?"  In AI (and to some
extent Cognitive Science) this second question has resulted
in much heated but empty and unproductive debate, and little
else, I think.

To be more useful, both these questions need to be cast as
hypotheses to be empirically investigated, and accompanied by
a convincing and practical programme for how we might do this
investigation.

But, before we rush off to do this, it would be worth
noticing, I think, that AI does not investigate its version of
this hypothesis: "intelligence is computational in nature."
Most people in AI simply presume intelligence is
computational, and they always have done, and have mostly
ignored the debate and argument.

AI as a field of research is, I think, best understood as an
investigation of intelligent behaviour (in its different
forms) by trying to (re)create it in the artificial.
[Digital] Computation is a very convenient medium in which to
do this recreating in the artificial.  (And very attractive if
you already believe intelligence is computational.)

There is, however, a central hazard to this way of
investigating intelligence that is inherent in trying to
create things in the artificial.  Do we end up with what I
call Artificial Light AI (AL-AI), or do we end up with what I
call Artificial Flower AI (AF-AI)?  Like artificial light,
AL-AI is the real thing, but created by artificial means,
digital computation, for example.  AF-AI, on the other hand,
like artificial flowers, looks like the real thing, but isn't.
In the case of flowers, it turns out we can have both natural
flowers and artificial flowers that are (sometimes) hard to
tell apart from real ones.  In the case of light, natural
light and artificial light are the same thing, and cannot be
different.  What of intelligence?  If created in the
artificial, must intelligence be the real thing, like light?
Or is it like flowers: it might be AF-AI, and thus not the
real thing, just a good look-alike?

From the point of view of engineering useful tools and
devices, this question may not be important.  From the point
of view of investigating our hypotheses about the
computational nature of languaging and intelligence, it is
important.  But it is not an easy question to sort out.  How
do we tell if it's real AL-AI intelligence, and not AF-AI?
Or, how are we going to tell if it's AL-Languaging, done by
computational means, or AF-Languaging, again done by
computational means?

To illustrate this difficulty a bit, a recent article in
Quanta Magazine is useful, I think.

   Machines Beat Humans on a Reading Test.  But Do They
   Understand?
    A tool known as BERT can now beat humans on advanced
    reading-comprehension tests.  But it's also revealed how
    far AI has to go.

(https://www.quantamagazine.org/machines-beat-humans-on-a-reading-test-but-do-they-understand-20191017/)

BERT is a kind of trained (deep artificial neural network)
machine learning tool.  The article explains how BERT (and
variants) has been used to perform well, and better than most
humans, on GLUE (General Language Understanding Evaluation)
tasks, thus demonstrating reading comprehension, as measured
by GLUE.  Comprehension, but not, perhaps, understanding.  Not
unless we want to say understanding is equivalent to
GLUE-measured comprehension, and therefore say it is real
understanding, and not look-alike understanding.
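
For concreteness, here is a minimal sketch of what "performing
a GLUE task" amounts to in practice.  It assumes the Hugging
Face transformers library and some BERT checkpoint fine-tuned
on a sentence-pair task such as MRPC; the model name below is
a placeholder, not a real checkpoint.  The point is only that
the model reduces a pair of sentences to a score over labels.

    # Score a GLUE-style sentence pair with a fine-tuned BERT
    # model via the Hugging Face transformers library.
    import torch
    from transformers import (AutoTokenizer,
                              AutoModelForSequenceClassification)

    model_name = "some-bert-finetuned-on-mrpc"  # placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name)

    # MRPC asks: is the second sentence a paraphrase of the first?
    inputs = tokenizer("The cat sat on the mat.",
                       "A cat was sitting on a mat.",
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)
    print(probs)  # a distribution over labels, nothing more

Whether a high score from such a model is AL-reading or
AF-reading is, of course, exactly the question.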

I know reading is not writing, but if we have difficulty
establishing whether computational reading results in real
understanding, why would we think establishing that
computational writing is the real thing is going to be any
easier?  And it would seem strange if (real) reading
comprehension turned out not to be computational in nature
but (real) writing did, or, of course, vice versa.
Establishing that both reading and writing are computational
in nature, or not computational in nature, is not going to be
easy, I think.  But, this certainly doesn't mean good attempts
to do this should not be encouraged.  I don't think current
practices in AI are going to help much with this, however.

This does mean that if work in DH did manage to sort out how
to tell whether computational writing or reading is real
writing and reading, or just (very) good look-alike writing
and reading, to the satisfaction of scholars across the
Humanities, this would be, I would say, an important
contribution, and not just to the Humanities.

Best regards,

Tim



> On 23 Oct 2019, at 07:28, Humanist  wrote:
>
>                  Humanist Discussion Group, Vol. 33, No. 337.
>            Department of Digital Humanities, King's College London
>                   Hosted by King's Digital Lab
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2019-10-22 15:22:43+00:00
>        From: Jim Rovira 
>        Subject: Re: [Humanist] 33.336: what we're not ready for
>
> Great response, Bill, and thank you for the details and clarification. I
> correct myself -- I'm not aware of that kind of work being widely conducted
> either.
>
> I suspect the problem might be that language isn't really computational. It
> doesn't unfold word after word. One metaphor is that it's a garden of
> forking paths, but even that is too linear. A computer could manage that. A
> word isn't a meaning -- it's a range of denotations and connotations that
> are continually creating new paths that simultaneously create and
> contradict others. It's a series of self-involved and self-defeating loops.
> In computational terms, the average literary text is a mass of system
> crashes. What we might do instead is ask what kinds of approaches to
> literary texts most resemble forms of computation already. Might be
> tempting to say "New Criticism," but so many of them were in love with
> paradox: it's not interesting until the system crashes. Maybe some kinds of
> formalism, especially perhaps Russian formalism, and perhaps this might be
> an interesting way to revive myth criticism, say, Frye?
>
> However, have you looked into Robert Brandom?
>
> Jim R
>
> On Tue, Oct 22, 2019 at 1:30 AM Humanist  wrote:
>
>>
>> I hesitate to offer that passage because, as far as I can tell, Moretti's
>> not
>> calling for the kind of theoretical inquiry I've been referring to, though
>> what he IS calling for interests me a great deal. I quote it, though,
>> because it
>> does point up pretty much the same issue. To invoke a cliche, computation
>> is
>> always the bridesmaid, never the bride.
>>
>> BB
>>



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

