
Humanist Discussion Group



Humanist Archives: Oct. 26, 2019, 7:44 a.m. Humanist 33.346 - what we're not ready for

                  Humanist Discussion Group, Vol. 33, No. 346.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Jim Rovira 
           Subject: Re: [Humanist] 33.343: what we're not ready for (42)

    [2]    From: Bill Benzon 
           Subject: Re: [Humanist] 33.337: what we're not ready for (208)

    [3]    From: William Pascoe 
           Subject: Re: [Humanist] 33.343: what we're not ready for (324)


--[1]------------------------------------------------------------------------
        Date: 2019-10-25 17:41:44+00:00
        From: Jim Rovira 
        Subject: Re: [Humanist] 33.343: what we're not ready for

Thanks so much for your contribution to this discussion, Tim. What occurs to me
now is that this question is badly phrased:

"Is reading (or writing) computational in nature?"

It's that "in nature" bit that's the real sticking point. That means that
whatever else reading and writing is, it's always also computational. I
think that's a problem, even aside from the problem you raise. If we don't
even understand if human thinking is computational in nature, how can we be
sure at all about reading or writing human written products?

But how about this question?

"Can writing (specifically, written literary texts) be interpreted as a
form of computation?"

That's a hermeneutic question that's limited, potentially valid, and
doesn't implicate us in bigger questions that don't seem answerable at this
point. Or we could put it this way:

"If we read written literary texts computationally, will we learn anything
of value?"

The answer to that question is, well, try it and let's see.

Jim R

On Fri, Oct 25, 2019 at 1:42 AM Humanist  wrote:

>
> This does mean that if work in DH did manage to sort out how
> to tell if computational writing or reading is real writing
> and reading, or just (very) good look-a-like writing and
> reading, to the satisfaction of scholars across the
> Humanities, this would be, I would say, an important
> contribution, and not just to the Humanities.
>
> Best regards,
>
> Tim



--[2]------------------------------------------------------------------------
        Date: 2019-10-25 17:19:27+00:00
        From: Bill Benzon 
        Subject: Re: [Humanist] 33.337: what we're not ready for


> On Oct 23, 2019, at 1:28 AM, Humanist  wrote:
>
>                  Humanist Discussion Group, Vol. 33, No. 337.
>            Department of Digital Humanities, King's College London
>                   Hosted by King's Digital Lab
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>        Date: 2019-10-22 15:22:43+00:00
>        From: Jim Rovira 
>        Subject: Re: [Humanist] 33.336: what we're not ready for
>
> Great response, Bill, and thank you for the details and clarification. I
> correct myself -- I'm not aware of that kind of work being widely conducted
> either.

I appreciate the apology, Jim. Beyond that, however, I am at a loss as to how to
respond to your further remarks and to the more recent and more extensive
remarks by Tim Smithers, which I have appended below.

In the post where I raised the issue I very carefully said "literary studies people
could begin ... to think about literary processes as somehow, in some measure,
computational in kind." I didn't say that literature was computational in
kind, but only that it was "somehow, in some measure" computational. And then
-- this is the important part -- I gave a reference and a link to a paper I
published a bit over a decade ago: Literary Morphology: Nine Propositions in a
Naturalist Theory of Form (PsyArt: An Online Journal for the Psychological Study
of the Arts, 2006, article 060608). It's available online here:
https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form

In that paper I set forth some ideas about the computational nature of literary
form that are the result of three decades of research. I've been thinking
about this for a long time and my views are fairly sophisticated. What you and
Tim have, in effect, told me is that that work is of no consequence, that I've
wasted my time. How can I respond to your uninformed comments except to ignore
them?  If neither of you has time to read a long and difficult paper, I
understand that. As far as I am concerned, however, that implies that you are
not prepared to engage the issue in a serious way. OK. You're busy, you've
got your own research program, teaching, family, and so forth. Your plate is
full.

But here's the problem: As far as I can tell, the whole of digital humanities,
or whatever it is, is pretty much like that. Here are some remarks I used to
preface an old post I recently bumped to the top of the queue at my blog:

> "If you read far enough you'll see me point out that many of the tools
currently used in computational criticism have their origins in machine
translation, yet the effort to understand language computationally has all but
been ignored in computational criticism and DH more generally. Since the field
has chosen "digital humanities" as its name, however fitfully, it seems to me
this calls for a bit of deconstructive questioning: Why ignore (some of) the
deepest lines of investigation implied by the term you've adopted for your
inquiry? Is being au courant so important that you're willing to toss the baby
overboard so that you can splash about in the bath more freely? I leave such
analysis to the reader."

In that post I quote from some articles Matthew Kirschenbaum has written about
the origins of the term, "digital humanities". The post is entitled
"Whatâ's in a Name? -- 'Digital Humanities' [#DH] and 'Computational
Linguistics' and here's the link: https://new-
savanna.blogspot.com/2016/05/whats-in-name-digital-humanities-dh-and.html


Best,

BB

> I suspect the problem might be that language isn't really computational. It
> doesn't unfold word after word. One metaphor is that it's a garden of
> forking paths, but even that is too linear. A computer could manage that. A
> word isn't a meaning -- it's a range of denotations and connotations that
> are continually creating new paths that simultaneously create and
> contradict others. It's a series of self-involved and self-defeating loops.
> In computational terms, the average literary text is a mass of system
> crashes. What we might do instead is ask what kinds of approaches to
> literary texts most resemble forms of computation already. Might be
> tempting to say "New Criticism," but so many of them were in love with
> paradox: it's not interesting until the system crashes. Maybe some kinds of
> formalism, especially perhaps Russian formalism, and perhaps this might be
> an interesting way to revive myth criticism, say, Frye?
>
> However, have you looked into Robert Brandom?
>
> Jim R
>

Reply from Tim Smithers:

>       Date: 2019-10-25 05:39:20+00:00
>        From: Tim Smithers 
>        Subject: Re: [Humanist] 33.337: what we're not ready for
>
> Dear Bill and Jim,
>
> To me, the question "is languaging (writing, for example, but
> not just writing) computational?"  smells a lot like the
> question "is intelligence computational?"
> In AI (and to some extent Cognitive Science) this second
> question has resulted in much heated but empty and
> unproductive debate and argument and little else, I think.
>
> To be more useful both these questions need to be cast as an
> hypothesis to be empirically investigated, and accompanied by
> a convincing and practical programme for how we might do this
> investigation.
>
> But, before we rush off to do this, it would be worth
> noticing, I think, that AI does not investigate its version of
> this hypothesis: "intelligence is computational in nature."
> Most people in AI simply presume intelligence is
> computational, and they always have done, and have mostly
> ignored the debate and argument.
>
> AI as a field of research is, I think, best understood as an
> investigation of intelligent behaviour (in its different
> forms) by trying to (re)create it in the artificial.
> [Digital] Computation is a very convenient medium to use to do
> this recreating in the artificial.  (And very attractive if
> you already believe intelligence is computational.)
>
> There is, however, a central hazard to this way of
> investigating intelligence that is inherent in trying to
> create things in the artificial.  Do we end up with what I
> call Artificial Light AI (AL-AI), or do we end up with what I
> call Artificial Flower AI (AF-AI)?  Like artificial light,
> AL-AI is the real thing, but created by artificial means,
> digital computation, for example.  AF-AI, on the other hand,
> like artificial flowers, looks like the real thing, but isn't.
> In the case of flowers it turns out we can have both natural
> flowers and we can have artificial flowers that are
> (sometimes) hard to tell apart from real ones.  In the case of
> light, both natural light and artificial light are the same
> thing, and cannot be different.  What of intelligence?  If
> created in the artificial must intelligence be the real thing,
> like light?  Or, is it like flowers, it might be AF-AI, and
> thus not the real thing, just a good look-a-like?
>
> From the point of view of engineering useful tools and
> devices, this question may not be important.  From the point
> of view of investigating our hypotheses about the
> computational nature of languaging and intelligence, it is
> important.  But, it is not an easy question to sort out.  How
> do we tell if it's real AL-AI intelligence, and not AF-AI? Or,
> how are we going to tell if it's AL-Languaging, done by
> computational means, or AF-Languaging, again, done by
> computational means?
>
> To illustrate this difficulty, a bit, a recent article in
> Quanta Magazine is useful, I think.
>
>   Machines Beat Humans on a Reading Test.  But Do They
>   Understand?
>    A tool known as BERT can now beat humans on advanced
>    reading-comprehension tests.  But it's also revealed how far
>    AI has to go
>
> (https://www.quantamagazine.org/machines-beat-humans-on-a-reading-test-but-do-
> they-understand-20191017/)
>
> BERT is a kind of trained (deep artificial neural network)
> machine learning tool.  The article explains how BERT (and
> variants) has been used to perform well, and better than most
> humans, on GLUE (General Language Understanding Evaluation)
> tasks, thus demonstrating reading comprehension, as measured
> by GLUE. Comprehension, but not, perhaps, understanding.  Not
> unless we want to say understanding is equivalent to GLUE
> measured comprehension, and therefore say it is real
> understanding, and not look alike understanding.
>
> I know reading is not writing, but if we have difficulties
> establishing if computational reading results in real
> understanding, why would we think establishing that
> computational writing is the real thing is going to be any
> easier?  And, it would seem strange if (real) reading
> comprehension turned out not to be computational in nature,
> but (real) writing did, or, of course, vice versa.
> Establishing that both reading and writing are computational
> in nature, or not computational in nature, is not going to be
> easy, I think.  But, this certainly doesn't mean good attempts
> to do this should not be encouraged.  I don't think current
> practices in AI are going to help much with this, however.
>
> This does mean that if work in DH did manage to sort out how
> to tell if computational writing or reading is real writing
> and reading, or just (very) good look-a-like writing and
> reading, to the satisfaction of scholars across the
> Humanities, this would be, I would say, an important
> contribution, and not just to the Humanities.
>
> Best regards,
>
> Tim
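
A concrete footnote to Tim's BERT and GLUE point: for anyone who wants to see
what "performing well on GLUE tasks" looks like in practice, here is a minimal
sketch using the Hugging Face transformers and datasets libraries. The
checkpoint name "bert-base-cased-finetuned-mrpc" is illustrative only; any
model fine-tuned on the MRPC (paraphrase) task could be substituted.

import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative checkpoint name; swap in any MRPC-finetuned model.
model_name = "bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# MRPC is one of the GLUE tasks: pairs of sentences labelled as
# paraphrases (1) or not (0).
data = load_dataset("glue", "mrpc", split="validation")

correct = 0
for example in data:
    inputs = tokenizer(example["sentence1"], example["sentence2"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    prediction = int(logits.argmax(dim=-1))
    correct += int(prediction == example["label"])

print(f"MRPC validation accuracy: {correct / len(data):.3f}")

A high score on a script like this is all that GLUE counts as "comprehension";
whether it amounts to understanding is exactly the question Tim raises.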

Bill Benzon
bbenzon@mindspring.com

917-717-9841

http://new-savanna.blogspot.com/ 
http://www.facebook.com/bill.benzon 
http://www.flickr.com/photos/stc4blues/

https://independent.academia.edu/BillBenzon

http://www.bergenarches.com 


--[3]------------------------------------------------------------------------
        Date: 2019-10-25 07:55:40+00:00
        From: William Pascoe 
        Subject: Re: [Humanist] 33.343: what we're not ready for

Trying not to write 100,000 words in an email... some thoughts on 'strong' AI
and meaning (not just useful applications modelled on human intelligence, like
neural nets for image matching):

# GOOD ANALOGIES

We still aren't sure what we mean by 'intelligent'. Rather than simply say,
'well, you can't try to make it until you define your terms', look at how, in its
history, the attempt to make it and the understanding of what we mean by it and how
it works go hand in hand in an iterative process. Eg: someone builds a computer
that can beat humans at chess, because intelligent things can play chess, right?
Then we learn that 'winning at chess' isn't really what we meant by
'intelligent', so we refine our understanding and try a different experiment,
from which we further learn what we don't and do mean by 'intelligence', and how
to build it (neural networks, non-computational autonomous cockroach robots,
etc). This is a methodological point. By trying to build it you understand it,
and vice versa; it's not as if one is a priori and the other a posteriori (it turns
out this process of attempting and learning is also crucial to
understanding/building intelligence).

A more pertinent question than 'Is intelligence inherently computational' or 'Is
intelligence necessarily not computational' is 'Can intelligence be implemented
computationally?' It has been implemented biologically in ways that are probably
not best described as 'computational', or at least are very difficult to emulate
with serial computation, but that doesn't necessarily mean you can't implement
intelligence computationally. 'Flight' is a great analogy for thinking about AI.
Like 'intelligence' it is an *ability* not a substance or a property. It works
as an analogy in many ways, but one of them is that you can implement flight
biologically with birds, but you can also implement it with metal and internal
combustion etc, so long as you have forward propulsion and an aerodynamically
shaped wing. Surprisingly, flight doesn't require flapping wings - so too we
might have artificial 'intelligence, Jim, but not as we know it'. Also it's not
as if there is a thing called flight inside a bird or aeroplane that can be
added or removed - rather it is an outcome of the functioning of a system. Also,
the account we make of flight, of *how and why* things can fly, is a whole bunch
of aerodynamics equations which don't look or feel much like a flying thing. So
too we shouldn't expect the account we give of how and why things are
intelligent to look or feel like intelligence - it will still not be adequate to
answer questions about whether my yellow looks and feels like your perception of
yellow. It will just explain how and why yellow can possibly mean something to
the intelligent entity such that they would act on it, etc.

# MORE THAN BEHAVIOUR

While things can be learned from behaviour, in psychology behaviourism is
outdated and there are many good arguments against it. Although we might learn
something from it, we shouldn't cling to a behaviourist attitude, such as the
Turing Test, in AI, if we have already encountered so many flaws and moved on
from it when studying intelligent things like ourselves and animals. Although we
aren't sure of the definition yet, to decide whether something is intelligent or
not we need a theory of intelligence, so that we can then check if the thing is
intelligent, not just observe its outward behaviour (which can never resolve
the question as to whether this is just a parlour trick or 'real' intelligence - a
question which demands a theory of what 'real' intelligence is, so that we can
check. I can see that the proof of flight is that we can see it fly, but it might
be that someone has rigged invisible strings - then what? What if all perception
is a trick of evil demons with brains in vats? - we are not trying to answer
that question here, so bear with me on the need-for-theory point). Eg: when I was
13 years old I saw an animatronic on a podium that was extraordinarily lifelike;
even the skin looked real. As I approached to see how it worked, it tipped
its hat, stepped off the podium and walked out the door. Just because I was
convinced by outward appearances that it was a robot did not prove it was a
robot. In fact it was human - we need some theory of what something 'is' (and
that is an understanding we work towards).

Rejecting the behaviourist attitude that the Turing Test has made so popular is
vital to progress in AI for deeper reasons. Without writing a whole thesis here
- my thesis is that a parsimonious theory of AI includes an account of learning,
free will, sentience, semiosis, intentionality, hopes and dreams, cognition,
memory and many of those things we consider 'human'. In accounting for one of
those things we end up accounting for the others. Let's see now - when we say
something is 'intelligent' we usually mean it is not just a machine performing
according to explicit instruction, it is not just automatically responding
exactly to the same stimuli every time (like a fish that must eat food in front
of its face, or the way we recoil in pain - unless our intelligence enables us
to see we should withstand it for a little while for future gain), but that
doing something different at different times, according to some reasoning about
what seems 'better', is symptomatic of intelligence. So it involves the entity
doing it for itself, not *merely* automatically (being programmed like a
computer, and doing things for yourself are not necessarily mutually exclusive
as many arguments assume).

# FREE WILL SYSTEMS

So then, take 'free will'. Looking for a *working*, pretty good, parsimonious
account, as we work through things and try them out, notwithstanding thousands
of years of debate among the world's greatest philosophers, how about we break
'free will' down, firstly, to the word 'free' and wonder how you would make a
'free' thing artificially? What could the word 'free' mean in machine, or
engineering, terms? Well, free from what? From the world. There is a basic assumed
distinction already, that there is the entity, and it is distinct from the world
and it is free from it (there's a lot to think about thermodynamically here and
this all-too-easy distinction is problematised later). Meaning its behaviour, or
what it does, is not determined by the world but by some process *internal* to
it. Note that already the *internal* is crucial to intelligence (if it involves
free will/choice etc.), not just the behaviour. What we mean by this term 'free'
in information engineering terms could just be that outputs are more determined
by internal processing and states than by outside conditions. Eg: flick a switch
and the light goes on or off - the inputs are directly mapped to outputs. That
is not free, but if the light sometimes doesn't come on, this system is not
completely determined by the world - so it is to some degree 'free' from it
(note philosophical arguments are often absolutist, but you can solve a lot of
problems just by accepting matters of degree). You could even measure how 'free'
systems are with some sort of equation of the probability of outputs given
inputs (though a 'free will' system might just happen to decide the same thing
every time - so there's more to work through there). So a system where the
output was determined by one of those hinged pendulums they use to illustrate
chaos theory, would be very 'free' of the world - but we wouldn't call it
'intelligent' or 'willing'.
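
To make that "equation of the probability of outputs given inputs" a little more
concrete, here is a toy sketch (one possible framing, using conditional entropy,
not anything specified in the notes linked below): it estimates, from a log of
(input, output) observations, how much the output still varies once the input is
known. The ordinary light switch scores 0 bits (fully determined by the world);
the flaky switch scores higher, i.e. is to some degree 'free' of its input.

from collections import Counter, defaultdict
from math import log2

def conditional_entropy(observations):
    """observations: a list of (input, output) pairs logged from the system."""
    by_input = defaultdict(list)
    for x, y in observations:
        by_input[x].append(y)
    n = len(observations)
    h = 0.0
    for outputs in by_input.values():
        p_x = len(outputs) / n  # how often this input occurred
        counts = Counter(outputs)
        # entropy of the outputs seen for this particular input
        h_given_x = -sum((c / len(outputs)) * log2(c / len(outputs))
                         for c in counts.values())
        h += p_x * h_given_x
    return h

# A deterministic switch: the same input always produces the same output.
switch = [("up", "on"), ("down", "off")] * 50
# A flaky switch: 'up' only sometimes turns the light on.
flaky = [("up", "on")] * 30 + [("up", "off")] * 20 + [("down", "off")] * 50

print(conditional_entropy(switch))  # 0.0  -- outputs fully determined by inputs
print(conditional_entropy(flaky))   # ~0.49 bits -- partly 'free' of its inputs

As in the text, a chaotic pendulum wired to the output would score very high on
this measure without being intelligent or willing, so 'freedom' in this sense is
at most a necessary ingredient, not the whole story.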

So much for freedom. The 'will' part of 'free will' is more complicated and,
paradoxically, in order to freely will something we must also be
pre-determined....

# AW HECK, I'VE GOT A JOB TO DO

Look, maybe 10 years ago, sometime after learning a great deal from Prof Cliff
Hooker, and after many months of rumination, I was walking through the car park
and saw how all this fits together. It wasn't much original thought - others have
figured out all the parts of the puzzle; it's just a matter of seeing how they fit
together and making an account of it. Sadly I had to go back to work fixing broken
email accounts, and it wasn't until earlier this year that I was able to write it
all down, albeit in sketchy note form, having been motivated to give free philosophy
courses to students at our Uni because philosophy was cancelled, and because
whenever I thought about dying it always seemed it would be such a shame if
someone had understood this ageless question but wasn't able to communicate it
to anyone before they shuffled off. So if anyone is interested in how and why
artificial intelligence is theoretically possible, here are those notes (sorry
they are incomplete and sketchy):
https://hri.newcastle.edu.au/phil/artificialsemiosis.php

Kind regards,

Dr Bill Pascoe
System Architect
Time Layered Cultural Map Of Australia
C21CH Digital Humanities Lab
c21ch.newcastle.edu.au

T: 0435 374 677
E: bill.pascoe@newcastle.edu.au

The University of Newcastle (UON)
University Drive
Callaghan NSW 2308
Australia

The University of Newcastle is in the lands of Awabakal, Worimi, Wonaruah,
Biripi and Darkinjung people.



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

