
Humanist Discussion Group


Humanist Archives: Dec. 4, 2019, 6:03 a.m. Humanist 33.456 - misinformation about AI

                  Humanist Discussion Group, Vol. 33, No. 456.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                Submit to: humanist@dhhumanist.org

        Date: 2019-12-02 14:17:12+00:00
        From: Robert Amsler 
        Subject: Re: [Humanist] 33.448: misinformation about AI

There are several factors causing AI to experience cycles of "boom" and
"bust" in the general media over time. This is not unique to AI. Many of
these factors affect other fields as well.

First, with increased investment in AI, there are now multiple well-funded
research establishments under pressure to produce the promised
revolutionary breakthroughs their funding was supposed to secure for their
funders. As every industry has felt the pressure not to be "left out" of
the advances suggested to be just around the corner, they have started yet
more research efforts. But research doesn't work well under pressure to
turn out breakthroughs on schedule. So the laboratories have been forced
to stage events that demonstrate some breakthrough to the public:
dog-and-pony shows to show off progress.

Such demos are carefully scripted events intended to demonstrate an advance
first, and explain the special conditions set up to make the demo appear to
be an advance second. These are not unbiased performances of the
technology. They are carefully engineered to rule out known gaps in the
system, to bridge those gaps with fixes installed in the system to make
sure it doesn't fail. A demo is hardly a "test" of a system. It is a "play"
produced by the research lab, a bit of drama, much the way any piece of
stagecraft is used to convey an impression to an audience.

And the audience is made up of visitors to the research lab: forced
encounters between the researchers and the company management who pay
their salaries; "opportunities" to demo their work to prospective funders
the laboratory is approaching for contracts; and lastly invited reporters,
engaged in a cooperative effort to promote news to the benefit of their
publication and of the research lab. Think of it as the debut of a new
dramatic work. What goes on in presenting demos is more like theater than
science.

Second, there is a severe language gap in trying to discuss AI, and I don't
mean in terms of the inability of AI systems to deal with natural language.
I mean we lack the vocabulary to describe what an AI system is doing. This
results in AI researchers using vocabulary taken from describing what human
beings are doing and applying it to their systems. It isn't their fault,
there are no words available to explain AI results apart from those used to
explain such results as if they had been produced by a human being. Verbs
such as "understand" and "learn" are applied because we lack suitable
scientific terminology for AI. The very tasks AI is seeking to solve are
the ones which no "machine" has ever solved. It should thus not be terribly
surprising that when a machine does something it has never been shown to do
before that we are at a loss for words to use other than those words we
have accepted to describe that activity when performed by human beings. But
this is a warning flag. Whenever a word such as "understand" is used to
apply to a machine, it has been extended well beyond its warranty. I would
venture to say that no AI has ever "understood" anything in the human
sense of that word.

"Understand" can only be used to apply to what doesn't yet exist, an
"artificial general intelligence". Talk about a cart before the horse.

Now, I believe we can invent word meanings to describe AI, but as part of
that process we have to be much more careful in distinguishing the words
we understand in describing human activity, especially human mental
activity, from the machine activity to which they utterly do not apply.

The word "reasoning" is closer to what machines do than "understanding". I
routinely write programs in which I note in the comments that the program
"read in the data and remembered it". What I mean when I write such a
comment is that text was input and stored so that it can be compared to
other text: a list of words, or a set of sentences, that can be compared
character by character to another list of words or sentences to reach a
"conclusion" that "it matches" or "it doesn't match".
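A minimal sketch of what such a comment describes (the function names and
word list here are my own illustration, not from any particular program):
the program "remembers" by storing, and "concludes" by exact comparison.

```python
# "Read in the data and remembered it": storage is all that
# "remembering" means here.
def remember(lines):
    return [line.strip() for line in lines]

# "It matches" / "it doesn't match": a character-by-character
# comparison, which is what string equality performs.
def matches(stored, candidate):
    return any(word == candidate for word in stored)

stored = remember(["apple\n", "banana\n", "cherry\n"])
print(matches(stored, "banana"))  # True: every character identical
print(matches(stored, "Banana"))  # False: one character differs
```

Nothing in this sketch resembles human remembering; the anthropomorphic
vocabulary is doing all the work.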

The verb "matches" is a good one to examine. Computers, including AIs,
basically have to reduce tasks to performing "matches". They are so much
more precise than what human beings do when performing a "match" that we've
transformed the way we communicate to bring our writing up to computer
standards. I have been working on entering data from a small town library
of books collected since 1897 that had a traditional handprinted card
catalog. One thing that surprised me was that because I've grown up in a
post-computer world, my sense of "correct" for a handprinted library
catalog card is different than that employed before computers became
commonly used for text entry and storage. The best way to describe the
difference is that what was considered correct before meant "could be
understood by a human reader" vs. today it being "will be recognized as a
match by a computer program". The "typo" (an interesting term in itself,
reflecting the imposition of typewriters or typesetting on human text; the
OED dates it to 1892, meaning an error) has no place in data intended for
use in a computer system today. We know that if we enter the wrong letter
in typing our name, the computer will simply print out "name not
recognized". It won't say "don't you mean 'Smith'?" More likely, armed
with a dictionary of "words", it will try to transform the name into a
word in its dictionary: "I suggest 'Smote'".
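Both behaviours can be reproduced in a few lines of Python. The name list
and dictionary below are my own illustrative assumptions, and the fuzzy
matching uses the standard library's difflib rather than any particular
product's algorithm:

```python
import difflib

# A name database that matches only exactly: one wrong letter and the
# computer simply reports failure, with no attempt to guess "Smith".
names = {"Smith", "Jones", "Brown"}
typed = "Smoth"  # a typo for "Smith"
if typed in names:
    print(f"Welcome, {typed}")
else:
    print("name not recognized")

# A spelling corrector with a dictionary of ordinary words may instead
# "helpfully" map the name onto the nearest dictionary word.
dictionary = ["smote", "booth", "wrote"]
suggestion = difflib.get_close_matches(typed.lower(), dictionary, n=1)
if suggestion:
    print(f"I suggest '{suggestion[0].capitalize()}'")  # I suggest 'Smote'
```

The second half is "matching" too, just with a similarity score in place
of strict equality; no understanding of names or words is involved.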

I find it very funny that we understand the foibles of computers in trying
to read human printed text typos very well--while simultaneously believing
that AIs will perform human-level "understanding" of text. Hasn't the
existence of the "typo" taught us what to expect from computers over more
than a half century of use? 'Autocorrect' is a good new word. It is often
used as a derogatory comment on what a computer does with text words. It's
a good word because it was added to the language to describe how computer
software works. It's just the sort of term human language needs to describe
the difference between the "AI" of spelling correction and the human word

On Mon, Dec 2, 2019 at 1:57 AM Humanist  wrote:

>         Date: 2019-12-01 20:56:53+00:00
>         From: Bill Benzon 
>         Subject: An Epidemic of AI Misinformation
> Willard and other denizens of the list,
> Here's an excellent article by Gary Marcus setting forth the limitations of
> current AI and exposing the hype that continues to surround the technology.
> Bill B
> https://thegradient.pub/an-epidemic-of-ai-misinformation/
> Bill Benzon
> bbenzon@mindspring.com
> 917-717-9841
> http://new-savanna.blogspot.com/ 
> http://www.facebook.com/bill.benzon 
> http://www.flickr.com/photos/stc4blues/
> https://independent.academia.edu/BillBenzon
> http://www.bergenarches.com 

Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.