
Humanist Discussion Group



Humanist Archives: June 9, 2020, 7:37 a.m. Humanist 34.93 - notation, software and mathematics

                  Humanist Discussion Group, Vol. 34, No. 93.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Peter Robinson 
           Subject: Re: [Humanist] 34.89: notation, software and mathematics (50)

    [2]    From: Allen Riddell 
           Subject: hardware and programming style (68)


--[1]------------------------------------------------------------------------
        Date: 2020-06-08 20:05:57+00:00
        From: Peter Robinson 
        Subject: Re: [Humanist] 34.89: notation, software and mathematics

Folks, I’m entertained too by this memory of a long-ago time when SPITBOL was
not a misspelling.

I’d like to add a footnote to Michael S-M’s comment on how speedily one can now
make a simple concordance along the lines he gives, using XQuery etc., in just a
few minutes, while the venerable Oxford Concordance Program (OCP) needed a full
year to reach the same point. It is an engaging comparison, but (as Michael well
knows) it elides several points:

1. Once you had OCP you could generate a simple concordance in just a few
seconds. I know, I took Susan Hockey’s course way back then and did it.

2. One might compare rather favourably the single person-year spent developing OCP
with the many, many person-years spent developing XQuery, XSLT, and indeed XML
itself, and far more besides.

3. OCP actually did ONE important thing far better than XQuery and all the rest.
It understood, and gleefully processed, multiple overlapping hierarchies. You
could output (as I did) a concordance of The Merchant of Venice which located
every word according to its speaker, scene and act AND according to the page of
the edition. You can (as Michael will tell you) indeed do the same with XQuery.
But it is an order of magnitude more difficult in XQuery etc. And the orders of
magnitude escalate if you want, for example, to ask: tell me the range of
act/scene/line numbers on each page; or give me a list of the pages on which
Solanio speaks; or get me all the text between two page breaks; or create a
hypertext link that takes me to a particular page and tells the user just what
is on it; and so on (the sketch below gives a flavour of the bookkeeping
involved).
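
To make the overlapping-hierarchies point concrete, here is a minimal
sketch (mine, in Python, with invented TEI-like element names rather than
any actual edition) of the bookkeeping behind a query such as "list the
pages on which Solanio speaks" when page breaks are flat <pb/> milestones
cutting across the nested act/scene/speech hierarchy: the current page has
to be carried along as running state while walking the document in order.

    import xml.etree.ElementTree as ET

    # Toy document: speeches nest inside act/scene, but page breaks are
    # empty <pb/> milestones that ignore that hierarchy entirely.
    sample = """<play>
      <pb n="1"/>
      <div type="act" n="1"><div type="scene" n="1">
        <sp who="Solanio"><l>Why, then you are in love.</l></sp>
        <pb n="2"/>
        <sp who="Antonio"><l>Fie, fie!</l></sp>
      </div></div>
    </play>"""

    root = ET.fromstring(sample)
    pages_for = {}                 # speaker -> set of page numbers
    page = None                    # running state: the current page
    speaker = None                 # running state: the current speaker
    for elem in root.iter():       # elements in document order
        if elem.tag == "pb":
            page = elem.get("n")
        elif elem.tag == "sp":
            speaker = elem.get("who")
        elif elem.tag == "l" and speaker and page:
            # (simplification: speaker state is not cleared when an
            # <sp> closes, which is harmless in this toy sample)
            pages_for.setdefault(speaker, set()).add(page)

    print(pages_for)               # {'Solanio': {'1'}, 'Antonio': {'2'}}

The same query is certainly expressible in XQuery, but the
milestone-versus-hierarchy mismatch has to be handled explicitly in much
the same way, which is roughly the extra effort point 3 is gesturing at.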

This does lead me into a little bit of alternative reality speculation. Around
1987, the godparents of what became the TEI (Susan Hockey, Nancy Ide, Lou
Burnard, Michael himself — and others of course who were in the conversation,
which I was not) had to make a critical decision. What document modelling
architecture would the TEI choose as the base for its encoding? Of course, they
chose SGML, and not OCP. There are many excellent reasons why they made that
choice. (Interestingly at almost exactly the same moment I chose differently, to
use OCP as the basis for the transcriptions I was then making, and the first
versions of Collate were built on this markup. I recall David Robey making a
similar choice for the work he was doing on Italian verse). But still: one
wonders what might have happened if they had decided to build on OCP and not go
down the SGML (and then XML) route. As an aside: yes, SGML (and later XML) had
the immense advantage of document validation, and I think every one of us
involved in those early TEI years (and still now) saw that as critical. Perhaps
it is not as critical as we thought. JSON is sweeping the world, and has no
document validation facility.

Just some heretical thoughts.

Peter



--[2]------------------------------------------------------------------------
        Date: 2020-06-08 16:55:59+00:00
        From: Allen Riddell 
        Subject: hardware and programming style

I asked someone familiar with the work of the Thinking Machines
Corporation about this. (Thinking Machines worked on the Connection
Machine (https://en.wikipedia.org/wiki/Connection_Machine), which
emerged from research exploring alternatives to the von Neumann (VN)
architecture.)

Here's what he said re: alternative architectures:

 > ... I really only know three: VN, parallel architectures like the
Cray or the Connection Machine, and quantum computers. The mathematical
foundations of the first two I think are basically the same, bits and
bytes, etc. although the programming considerations are very different.
QC is a whole other ballgame. I don't know that the architectures are
affecting mathematical development, other than the techniques evolved to
consider questions of efficiency and describing algorithms. I do think
it safe to say that humans seem most naturally adapted to the
algorithmic/programming considerations of the VN architecture, although
we have yet to have birthed a generation of programmers who have only
ever seen the latter two architectures. We still learn/teach VN first,
so I guess we have to unlearn that in moving to the different architectures.
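
A small illustration of that last point (my sketch, not anything my
correspondent wrote): the same sum written first in the step-by-step style
the VN architecture encourages, then as an order-free reduction of the kind
a parallel machine such as the Connection Machine can evaluate as a tree.

    from functools import reduce
    import operator

    # VN-style: one accumulator in one memory cell, updated in a fixed
    # sequential order.
    def sum_sequential(xs):
        total = 0
        for x in xs:
            total += x
        return total

    # Order-free formulation: it says what is combined, not in what order.
    # Plain Python still runs this reduce left to right, but because
    # addition is associative a parallelising compiler or an array machine
    # is free to reduce it as a tree.
    def sum_reduction(xs):
        return reduce(operator.add, xs, 0)

    assert sum_sequential(range(10)) == sum_reduction(range(10)) == 45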

Best wishes,

AR

On 6/2/20 6:53 AM, Humanist wrote:
>                    Humanist Discussion Group, Vol. 34, No. 76.
>              Department of Digital Humanities, King's College London
>                     Hosted by King's Digital Lab
>                         www.dhhumanist.org
>                  Submit to: humanist@dhhumanist.org
>
>
>
>
>          Date: 2020-06-01 09:26:47+00:00
>          From: Willard McCarty 
>          Subject: hardware architecture and programming style
>
> Robert Sebesta, in Concepts of Programming Languages (11th edn, 2016),
> makes the case that the imperative style of programming characteristic
> of most languages now in use is more or less determined by the so-called
> von Neumann architecture of hardware common to our machines. Other styles,
> notably the functional one pioneered by McCarthy in Lisp, can be
> accommodated by translating them via a compiler or interpreter into
> imperative code. A number of early comparisons of software with
> mathematics draw the distinction based on the outgrown characterisation
> of the former as imperative, noting that the latter is descriptive.
> Still, there's the von Neumann structure to keep the distinction alive.
> Promises of other architectures are not difficult to find. Is it reasonable
> to expect that a very different relationship between software and mathematics
> would result if these alternative architectures were widely available?
>
> Those here who know what I am talking about must be able to spot the
> muddle I'd very much like to find a way out of. Is clear guidance to be
> had? As with any issue involving up-to-the-minute technological
> progress, the swarm of promises makes getting out of muddles difficult.
> All help appreciated, esp if it comes as reference to credible publications.
>
> Many thanks.
>
> Yours,
> WM
> --
> Willard McCarty (www.mccarty.org.uk/),
> Professor emeritus, Department of Digital Humanities, King's College
> London; Editor, Interdisciplinary Science Reviews
> (www.tandfonline.com/loi/yisr20) and Humanist (www.dhhumanist.org)



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php


Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.