3.550 supercomputing the humanities, cont. (47)

Willard McCarty (MCCARTY@vm.epas.utoronto.ca)
Fri, 6 Oct 89 20:15:02 EDT

Humanist Discussion Group, Vol. 3, No. 550. Friday, 6 Oct 1989.

Date: Thu, 5 Oct 89 21:04:04 EDT
From: amsler@flash.bellcore.com (Robert A Amsler)
Subject: Supercomputing for humanists

I find it surprising that I can disagree with both authors of the
recent supercomputer messages (Patrick Conner and Guy Pace) even
though it might be supposed they were themselves in disagreement
over supercomputer use.

First, using a supercomputer for data transfer isn't very sensible.
Supercomputers `crunch' things they keep internal to their memory.
In fact they may not be that good at data transfer, since that depends
on their I/O (input/output) capabilities--which are often no better
than those of many conventional computers.

However, to claim that there are no `text' tasks which a
supercomputer could improve on is much more irritating and reflects
a numeric bias I dislike.

Here are two text calculations that might consume a lot of cycles.

Suppose you wanted to output every collocation in a text
whose frequency as a collocation was at least one quarter of the
frequency of the least frequent isolated word in the collocation.
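For concreteness, here is a rough sketch of that calculation in
Python, assuming a `collocation' means an adjacent word pair (a
bigram) -- my reading, not necessarily the only one intended:

```python
from collections import Counter

def frequent_collocations(text):
    """Return the bigrams whose frequency is at least one quarter of
    the frequency of the rarer of their two component words."""
    words = text.lower().split()
    word_freq = Counter(words)
    bigram_freq = Counter(zip(words, words[1:]))
    return {bg: n for bg, n in bigram_freq.items()
            if 4 * n >= min(word_freq[bg[0]], word_freq[bg[1]])}
```

On a large corpus the counting and filtering are what eat the cycles,
and both parallelize naturally across chunks of text.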

Or, suppose you wanted to find the average distance in words between
all reoccurrences of words in a text. (That is, in the last sentence
the words `words' and `in' reoccur at distances of 5 and 7 from their
previous occurrences.)
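That second calculation could be sketched like this (a single linear
pass; the averaging over all reoccurrences is my interpretation of
the measure):

```python
def average_reoccurrence_distance(text):
    """Average distance in words between each occurrence of a word
    and its previous occurrence, over all reoccurrences in the text."""
    words = text.lower().split()
    last_seen, gaps = {}, []
    for i, w in enumerate(words):
        if w in last_seen:
            gaps.append(i - last_seen[w])  # distance back to previous use
        last_seen[w] = i
    return sum(gaps) / len(gaps) if gaps else 0.0
```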

Admittedly these aren't `really' nasty calculations, but suppose one
decided to explore the authorship characteristics of a set of texts,
wanting to know which pairs of texts had the most similar such
statistics, and then wanted that plotted graphically in real
time.... I think the supercomputer's nanoseconds would be suitably
occupied.
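The pairwise comparison at the heart of that authorship exercise
could be sketched as follows; representing each text by a vector of
statistics and using Euclidean distance are my assumptions (any
similarity measure would do), and the real-time plotting is left out:

```python
import itertools
import math

def most_similar_pair(stats_by_text):
    """Given a mapping of text name -> vector of per-text statistics
    (e.g. the two measures sketched above), return the pair of texts
    whose statistics are closest in Euclidean distance."""
    return min(itertools.combinations(stats_by_text, 2),
               key=lambda pair: math.dist(stats_by_text[pair[0]],
                                          stats_by_text[pair[1]]))
```

The number of pairs grows quadratically with the number of texts,
which is exactly where a lot of processors start to earn their keep.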