3.587 supercomputing; how computers work, cont. (220)

Willard McCarty (MCCARTY@vm.epas.utoronto.ca)
Sun, 15 Oct 89 18:18:55 EDT

Humanist Discussion Group, Vol. 3, No. 587. Sunday, 15 Oct 1989.


(1) Date: Fri, 13 Oct 89 20:53:32 EDT (33 lines)
From: amsler@flash.bellcore.com (Robert A Amsler)
Subject: I am not a magnetic field, I am a number.

(2) Date: Fri, 13 Oct 89 21:04:23 EDT (57 lines)
From: nsabelli@ncsa.uiuc.edu (Nora Sabelli)
Subject: Re: 3.580 supercomputing, etc., cont. (89)

(3) Date: Fri, 13 Oct 89 23:48 EDT (19 lines)
From: "Sterling Beckwith (York University)" <GUEST4@YUSol>
Subject: RE: 3.576 supercomputing the humanities, cont. (174)

(4) Date: Sat, 14 Oct 89 17:16 CDT (81 lines)
From: "John K. Baima" <D024JKB@UTARLG>
Subject: Supercomputers

(1) --------------------------------------------------------------------
Date: Fri, 13 Oct 89 20:53:32 EDT
From: amsler@flash.bellcore.com (Robert A Amsler)
Subject: I am not a magnetic field, I am a number.

Well..... Michael Sperberg-McQueen is also misleading you. If they
ain't 0's and 1's, they sure as heck ain't TRUE and FALSE.

It is true that computers don't have little 0's and 1's in them (any
more than human speech has letters in it); they have micro-electrical
charges and magnetized micro-regions, which other components attempt
to read as being sufficiently in one state or the other to constitute
a value of 1 or 0, but only instantaneously are those readings true.
The computer in action is continuously making those determinations
and THEN acting upon them as though they were determinations of a 1
or a 0, and then reapplying a micro-electrical charge or a
micro-magnetizing pulse to some other region in the computer to make
it, in effect, hold on to the conclusion. That is to say, the
computer is working with 0's and 1's, and only when it isn't working
are they fixed in analogue media. The computer converts physical
realities into abstract 1's and 0's and then transfers them to new
physical places where they are returned to physical realities.
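
A minimal sketch of that thresholding, with invented charge levels
and a made-up cutoff (real hardware does this with comparator
circuitry, of course, not software):

    /* Hypothetical illustration: reading noisy analogue levels as bits.
       The 0.5 cutoff and the sample values are invented for this example. */
    #include <stdio.h>

    static int read_bit(double level)
    {
        return level >= 0.5 ? 1 : 0;    /* is the charge "high enough"? */
    }

    int main(void)
    {
        /* Micro-charges are never exactly 0.0 or 1.0; the machine keeps
           re-deciding and then re-writing a clean level somewhere else. */
        double levels[] = { 0.93, 0.07, 0.81, 0.12, 0.64 };
        int i;
        for (i = 0; i < 5; i++)
            printf("%d", read_bit(levels[i]));    /* prints 10101 */
        printf("\n");
        return 0;
    }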

I think what Michael is trying to say is that all symbol systems are
made up by simply assigning values to sequences of these
`hypothetical' 1's and 0's (much the way one would claim that all
English words are made up of some set of characters) and that the
humanities has its options as to how it wants computers to assign
symbols to its hypothetical 1's and 0's.
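
To make the symbol-assignment point concrete, here is a small sketch
using ASCII, which is only one convention among many that could be
agreed upon:

    /* A character is just a conventional label for one bit pattern. */
    #include <stdio.h>

    int main(void)
    {
        const char *word = "Ave";
        int i, b;
        for (i = 0; word[i] != '\0'; i++) {
            printf("%c = %3d = ", word[i], word[i]);
            for (b = 7; b >= 0; b--)        /* print the eight bits */
                printf("%d", (word[i] >> b) & 1);
            printf("\n");
        }
        return 0;
    }
    /* A =  65 = 01000001
       v = 118 = 01110110
       e = 101 = 01100101 */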

(Then, to confuse things, we `could' discuss analogue computers
in which there `really' are no 1's and 0's--but I am not aware of
any analogue supercomputers...)

(2) --------------------------------------------------------------72----
Date: Fri, 13 Oct 89 21:04:23 EDT
From: nsabelli@ncsa.uiuc.edu (Nora Sabelli)
Subject: Re: 3.580 supercomputing, etc., cont. (89)

Let me come to the support of Michael Sperberg-McQueen. Willard McCarty
says:

> I think it is more
> accurate to say that our problems are considerably more difficult, and
> only very recently has the technology become sufficiently advanced to
> have a broad appeal among humanists.
>
Please do not confuse the problems that natural scientists solve
with the problems that the natural sciences pose. Are biology's
problems (from the human genome to neural networks) less complicated
than many (most?) of the humanities' problems? Is the world's climate?

I don't think the argument is very productive; both sets of problems
are, if approached as a whole, insurmountable. Does it matter which
one is more unsolvable if approached the wrong way?

Let me relate a story, true as far as I can tell, about the great
British neurophysiologist Claude Sherrington, famous among other
things for Sherrington's turtle. This turtle was a small electronic
device, mounted on wheels, that was governed by complex circuitry. In
essence, when its batteries were almost empty, the turtle would move
towards a light source and bask in it (feed); once its batteries were
full, it would move towards a quiet dark place and wait (sleep), until
it was time to repeat the cycle. Needless to say, Sherrington was
studying feedback mechanisms in the brain and was proceeding to test
his understanding by observing the behaviour of a circuit based on it.

Somebody was trying one day to get Sherrington to make generalizations
about the turtle's intelligence, whether it was alive, etc. Finally,
in exasperation, Sherrington said: I am modeling the brain, and my
model fits my question. "A dish of cold porridge is an acceptable
model of the brain if you are interested in how the brain reacts to
concussion."
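
The feed/sleep cycle described above can be sketched as a trivial
feedback loop; the thresholds and rates below are invented, since only
the structure of the feedback matters:

    /* A toy simulation of the turtle's feed/sleep cycle. */
    #include <stdio.h>

    int main(void)
    {
        double charge = 0.1;      /* battery level, 0.0 .. 1.0 */
        int feeding = 1, step;    /* 1 = seeking light, 0 = sleeping */

        for (step = 0; step < 12; step++) {
            if (feeding) {
                charge += 0.2;                  /* basking in the light */
                if (charge >= 1.0) { charge = 1.0; feeding = 0; }
            } else {
                charge -= 0.15;                 /* resting drains slowly */
                if (charge <= 0.2) feeding = 1;
            }
            printf("step %2d: %s, charge %.2f\n",
                   step, feeding ? "feeding" : "sleeping", charge);
        }
        return 0;
    }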

Which is to say that natural scientists have a long history of
breaking up their problems into parts that are manageable at a given
time, and extracting intuitive concepts about nature from each of
these subsets of the problem until their intuitive understanding
allows them to bootstrap to a slightly more complex level. Use of
technology is part of the process, by no means the whole process. You
make many mistakes on the way, and have your share of blind alleys,
but you always know a little more with each step. Very few problems
have no solution; all you have to do is define what an acceptable
solution is at each step.

Let's not be bigots for one side or the other. I belong to a
generation of natural scientists that had to become computer
knowledgeable to be able to solve our problems. Not everybody chose
that path, but those of us who did built the tools (and the
educational infrastructure) for our colleagues, past and present. If
you want, we can argue why humanists have not quite done it yet. The
complexity of the complete problem is not one of the reasons.

Nora Sabelli (nsabelli@ncsa.uiuc.edu)
(3) --------------------------------------------------------------27----
Date: Fri, 13 Oct 89 23:48 EDT
From: "Sterling Beckwith (York University)" <GUEST4@YUSol>
Subject: RE: 3.576 supercomputing the humanities, cont. (174)

Our "problems" are just as hard or harder than those physicists and
mathematicians have learned to do with expensive machines, so why shouldn't we
have a share in such machines? So runs the general drift of this discussion so
far. No one seems yet to have asked out loud what many must be thinking: Would
our "results", having stormed the citadels of supercomputing and wrested our
"fair share" from their keepers, have the same KIND of value, to us and to the
people whose labor generates the surplus that pays the shot? Or is there a
sense in which humanists might remain more humane by NOT buying into the
highest and most expensive technologies? (Why, I even know some Computer
Scientists who disdain to make any use of ArpaNet. Do you wonder why?)

Perhaps someone will catch my drift, and help pull me ashore.

Sterling Beckwith GUEST4@SOL.YORKU.CA.bitnet

(4) --------------------------------------------------------------85----
Date: Sat, 14 Oct 89 17:16 CDT
From: "John K. Baima" <D024JKB@UTARLG>
Subject: Supercomputers


I have read the discussion about supercomputers with interest and I
would like to throw in my two cents. First, no one has really defined
what a supercomputer is. Well, it's fast, right? I would say that a
computer does not really qualify as super *today* unless its speed is
measured in thousands of MIPS (millions of instructions per second).
To put that in perspective, the original IBM PC was about a 1/10 MIPS
machine. Today, the top-end 386 machines are about 6 MIPS. But even
that does not tell the whole story.

There has been some discussion about what goes on inside a computer,
but I think that is irrelevant here. When talking about the speed of
a computer's CPU, it is important to note how long it takes to
execute the various instructions it can perform. If we say that a 33
MHz 386 is a 6 MIPS machine, that means the 386 takes about 33/6, or
roughly 5.5, cycles to execute one "average" instruction. However,
some instructions, like floating-point division, may take hundreds of
cycles. That is why a math co-processor can speed up the math by a
factor of 100 or so. When it is said that supercomputers are good at
math, it means that they do not take hundreds of cycles to do one
floating-point instruction. Because of the parallelism that exists on
supercomputers, they can execute multiple floating-point instructions
per cycle.
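
The arithmetic behind those ratings, as a back-of-the-envelope sketch
(the MIPS figures are the rough ones quoted above, and the 4.77 MHz
clock of the original PC is from memory):

    /* Average cycles per instruction = clock rate / instruction rate. */
    #include <stdio.h>

    int main(void)
    {
        printf("33 MHz 386 at 6 MIPS:    %.1f cycles/instruction\n",
               33.0 / 6.0);                      /* about 5.5 */
        printf("4.77 MHz PC at 0.1 MIPS: %.1f cycles/instruction\n",
               4.77 / 0.1);                      /* about 48 */
        return 0;
    }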

Supercomputers are sometimes said to be not so good with I/O because
they are not so fast in that category. I think that the current Crays
*only* do I/O at about 10 MB/sec. A Sun SPARCstation can do about 2
MB/sec. An old XT can do about 170 KB/sec.

Why do people buy supercomputers? I think that it is usually because
they have a task that cannot be done with anything else. They have the
software and they have the data. There are tasks being done on
supercomputers that would take months or longer on anything else. Vicky
Walsh complains about having to "wait" 45 minutes to search the TLG.
Does anyone buy a supercomputer so that they do not have to wait 45
minutes every now and again? Besides, the 45 minutes represents how
long it takes to read the TLG CD-ROM. Ibycus does the actual search in
essentially no extra time. If one had the TLG on a hard disk (which
can be purchased for about $3,000 or less), the search could go much
faster on a micro today.
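
A rough check of that figure, using the transfer rates quoted above
plus two assumptions of mine: a nominal 150 KB/sec CD-ROM drive and a
corpus of about 400 MB (the actual size of the TLG disc is not given
here):

    /* Time to read a corpus sequentially at various transfer rates.
       The corpus size and the CD-ROM rate are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const double corpus_kb = 400.0 * 1024.0;    /* assumed 400 MB */
        const char  *names[]      = { "CD-ROM drive", "old XT disk",
                                      "SPARCstation", "Cray I/O" };
        const double kb_per_sec[] = { 150.0, 170.0, 2048.0, 10240.0 };
        int i;

        for (i = 0; i < 4; i++)
            printf("%-13s (%6.0f KB/sec): %5.1f minutes\n",
                   names[i], kb_per_sec[i],
                   corpus_kb / kb_per_sec[i] / 60.0);
        return 0;
    }

At the assumed rates that works out to roughly 45, 40, 3 and 1 minutes
respectively, which is at least consistent with the 45-minute figure.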

The programs being run on supercomputers, like it or not, usually
have some application to an industry. Most of a Humanist's computing
does not directly contribute to any industry. The value of what a
Humanist is doing with a computer is usually much more abstract, even
if it is of no less value. Thus, it is necessary to have a definite
goal in mind. I cannot imagine spending that kind of money without
having something definite in mind. Perhaps I am misunderstanding him,
but that
seems to be what Tom Thomson was saying: "Fight for the facilities you
want or at least for the tools that will allow you to develop the
facilities you want without spending any significant time on software,
rather than letting the physicists have all the computing resources!"

I think that Willard McCarty had it right when he said: <<it may strike
the observer as strange for tens of millions of dollars to be sunk into
a supercomputer. It may seem especially strange if, as witnessed here,
the question seems to be, "How can we use this computer?" and not "What
sort of computer do we need?">>

Humanists need software and uniform data first. For the latter, I
think that the TEI is absolutely essential and I hope that it
succeeds. I also think that Humanist support of software is
essential. The Perseus project is a good example of how to spend 3
million dollars; 3 million dollars of supercomputer time would most
likely be a very large waste of money. Perseus will leave something
of enduring value. The TLG has been more expensive than Perseus, but
it too is a project of enduring value.

My recommendation: Push for developing software to do what you want.
If the result cannot be achieved adequately on a small machine, then
ask for something larger. Remember that over the last 30 years the
price of a given level of computer performance has halved about every
two years, and there is no end in sight for that trend. Only buy
hardware when you need it; it is one of the worst investments
imaginable. The more expensive the hardware, the worse the investment.
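
The compounding effect of that trend, taking the two-year halving at
face value:

    /* Cost of a fixed amount of computing, halving every two years. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double years = 30.0, halving_period = 2.0;
        double factor = pow(2.0, years / halving_period);
        printf("After %.0f years the same performance costs about "
               "1/%.0f of its original price.\n", years, factor);
        return 0;
    }
    /* After 30 years the same performance costs about 1/32768 of its
       original price. */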

John Baima