coding "strange" languages (37)

Willard McCarty (MCCARTY@VM.EPAS.UTORONTO.CA)
Wed, 12 Apr 89 20:52:12 EDT


Humanist Mailing List, Vol. 2, No. 824. Wednesday, 12 Apr 1989.

Date: 12 April 1989, 15:31:15 EDT
From: Brad Inwood (416) 978-3178 INWOOD at UTOREPAS
Subject: Coding for Sanskrit, Greek, etc.

I for one am happy to see HUMANIST as a whole deal with the Sanskrit coding
discussion. Most of what Dominik Wujastyk says about the prospects for
standardization there goes just as well for Ancient Greek. There have
been a few idiosyncratic formats in use: users of Lettrix have been
transliterating in their own private ways; Academicfont coding looked
as though it might approach being *a* not *the* standard, but then it
died the death of all niche products; Nota Bene's far superior approach
is (as far as I can see now) of limited use outside that programme's
environment. There are other word processors with Ancient Greek capability,
but I don't know anything about them. The TLG uses upper case lower ASCII
to represent Greek in its massive text base, but you have to interpret it
through specialized software to get it to look or print like Greek. In
the age of EGA, VGA and the HERC+, what are the prospects for a standard
representation of exotic alphabets? Probably very poor, unless some
central scholarly body throws its weight around, and even then ... What
is the effect on standardization of the competing approaches represented
by the PC world and the Mac? And what is the relevance of word-processing
vs text-base, text-retrieval, or database applications?
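[To make the TLG's upper-case-ASCII representation concrete, here is a minimal
sketch of the kind of letter mapping its interpreting software must apply.
The correspondences follow the TLG's "Beta Code" convention as I understand it;
accents, breathings and other diacritics are omitted, so this is illustrative
only, not a full decoder.]

```python
# Sketch of a TLG-style (Beta Code) letter decoder.
# Upper-case ASCII letters stand for Greek letters; diacritics
# (breathings, accents, iota subscript) are deliberately ignored here.

BETA_TO_GREEK = {
    "A": "α", "B": "β", "G": "γ", "D": "δ", "E": "ε", "Z": "ζ",
    "H": "η", "Q": "θ", "I": "ι", "K": "κ", "L": "λ", "M": "μ",
    "N": "ν", "C": "ξ", "O": "ο", "P": "π", "R": "ρ", "S": "σ",
    "T": "τ", "U": "υ", "F": "φ", "X": "χ", "Y": "ψ", "W": "ω",
}

def decode(text):
    """Map an upper-case ASCII string to unaccented Greek letters.

    Word-final sigma is rendered ς; characters outside the table
    (spaces, punctuation, diacritic marks) pass through unchanged.
    """
    out = []
    for i, ch in enumerate(text):
        greek = BETA_TO_GREEK.get(ch, ch)
        # Sigma takes its final form when no letter follows it.
        if ch == "S" and (i + 1 == len(text) or not text[i + 1].isalpha()):
            greek = "ς"
        out.append(greek)
    return "".join(out)

print(decode("LOGOS"))  # → λογος
```

[The point of the example is only that the text base itself stays in plain
ASCII; all the Greekness lives in a small, replaceable table like this one,
which is exactly why the display and printing software matters so much.]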

This may be no more than end-of-term melancholy, but I would guess that
the chances of standardization are very low... And while I am at it,
what would my HUMANIST colleagues do with 10 megabytes of Ancient Greek
text in Academicfont coding when my word-processing has changed over
to Nota Bene? Ah well, maybe I should be working on Latin texts anyway.