Date: 1 March 1988, 09:32:49 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Conference on Honorifics (164 lines)
--------------------------------------------------------------------
From: John B. Haviland

Reed College, Anthropology and Linguistics
Portland State University, Department of Modern Languages
announce

A Conference on HONORIFICS

to be held in Portland, Oregon, April 8-10, 1988

Precursors and topics: Since classic early work on the social effects of pronoun choice, the respectful (or abusive) elaboration of vocabulary, and proposed universal features of the linguistic expression of politeness, there has been significant interest in the pragmatics of honorifics. Despite some programmatic optimism, these varying phenomena have not been systematically related. Nor have such studies been integrated with more recent research on natural conversation and discourse, on contextual cues, or on the pragmatics of inference, in which honorific phenomena are significant components. Similarly, detailed treatments of Asian, American, and Australian languages have shown the necessity for integrating honorific categories into the central core of grammatical description in individual cases, but work on the general theoretical significance of such categories for syntax and morphology has only recently begun. This conference proposes to direct these two complementary but divergent currents towards a more unified and general theory of honorifics in language. The prospects for significant results seem substantially heightened by a conference organized around both detailed ethnolinguistic and morpho-syntactic presentations, complemented by an explicitly comparative and theoretical workshop. We have solicited contributions which deal with the following topical areas:

A. Strategies for encoding honorific (and deprecatory) categories in morphology, syntax, and the lexicon of different languages, as well as the consequences--both typological and interactive--of such strategies: Here we have in mind different possibilities for "grammaticalizing" pragmatic aspects of language. A summary of the basic typological facts of honorific phenomena awaits more detailed descriptions of a wide range of particular languages.

B. Levels of honorifics within language structure: their encoding within subject, object, speaker, and addressee categories; in pronouns and classificatory elements; in lexical or phonological alternants; or in kinesic, paralinguistic, and gestural cues. In the best studied cases, honorific devices seem to range from speech levels and marked vocabulary (often, if not always, buttressed by marked kinesics as well), as in the cases of Javanese and Balinese, Australian "mother-in-law" languages, or Samoan respect vocabularies; to lexical alternation complemented by thoroughgoing syntacticization of honorific categories within verbal morphology (as in Korean or Japanese); to classification and gender-like systems incorporating honorific components (as in languages of the Americas, such as Nahuatl or Mixtec) or systems of "politeness affixes"; to familiar pronominal alternations, for example in the languages of Europe, or in a more elaborate form in the highly developed systems of address terms found in South Asia. What warrant, conceptual or analytic, is there for treating these phenomena together, either at the level of language use or language structure?

C. Honorifics (and their relatives) as tools, or fuel, for linguistic argumentation and theory: convincing evidence that honorific categories bear on syntactic theory derives from work on Japanese and Korean, but no doubt can be found elsewhere as well, once the syntactic and morphological facts are sufficiently well described.
Recent work on the 'iconicity' (or principled motivation) of morphological encoding may need revision on the basis of fuller treatment of such ill-understood categories as honorifics. In a somewhat similar way, the seemingly inescapably indexical and situated nature of honorific usage seems to pose a challenge to current semantic theories in a way that will be useful to explore.

D. Historical change and evolution of honorific devices in language: we have in mind such matters as: the evolution of terms of respect and disrespect; the gradual 'downgrading' of markers on honorific scales; the history of social arrangements (revolutions, or religious conversion, for example), and corresponding reflexes in the linguistic expression of honorific categories; and so on. The literature contains suggestive proposals connecting pronominal elaboration, for example, with particular political institutions or ideological movements--connections that might well be explored in the light of current ethnography.

E. Sociolinguistic and ethnographic aspects of honorific use, in relation to the place of language in wider social theory. Rich ethnographic description of deference and respect, in natural conversation and interaction, is a necessary precursor to a more widely applicable pragmatic account. Moreover, as socially potent linguistic tokens, honorifics serve as particularly striking instances of language as social action, moves in the discourse of power.

F. The interaction of honorifics with related but perhaps distinct phenomena: candidates include politeness, respect, deference, formality, and their opposites: rudeness, insult, abuse; joking relationships, avoidance relationships; formality, informality, casual and non-casual interaction.
Just as, at the level of language structure, honorific categories often attach to vehicles with different grammatical characters (pronoun and agreement systems, verbal or nominal classifiers, for example), in their social circumstances their use may be linked to specific situations, relationships, or contexts. The components of such interaction must be disentangled for a candidate theory to succeed.

G. Case studies of honorific systems: we will solicit studies from various sociocultural traditions and contexts, with an emphasis on the wider ethnographic significance of honorifics, and their placement within both a general linguistic or ethnographic framework, and an adequate social theory.

For further information, please contact
John Haviland
Linguistics and Anthropology
Reed College
Portland, OR 97202
(503) 771-1112, 771-1197
e-mail: via Unix, end with tektronix!reed!johnh
from Internet: johnh%reed%tektronix.tek.com@relay.cs.net
from UUnet: reed!johnh@uunet.UU.NET

*****END***** ========================================================================= Date: 1 March 1988, 09:39:01 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Magneto-optical disks (33 lines) ----------------------------------------------------- From: Humanists' Group [Taken from ``The Olympus Pursuit'', Vol. 7 No. 1 1988] Magneto-optical discs can now hold a prodigious amount within their 5 1/4-inch format. Now a magneto-optical disc drive offers the means to read, write and erase the data in these information warehouses. .. many have agreed that the future of data storage lies in the magneto-optical disc, a system that can hold as much data as 500 floppy discs ... .. After four years of work, Olympus researchers have answered the demands with a new magneto-optical disc drive ... .. Momentum gathered with the development of a laser-optical pickup system.
Still other contributions came from development of a new high-speed servo-motor, a linear motor, an error-correction controller and central processing unit. Since the field was new to Olympus, the project took longer than expected. The results, however, set the performance standards in an industry looking toward an estimated disc market of $2,200 million by 1990 ... For further information, contact the Optical Memory Department, Olympus, Tokyo. ========================================================================= Date: 1 March 1988, 09:43:02 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Nota Bene list (53 lines) --------------------------------------------------------- From: David Sitman Dear Colleagues, We have started a new electronic mail special interest list at Tel Aviv University, the Nota Bene discussion list. This list is intended as a forum for discussing issues related to the word processor/textbase, Nota Bene: exchanging tips, asking questions, trading programs, etc. This discussion list is made possible by the list serving software, LISTSERV. Here is how it works: a list of the members is kept at TAUNIVM. Whenever anyone sends electronic mail (or a file) to NOTABENE@TAUNIVM, the mail (or file) is automatically distributed to all list members. All requests to join the list, leave the list, etc., should be sent to LISTSERV@TAUNIVM, and NOT to NOTABENE@TAUNIVM. To join the list, or to correct your name on the list, send the SUB command:

SUB NOTABENE Your Name

For example, if I wanted to add my middle initial to my name (I don't), I would use the command:

SUB NOTABENE David B. Sitman

Note that you send your name, not your computer code. To leave the list, send LISTSERV@TAUNIVM the command:

UNS NOTABENE

LISTSERV will accept the commands either as interactive messages or as mail.
For example, from VM/CMS in BITNET you can send the command:

TELL LISTSERV AT TAUNIVM UNS NOTABENE

From VAX/VMS, you could use:

SEND LISTSERV@TAUNIVM SUB NOTABENE My Name

From any computer you can send electronic mail. Make sure that the command that you want to send to LISTSERV is in the mail body, not in the header (e.g., not in the SUBJECT field). The LISTSERV command processor ignores mail headers. We have also made available some programs written for Nota Bene use by Itamar Even-Zohar. Explanations of what these files are and how they can be obtained will appear shortly in the Nota Bene discussion list. David Sitman ========================================================================= Date: 1 March 1988, 09:51:40 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: SQL and Humanities Research (23 lines) --------------------------------------------------------- From: ROBERT E. SINKEWICZ (416) 926-7128 ROBERTS at UTOREPAS I am preparing a report on SQL Database Software and Humanities Research. It will be based in the first instance on our experiences in the Greek Index Project. If there are other installations of SQL software, large or small--mainframe, mini, or micro--being used by humanities projects, I would be interested in hearing of their existence. When my report is ready I will post it on the HUMANIST file storage area for anyone who may be interested. Robert Sinkewicz Greek Index Project Pontifical Institute of Mediaeval Studies ROBERTS@UTOREPAS ========================================================================= Date: 1 March 1988, 12:38:41 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: The Global Jewish Database and its software (133 lines) Dear Colleagues: Some time ago I made reference to the "Global Jewish Database," and in particular to its full-text retrieval software.
We were, as you'll recall, discussing the specific things that humanists want to do with large amounts of text, e.g., as provided on a CD-ROM. Since I've received a few enquiries about this Database, and since my Centre is one of the few places outside Israel connected to it, I thought I'd send along the following brief description. Willard McCarty mccarty@utorepas The Global Jewish Database is a 70-million-word electronic corpus in Hebrew accessed by means of a full-text retrieval system running on an IBM mainframe at Bar-Ilan University in Israel. The largest part of this database (53 million words, 253 volumes, 53,000 documents) comes from the "Responsa literature," a collection of rabbinical answers to questions about all aspects of Jewish life and culture. This literature spans the millennium from the tenth to twentieth centuries and originates from more than 50 different countries. It thus comprises a very rich storehouse of information on Jewish law, history, philosophy, ethics, customs and folklore, and is of interest for both religious and secular scholarship. The other parts of the database include the Hebrew Bible (fully vocalized, with marks of cantillation), the Babylonian Talmud, the Midrash literature, Maimonides' Code, and various medieval biblical commentaries. By arrangement with the Institute for Information Retrieval and Computational Linguistics at Bar-Ilan, the Database is accessible to anyone with a PC and a modem via a telecommunications network such as Telenet or Datapac. To date connections have been established at the Rabbinical Court of London, at the Institute for Computers in Jewish Life (Chicago, Illinois), and the Centre for Computing in the Humanities, Univ. of Toronto, besides 25 connections at various research centers in Israel. The connection in Toronto is the only one to a research institution outside Israel; so far it has been used both by members of the local Jewish community and by academics. 
Computing humanists without interest in the contents of the Database will, however, likely be interested in its software. Although this full-text retrieval software was developed at the Institute in Bar-Ilan especially for the Hebrew Database, it has recently been adapted for use with English language texts. It allows searching for words and phrases with positive and negative proximity operators on the word, sentence, and paragraph levels, and for combining different queries with boolean operators. Besides a full pattern-matching component (with left, middle and right truncations, wild cards, and boolean constraints on the occurrence of strings in the pattern), it embodies a sophisticated morphological component that allows the user to retrieve all grammatically derivative forms of a word automatically by specifying merely its lemma. Through the technique of "short-context disambiguation" the system also permits most unwanted occurrences of a word to be eliminated before full retrieval: occurrences are grouped and listed with their nearest verbal neighbours, either to the left or right, and from this list the user selects which are to be retrieved. Since in general the number of different neighbors that are also relatively frequent turns out to be quite small, the user can disambiguate a large number of occurrences quickly. For example, "see" causes "see", "sees", "saw", and "seen" to be retrieved, while the difference between "he saw" and "the saw" allows the homographic noun to be identified and eliminated immediately. Experiments have shown that native speakers have a high degree of proficiency in making such choices on the basis of very limited contexts. By statistically analyzing some of the retrieved relevant documents, a local feedback module can suggest to the user new words to be included in his query. 
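The grouping step behind "short-context disambiguation" can be sketched roughly as follows. This is a hypothetical Python illustration of the idea, not the Bar-Ilan software; the function name and data layout are invented for the example.

```python
# Hypothetical sketch of "short-context disambiguation": group the
# occurrences of a search term by the word immediately to its left,
# so unwanted senses (e.g. "the saw" vs. "he saw") can be discarded
# before full retrieval.  Not the Bar-Ilan implementation.
from collections import defaultdict

def group_by_left_neighbour(tokens, target):
    """Map each distinct left neighbour of `target` to the token
    positions where that occurrence of `target` appears."""
    groups = defaultdict(list)
    for i, tok in enumerate(tokens):
        if tok == target:
            left = tokens[i - 1] if i > 0 else "<start>"
            groups[left].append(i)
    return dict(groups)

text = "he saw the saw and then he saw another saw".split()
print(group_by_left_neighbour(text, "saw"))
# {'he': [1, 7], 'the': [3], 'another': [9]}
```

Because the set of frequent neighbours is usually small, the user can dismiss a whole group (here, every "the saw") with a single choice, which is the efficiency the description above claims.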
A local thesaurus element allows the user to define and later edit families of related terms to be considered equivalent and to be automatically retrieved whenever the family name is mentioned in the query. Specialized communications software with error-correction and compression modules handles all necessary protocol conversions and ensures that the rapid response time of the software is practically unaffected by the relatively slow 1200-baud speed of transmission across telephone lines. Experience with the Database from North America, for example, suggests that a 10-second response is not uncommon. Research for the Database was initiated and conducted from 1966 to 1975 by Prof. Aviezri Fraenkel and developed since then at the Institute for Information Retrieval and Computational Linguistics, Bar-Ilan University, Ramat-Gan, Israel 52100, by its Head, Prof. Yaacov Choueka, and colleagues. For information contact Prof. Choueka at that address; telephone: 03-53718716; e-mail: choueka@bimacs.biu.ac.il . The following references may be of interest:

Choueka, Yaacov. "Computerized Full-Text Retrieval Systems and Research in the Humanities: The Responsa Project." CHum 14 (1980): pp. 153-169.

..., R. Attar, N. Dershowitz, and A. S. Fraenkel. "KEDMA--Linguistic Tools for Retrieval Systems." Journal of the Association for Computing Machinery 25 (1978): pp. 52-66.

... "Linguistic and Word-Manipulation Components in Textual Information Systems." In The Application of Mini- and Micro-Computers in Information, Documentation and Libraries. Ed. C. Keren and L. Perlmutter. Elsevier, 1983. pp. 405-17.

..., S. T. Klein, and E. Neuwitz. "Automatic Retrieval of Frequent Idiomatic and Collocational Expressions in a Large Corpus." ALLCJ 4.1 (1983): pp. 34-38.

... and Serge Lusignan. "Disambiguation by Short Contexts." CHum 19 (1985): pp. 147-157.

See also "List of Publications (In English)", Institute for Information Retrieval and Computational Linguistics (The Responsa Project), Bar-Ilan Univ., August 1987.
*****END***** ========================================================================= Date: 1 March 1988, 12:49:12 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Computer scientists etc..... (66 lines) --------------------------------------------------------- From: Richard Giordano My original message regarding SNOBOL and large systems was not meant to attack computer scientists, nor did I even suggest that Amsler is a computer scientist. I should by way of explanation mention that I am completing a dissertation on the early history of data processing and the emergence of computer science in the early 1960s. Some of what I have observed in my research guides the way that I approach this issue of large systems, management, and the choice of programming languages. Giampapa is absolutely correct when he states that the selection of programming languages is not always reduced to concerns of the optimal language. He then mentions such factors as "future programming support, the availability of coders and the implementation of the language on a system, and the cost of maintaining such a system are important managerial concerns." From what I've gathered, they are not merely important concerns--they are absolutely critical in the choice of a language. Equally important is the language that the shop is already using. You'd be surprised what has been programmed in COBOL and FORTRAN, especially after ALGOL was introduced. You'd be even more surprised to learn which features in languages (like the use of based variables in PL/I) are forbidden because of what managers considered their intrinsic difficulty to learn. A fundamental difference between computer scientists and data processors is that the managers are mainly interested in getting the job done with the least amount of present or future disruption.
From what I can tell, programmers and managers don't work together in selecting a language, and programmers (coders) are really something like a high-tech proletariat (but let's not take the analogy too far). Anyway, if a shop is programming in FORTRAN or COBOL, it would take an awful lot of convincing to switch to some other language. Anyhow, this is how commercial shops work--and I stand by that. Academic environments may not function in exactly the same way. But let's face it, academic shops are small potatoes in the grand scheme of things. As for computer scientists... Geeze, I wasn't trying to insult anyone, and I am surprised at the vehemence of Giampapa's reply. In fact, upon re-reading my message, I still don't think I've said anything to "alienate computer scientists." But Giampapa uses a word that is critical in understanding the difference between a computer scientist and a practitioner--that word is "professional." Nothing is intrinsically wrong with that, but what about those who do computing, but who are not computer scientists? The answer, as it has panned out throughout the sixties and seventies, is that the others are outside of the core information circuits, outside of the means by which computer scientists self-consciously think of themselves as computer scientists, outside of the profession. C'mon, that's the nature of professions, and we all know it. What makes this relevant? Sometimes practitioners, trained in something other than computer science, come up with solutions that may not (or may) have anything to do with how computer scientists do things. What's to make one solution better than the other? Most of us on HUMANIST are not computer scientists, and many of the solutions to our problems may seem 'unprofessional'--I'm not putting words in anyone's mouth here. That brings me to my original statement that computer scientists are not always right. In one environment they may be right, but the world's a big place, and there is often more than one way to skin a cat.
========================================================================= Date: 1 March 1988, 12:52:25 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: PostScript non-standard fonts? (19 lines) --------------------------------------------------------- From: Susan Hockey Does anybody know of any PostScript versions of non-standard fonts which are either public domain or available on a site-licence? We are particularly interested in Greek, Hebrew, Arabic and Cyrillic as well as an extended Latin alphabet including Old English characters. Susan Hockey SUSAN@VAX.OXFORD.AC.UK ========================================================================= Date: 1 March 1988, 20:08:45 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Apology (21 lines) --------------------------------------------------------- From: Joe Giampapa To Rich Giordano and HUMANISTs: I would like to apologize for my reply to Rich's [2-26 12:33] letter. Also, I would like to retract the last paragraph of that letter -- it clearly renders me a hypocrite. I should not have responded with such strong language to something of which I did not know the full context. With sincere regrets, Joe Giampapa giampapa@brandeis.bitnet ========================================================================= Date: 1 March 1988, 20:15:24 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prolog and Lisp (72 lines) --------------------------------------------------------- From: Joe Giampapa Joergen Marker recently asked about what tasks Prolog and LISP are better suited for. Here is a brief description and attempted explanation off the top of my head. If people would like a more thorough explanation and some examples of code from each, I will have to get back to them after digging up some archived programs and notes. 
Prolog first (see also "Programming in Prolog" by Clocksin and Mellish, Springer-Verlag): Prolog creates and manages a database of facts. The construct analogous to other languages' "procedures" and "functions" is the Prolog "predicate", which basically evaluates to "true" or "false". The arguments to the predicate are the objects you are looking for. A common warm-up exercise is to create a Prolog genealogy. (Note that in standard Prolog, names beginning with a capital letter are variables, so the individuals are written in lower case, and the goals of a rule are joined by commas, not "and".)

human(mary, female).
human(john, male).
human(bill, male).
parent_of(mary, bill).
parent_of(john, bill).
offspring(bill).

parent(X) :- human(X, _), parent_of(X, Y), offspring(Y).

?- parent(Z).
Z = mary ;
Z = john.

A package that comes with Prolog is DCGs (definite clause grammars), which correspond roughly to context-free grammars. They are a sort of "macro" for Prolog, which use the above-described structure for doing parsing. Here is a sketch of a DCG grammar which will parse a Pascal program. The lower-case names in brackets are literal tokens which the parser looks for in the program text; the other names are further DCG expansion rules.

pascal --> [program], name, ['('], device_list, [')'], [';'],
           [const], constants,
           [type], types,
           [var], variables,
           [begin], body, [end], ['.'].
name --> [_].    % accept any single token
...
body --> ...

With some limited success, you can achieve versatility in parsing English (or any other "natural language") sentences. LISP is an nth-order language, which means you can derive any language from it (including Prolog). Prolog is only first-order and cannot be used to write a LISP interpreter (that is, without extreme hacks and "unpure" Prolog constructs). For this reason, you can write anything you want in LISP. However, LISP code usually does not conceptualize some constructs as elegantly as Prolog does. I hope this helps out. I can fill in any specific "holes" people have questions about.
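The way Prolog enumerates the alternative answers to a query such as parent(Z) can be mimicked in other languages with generators. The following is a rough, hypothetical Python analogue of querying a small fact base, with invented names; it illustrates the enumeration of answers, not Prolog itself.

```python
# Hypothetical sketch: mimicking Prolog's enumeration of alternative
# answers with a Python generator.  Each fact that is tried, rejected,
# and abandoned for the next corresponds to one backtracking step.
FACTS = [
    ("parent_of", "mary", "bill"),
    ("parent_of", "john", "bill"),
    ("parent_of", "bill", "ann"),
]

def parents_of(child):
    """Yield every X such that parent_of(X, child) is a fact,
    in the order the facts were asserted."""
    for relation, x, y in FACTS:
        if relation == "parent_of" and y == child:
            yield x

print(list(parents_of("bill")))  # ['mary', 'john']
```

A real Prolog system does much more (unification of variables, backtracking through chained rules), but the generator captures the one point at issue: a query is a stream of solutions, produced one at a time on demand.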
Joe Giampapa giampapa@brandeis.bitnet ========================================================================= Date: 1 March 1988, 20:19:12 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: More on the Global Jewish Database (28 lines) --------------------------------------------------------- From: John J. Hughes Perhaps those HUMANISTs who inquired about IRCOL's Global Jewish Database and the Responsa Project might find the detailed review of those projects in the June 1987 _Bits & Bytes Review_ (pp. 7-12) helpful. That review was written with the cooperation and assistance of Yaacov Choueka, so it is both up-to-date and accurate. I'll be glad to send a free copy to any interested HUMANISTs. They may contact me as XB.J24@Stanford or c/o Bits & Bytes Computer Resources, 623 Iowa Ave., Whitefish, MT 59937. John J. Hughes ========================================================================= Date: 1 March 1988, 23:29:30 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Programmers (47 lines) --------------------------------------------------------- From: amsler@flash.bellcore.com (Robert Amsler) There are two usages of the word "programmer" which need to be clarified. In the Data Processing/Business world, the term `programmer' is used for the basic generic worker, akin to `secretary': someone who does what they are told the way they are told to do it. These people do indeed work in `shops' and in horrible languages on business machines (i.e., IBM's) a lot. They make the bank payroll programs (and the phone system and its billings) work. The other use is that commonly employed in academia, which refers to anyone who writes programs. Typically such people ALSO originate the task the program is trying to accomplish (in the DP world this would mean they are more like ``systems analysts'' than ``programmers''). These people have varied backgrounds and may hold advanced degrees in all sorts of fields.
The terms `hacker' and `wizard' are used to refer to the most revered members of the group--hacker meaning someone who figures out how to do things without being told how, and wizard being someone who KNOWS how things can be done (having perhaps figured them out via hacking). The discussion about programmers and computer scientists seems plagued by a misunderstanding as to which group of `programmers' is being discussed. A language such as SNOBOL or LISP or PROLOG would have very little use among the members of the ``programming shop'' community. However, among the members of the latter group, these languages might be at least equal in their use to FORTRAN, COBOL, etc. It would be inappropriate to describe the ``programmers'' in EITHER environment as not amounting to much. On the business side, the world would cease to run if those programmers stopped doing their daily routine programming tasks. On the academic side, nearly all the advances in computer science have come from university environments. This includes time-sharing, computer graphics, several computer languages, database management, information science, etc. Robert Amsler ========================================================================= Date: 2 March 1988, 12:53:00 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: CD-ROM Configurations (43 lines) --------------------------------------------------------- From: Bob Kraft A week ago, Joe Giampapa asked for information about using CD-ROM technology, and John Gleason (PHI) responded to part of Joe's question by outlining the steps in preparing a laser disk. The costs of such mastering fluctuate, but seem to be in the neighborhood of $2000 to $3000 at present (John would know more accurately). That, of course, does not include the countless hours of formatting the files, preparing the ID tables, etc., prior to sending it off for mastering.
As for hardware and software to access a CD-ROM from an IBM-type microcomputer, one needs a reader (we use the Sony, which is also a standard component of the micro-IBYCUS Scholarly Computer) and an interface/controller card. The cost here is around $750, unless it has dropped recently, and doubtless varies from manufacturer to manufacturer or dealer to dealer. To use the new "High Sierra" format TLG "C" disk or the PHI/CCAT demonstration disk, one also needs the "DOS Extension" software from Microsoft that is available from various sources at minimal cost (we paid $10). Finally, for the aforementioned CD-ROMs, software to decode the IDs in the text, etc., is necessary to make things work effectively. CCAT is preparing such software, and plans to make it available as a basis for further cooperative development and/or use at fairly nominal cost (hopefully under $100). Of course, for those fortunate enough to have purchased a complete IBYCUS SC system ($4000, including Sony CD-ROM reader), all the necessary hardware and software comes with the package and works efficiently and impressively from the start. It will take some time before people with IBMs or other machines can come anywhere close to the IBYCUS standard. Bob Kraft (CCAT) ========================================================================= Date: 2 March 1988, 21:03:05 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: More on Prolog (70 lines) --------------------------------------------------------- From: Joe Giampapa I have received a few questions relating to my posting on Prolog yesterday. Here they are:

1. "Has Prolog been standardized?" As far as I can tell, the most "standard" description of the language is that given by Clocksin and Mellish, "Programming in Prolog" (Springer-Verlag). All implementations must satisfy the minimum requirements as stated there. Some implementations may have more features, but that book is the bare bones.
I think people at the University of Edinburgh would be better authorities on what the language "standard" is.

2. "Can it 'deal well' with non-Western-European languages?" I assume that this refers to the difficulties imposed by strings of characters in a non-Roman character set. What one would have to do is include in one's Prolog code routines that map from the ASCII character set (or whatever the machine supports) to one's own character fonts. The Prolog interpreter does not care what symbols it manipulates, so you could conceivably run Prolog in a special character display package. The degree of "well"ness or efficiency depends on how well this has been implemented at your site, and the sophistication of your tools. I do not know of any "alternate character set" packages which will keep your system "standard" in a portable way, if you do need them.

3. "Will it really be worth it to make the effort of learning Prolog?" Some tasks are well suited to Prolog, and the concise representation it can give you would be well worth while. Whether it is "worth it" depends on the nature of your work. It might well be worth some time and effort to become more familiar with it for future reference.

4. "Where is Prolog's strong point?" In a simplistic way, the strength of a Prolog system is in the way it searches for items in its database. As long as it can find something to satisfy its goals, it will proceed down to the next level, look there, and continue. If it cannot find something to satisfy its goal, it will "backtrack", which is a little complicated to explain. In practice, if you are parsing a sentence which can be parsed in more than one way, Prolog will automatically return the alternate parse trees.

5. "Could you offer a sample of [...] something that all of us might find useful - and explain what all the little knick-knacks mean? Then could you do the same for Lisp in another posting?" OK. What would be useful to all?
A complete parsing program with full documentation? Something else? Send me some suggestions and I will pick something from them. Finally, I would like to throw the door wide open for additional commentary. Some people know Prolog, and have sent me notes in response to my posting. Please, publicly correct me and add to what I say. ========================================================================= Date: 3 March 1988, 00:51:30 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Request for announcements of conferences; plan for same (20) Would anyone with an announcement of an upcoming conference or call for papers please send me the notice so I can post it on our file-server? My request applies to all such notices, whether they have been circulated on HUMANIST or not, except for notices of conferences that have already taken place. My plan is to keep current announcements on the server and to distribute to all of you only a brief summary of the upcoming events. Comments, objections, shouts of relief? Willard McCarty mccarty@utorepas ========================================================================= Date: 3 March 1988, 09:18:29 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Software for "roots"? (33 lines) --------------------------------------------------------- From: Tom Flaherty I have recently been bequeathed the "family tree" for my mother's side of my family. My uncle worked on it quite vigorously for about 40 years, so it now contains thousands of names/dates and a great deal of narrative information. I would like to organize all of this stuff (the amount of paper is incredible) into a "database" that will make it less forbidding and more accessible. 
Also, should I ever have the time and energy, I would like to be able to fill in a blank or two in the historical information, and I am now obligated to update the near end of the tree with current events -- births, deaths, etc. So, my question is: Does anyone know of any *good* system for genealogical record keeping? Ideally, it should provide for a virtually infinite number of entries and links AND be able to hold textual information (biographies) in some organized way. Probably, one of the off-the-shelf database packages could be used. I just don't know enough about them to choose the most likely products to investigate. Is this a hyper/card/text application? I have a PC clone but would like to hear about systems for any hardware. My main concern is to get the material into a "permanent" system. There is room in a life to do this sort of thing only once. Any advice will be greatly appreciated. Thanks. --Tom Flaherty flaherty@ctstateu.bitnet ========================================================================= Date: 3 March 1988, 19:23:00 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Software for roots (24 lines) --------------------------------------------------------- From: Francois-Michel Lang This may sound like an odd idea, but I know that the Mormon church has investigated precisely this problem and invested quite a lot of time and money into developing genealogical software. I have never seen or read about any of their methods, but I spoke with some LDS (Latter-Day Saints) Church members at a Logic Programming conference in Salt Lake City a couple of years ago, and learned of the LDS Church's interest in this sort of thing. That might be a place to start... 
Francois-Michel Lang Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7469 Dept of Comp & Info Science, U of PA lang@cis.upenn.edu (215) 898-9511 ========================================================================= Date: 3 March 1988, 19:27:57 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Genealogy (24 lines) --------------------------------------------------------- From: Mark Olsen Tom Flaherty's request for info on genealogy systems may be of more general interest, since historians frequently have to represent family linkages on large amounts of data. The best system for this seems to be GENESYS, developed by Mark Skolnick et al. You might consult "A Computerized Family History Data Base System" *Sociology and Social Research* 63 (April, 1979): 506-523. There is a good description of the application of this system in the Saguenay project by Gérard Bouchard, "The Processing of Ambiguous Links in Computerized Family Reconstruction" *Historical Methods* 19 (Winter, 1986): 9-19. Nominal record linkage and family reconstruction are areas where historians (bless our souls) have been developing computer methods in innovative and interesting ways, just in case we were getting too worried about the "literary ghetto" some HUMANISTS are afraid of. ========================================================================= Date: 3 March 1988, 19:33:27 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prolog and logic, Prolog and parsing (76 lines) --------------------------------------------------------- From: Michael Sperberg-McQueen Two points worth stressing about Prolog for humanists who think they may be interested in learning it: 1 Prolog, as its name implies, is an attempt to make it possible, or nearly possible, to write 'programs' that are nothing more than sets of statements expressed in first-order predicate calculus.
While the relations between Prolog and symbolic logic as you may have learned it in Philosophy 150 are not always obvious, they are there, and for people interested in symbolic logic Clocksin and Mellish's Chapter 10 ('The Relation of Prolog to Logic') makes fascinating reading. It clarifies the nature of the inferences a Prolog system can make, points out the various ways in which Prolog's version of logic loses nuances that can be important to the logical form of propositions in the predicate calculus, and develops a method for reducing predicate-calculus statements to Prolog clauses. (An appendix gives Prolog programs that will do the transformation for you, but it's worth doing it manually on some samples first.) That is, the logical underpinnings of Prolog as a system are almost as interesting as what you can do with it as a language. ***If you think symbolic logic is or can be beautiful, I think you'll like Prolog.*** It should be noted, though, that it can be hard to switch from the procedural, step by step style of analysis one acquires from other programming languages, to the non-procedural, declarative interpretation of Prolog programs. There turn out to be lots of things I know how to do step by step, that I cannot define readily in predicate calculus. And since Prolog programs have both a declarative and a procedural interpretation (that is, they *are* programs that are supposed to *do* things), working with Prolog can induce some intellectual dizziness. 2 Prolog provides a convenient way to parse expressions using Chomsky-like rewrite rules, and many implementations provide facilities for working with a special notation for such rewrite rules. (This is the 'definite-clause grammar' notation mentioned by Joe Giampapa.) 
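The top-down, left-to-right strategy behind such rewrite-rule facilities can be sketched in Python (a hypothetical recursive-descent fragment; the toy grammar and vocabulary are invented, and real DCGs use Prolog's difference lists rather than explicit remainders):

```python
# Each nonterminal consumes tokens from the left and returns the unconsumed
# remainder, or None on failure -- a rough analogue of how DCG rules are
# compiled into Prolog goals over difference lists.

def det(toks):
    return toks[1:] if toks and toks[0] == "the" else None

def noun(toks):
    return toks[1:] if toks and toks[0] in ("cat", "dog") else None

def verb(toks):
    return toks[1:] if toks and toks[0] == "sees" else None

def np(toks):
    # np --> det, noun.   |   np --> noun.
    rest = det(toks)
    if rest is not None:
        return noun(rest)
    return noun(toks)

def vp(toks):
    # vp --> verb, np.   |   vp --> verb.   (greedy here; Prolog would backtrack)
    rest = verb(toks)
    if rest is None:
        return None
    longer = np(rest)
    return longer if longer is not None else rest

def sentence(toks):
    # sentence --> np, vp.  Accept only if every token is consumed.
    rest = np(toks)
    if rest is None:
        return False
    return vp(rest) == []

print(sentence("the dog sees the cat".split()))   # True
```

Note how the parser commits to consuming input strictly from the left; that property is exactly what suits it to rigid grammars like those of programming languages.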
I am less enthusiastic about this than most people seem to be, because these facilities enforce a specific left-to-right parsing strategy that seems on the whole better suited for things like Pascal programs or SGML document component hierarchies than for natural language texts. (And even for such unnatural grammars I am having trouble finding a definite-clause notation that correctly parses an SGML document -- maybe my fault and not that of the notation, but still frustrating. Has anyone else done this sort of thing with better success? Write me if you have.) But even if one ignores the built-in 'grammar' notations and writes one's own parser, Prolog handles a lot of the details more conveniently (*NOT* faster!) than other languages. To parse a lot of text, I'd almost surely want to write a parser in some other language, for speed. But only after working out the parsing strategy with Prolog. ***For developing a parser, Prolog has a lot of advantages.*** Finally, a note on products: Borland's Turbo Prolog is very nice, seems to run fast, and has a convenient (though complicated) multiwindow interface. But they achieved the speed by leaving out some of the key features of Prolog, qua logical system. You may not feel that you've thrown your money away by buying Turbo Prolog, but if you are interested in logic, then you will eventually want a fuller implementation. (I have not compared them all, but have been happy with Arity on the PC and the Waterloo Core Prolog interpreter on our IBM mainframe.) Michael Sperberg-McQueen, University of Illinois at Chicago ========================================================================= Date: 3 March 1988, 19:41:06 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Software for roots (28 lines) --------------------------------------------------------- From: David Nash I would like to second the inquiry of Tom Flaherty (message of 3 March 1988, 09:18:29 EST).
For MS-DOS, and the Mac, the main contenders I know about are: (1) Quinsept's Family Roots (Mac version reviewed favourably in Feb. 1988 MacWorld pp.213-4, but not mentioning that the complete equivalent of the MS-DOS version is not yet available); (2) Personal Ancestral File from the Church of the Latter-Day Saints. I would like to hear any information about the latter, not having yet tried to pursue it through Salt Lake City. There was a newsletter "Genealogical Computing" published in Virginia; maybe still exists. -DGN ========================================================================= Date: 3 March 1988, 19:45:22 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: More roots to follow.... (15 lines) --------------------------------------------------------- From: cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) I have seen advertised (but know nothing about) a program which I believe was originally developed for the Mormon Church; and I suspect that inquiries to their genealogical society in Salt Lake City might be fruitful. ========================================================================= Date: 3 March 1988, 23:32:15 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: More roots (20 lines) --------------------------------------------------------- From: Norman Zacour I used to see advertisements for a product called Family Tree, which began: "Trace 10 generations of ancestors or pet pedigrees." It was sold by Systems Consultants, Inc., P.O.Box 37076, Raleigh, NC 27627, ph. 800-334-0854, ext.508. But all this was two years ago, and for all I know the company may have disappeared since then. I should think that historians might be interested in some good software for pet pedigrees. Do tell us what you find out. 
Norman Zacour (Zacour@Utorepas) ========================================================================= Date: 3 March 1988, 23:34:59 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Of the tracing of roots there is no end (18 lines) --------------------------------------------------------- From: Norman Zacour Now there is a program called Roots II, recently advertised in PC Magazine. "Organize your family tree and print camera-ready family books containing charts, text and indexes. Store, retrieve and display thousands of family facts with biographical sketches and source documentation. Lightning-fast searches and sorts. 250 page manual. Free brochure. Price: $195 (US)." Commsoft, 2257 Old Middlefield Way, Ste.A, Mountain View, CA 94043 (ph. 415-967-1900). ========================================================================= Date: 4 March 1988, 09:24:39 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Computer-mediated communication (22 lines) --------------------------------------------------------- From: RD_MASON@VAX.ACS.OPEN.AC.UK I wonder if I might use this forum to announce a conference hosted by the Open University, U.K. on computer-mediated communication in distance education. It is to take place Oct. 8 - 11 and will consist of a workshop on the Open University's large-scale use of the conferencing system CoSy on an Information Technology course with 1500 distance students, and a colloquium with invited educators and researchers involved in a variety of educational applications of CMC. Because of limited accommodation here, numbers will be limited to 100 participants. If you are interested and want more details, please send a mail message to RD_Mason@uk.ac.ou.acsvax. [that's rd_mason@vax.acs.ou.ac.uk for those on Bitnet &c.
-- WM] ========================================================================= Date: 4 March 1988, 09:31:39 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prolog and Lisp (24 lines) --------------------------------------------------------- From: Hans Joergen Marker In my earlier note I posed the question "what kind of tasks are so much better handled in Prolog and Lisp than in C....?". In Joe Giampapa's answer I see no reference to C. An important question (at least to me) is the performance of the generated code. What I find in the answer is examples of specific program syntax, which naturally could not be used unaltered in C, but which on the other hand could be replaced by structures, pointers and functions in a quite legible way. The performance of a C solution to a specific problem would be considerably better (I suspect) than the solution to the same problem in Lisp or Prolog. So the question remains: "Are there problems out there that you can't solve in C, but only in other languages?" ========================================================================= Date: 4 March 1988, 09:35:08 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Overwhelmed by the roots-software (32 lines) --------------------------------------------------------- From: Tom Flaherty I am overwhelmed! To my posting to HUMANIST requesting information about computerizing a family tree, I have received 23 replies (in less than 24 hours). Most of them have suggested sources of programs, a few have asked me to share my findings with them. Given the apparent interest, I will compile a list of the suggested software and sources thereof and post it to HUMANIST. I have only had time to glance at the responses so far, but I can report that the "Personal Ancestral File" software from the Church of Latter Day Saints was the most frequently suggested program. (Why didn't I think of them? It seems so obvious now.) 
It may be a short time before I have the opportunity to sort this out and report back, but I do want to express my appreciation to all of those who have responded or may yet do so. It seems that HUMANISTs really are just that. Many Thanks. --Tom p.s. Please do continue to send me ideas. I will include all I receive in my "list." ========================================================================= Date: 4 March 1988, 09:51:19 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Selecting a programming language (71 lines) --------------------------------------------------------- From: John C. Hurd The remark about lack of humor on the network a while ago and the current discussion of the merits of various programming languages reminded me of the appended analysis. It includes APL, one of my favorite languages, but omits SNOBOL4(+), my very favorite. John Hurd HURD@UTOREPAS

Selecting a Programming Language Made Easy
Daniel Solomon & David Rosenblueth
Department of Computer Science, University of Waterloo
Waterloo, Ontario, Canada N2L 3G1

With such a large selection of programming languages it can be difficult to choose one for a particular project. Reading the manuals to evaluate the languages is a time-consuming process. On the other hand, most people already have a fairly good idea of how various automobiles compare. So in order to assist those trying to choose a language, we have prepared a chart that matches programming languages with comparable automobiles.

Assembler - A Formula I race car. Very fast, but difficult to drive and expensive to maintain.
FORTRAN II - A Model T Ford. Once it was king of the road.
FORTRAN IV - A Model A Ford.
FORTRAN 77 - A six-cylinder Ford Fairlane with standard transmission and no seat belts.
COBOL - A delivery van. It's bulky and ugly, but it does the work.
BASIC - A second-hand Rambler with a rebuilt engine and patched upholstery. Your dad bought it for you to learn to drive.
You'll ditch the car as soon as you can afford a new one.
PL/I - A Cadillac convertible with automatic transmission, a two-tone paint job, white-wall tires, chrome exhaust pipes, and fuzzy dice hanging in the windshield.
C - A black Firebird, the all-macho car. Comes with optional seat belts (lint) and optional fuzz buster (escape to assembler).
ALGOL 60 - An Austin Mini. Boy, that's a small car.
Pascal - A Volkswagen Beetle. It's small but sturdy. Was once popular with intellectuals.
Modula II - A Volkswagen Rabbit with a trailer hitch.
ALGOL 68 - An Aston Martin. An impressive car, but not just anyone can drive it.
LISP - An electric car. It's simple but slow. Seat belts are not available.
PROLOG/LUCID - Prototype concept-cars.
Maple/MACSYMA - All-terrain vehicles.
FORTH - A go-cart.
LOGO - A kiddie's replica of a Rolls Royce. Comes with a real engine and a working horn.
APL - A double-decker bus. It takes rows and columns of passengers to the same place all at the same time. But, it drives only in reverse gear, and is instrumented in Greek.
Ada - An army-green Mercedes-Benz staff car. Power steering, power brakes and automatic transmission are all standard. No other colors or options are available. If it's good enough for the generals, it's good enough for you. Manufacturing delays due to difficulties reading the design specification are starting to clear up.

========================================================================= Date: 4 March 1988, 14:41:21 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Hyper-roots (19 lines) --------------------------------------------------------- From: Mary Peterson I've read the materials about software for family trees, etc., and I still think HyperCard on the Macintosh is the best choice for this application.
Mary Peterson University of New Hampshire M_PETERSON@UNHH ========================================================================= Date: 4 March 1988, 14:43:00 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Reply to Hans Joergen Marker (27 lines) --------------------------------------------------------- From: Joe Giampapa Q: "Are there problems out there that you can't solve in C, but only in other languages?" [emphasis on performance] I would say not. C gives one amazing control over a computer system. The other languages stress "conceptual control" to the program designer. Lisp and Prolog hide the pointers and lower-level features from the programmer, directing concentration on the higher-level objects and constructs themselves. C allows the clever programmer to do practically anything in the most efficient way as the programmer sees fit (but gives enough rope to hang inexperienced programmers). I have seen some pretty fast Lisp systems, whose time-lag behind C systems is not that noticeable. I have not seen too many fast Prolog systems. Joe Giampapa giampapa@brandeis.bitnet ========================================================================= Date: 4 March 1988, 14:45:34 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prolog (56 lines) --------------------------------------------------------- From: Leslie Burkholder (1) Are there problems you can't solve in C but only in other programming languages, eg Prolog or Lisp? Answer: No. C, Prolog, and Lisp (and other standard computer programming languages, eg, Basic) are of equivalent computational capacity. What you can do in one you can always find a way to do in another. Indeed, there are versions of Lisp and Prolog written in C. This is sufficient to show that whatever you can do in either Lisp or Prolog you can do in C. The relevant question is ease of accomplishing the task.
Take one of the examples provided by Mr Giampapa, writing a CFPSG and a top-down parser. That's available in most Prologs (Turbo Prolog does not have it). But you can build it in C with more work. (2) Prolog and predicate logic. People should know that the Prolog language hasn't the same expressive power as any language for first-order predicate logic. There are some things you can say in a predicate logic language for which there is no translation in Prolog. The translator in Ch 10 of Clocksin and Mellish, Programming in Prolog, translates from the predicate logic language into a clausal form language. But there are some legal sentences in a clausal form language not translatable into the more restrictive Prolog language. People should also know that the inference engine in Prolog is incomplete. For example, a complete inference engine for things sayable in the Prolog language should be able to infer b from

   a if b.
   b if a.
   a.

but the inference engine in Prolog will not return a "yes, it can be inferred". It will go into a loop. None of these things will make those who find logic beautiful very happy. (3) CFPSG's and DCG's. DCG's are available in most Prologs. What is also available is a top-down parsing mechanism to make use of DCG rewrite rules. This is available because the DCGs are just notational variants of regular Prolog code and the parsing mechanism is just Prolog's inference engine put to use on this code. DCG's are more powerful than CFG's. CFG's are composed of rewrite rules of the form X --> Y where X is a Prolog atom and Y one or more Prolog atoms (that is, there is a restriction on what X and Y can be). DCG's expand what X and Y can be in two ways: they can have arguments and Y can include executable Prolog goals. There are examples of these extensions in Clocksin and Mellish, Programming in Prolog, secs 9.4 and 9.5 respectively.
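Burkholder's looping example can be reproduced in a hypothetical Python sketch of depth-first backward chaining (an illustration of the behaviour, not a real Prolog engine; the depth bound is added so the sketch terminates, roughly in the spirit of iterative deepening):

```python
# Clauses for Burkholder's example:  a if b.  b if a.  a.
# Each clause is (head, list-of-body-goals); a fact has an empty body.
clauses = [("a", ["b"]),
           ("b", ["a"]),
           ("a", [])]

def prove(goal, depth):
    # Depth-first backward chaining, trying clauses in textual order.
    # With no depth bound, proving "b" would recurse forever
    # (b -> a -> b -> ...) before ever reaching the fact "a." --
    # which is just where Prolog's own search loops.
    if depth == 0:
        return False
    for head, body in clauses:
        if head == goal and all(prove(g, depth - 1) for g in body):
            return True
    return False

print(prove("b", 5))   # True -- the bounded search does reach the fact "a."
```

A complete inference procedure would answer "yes" here; Prolog's unbounded depth-first search does not, which is the incompleteness being described.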
========================================================================= Date: 4 March 1988, 14:47:49 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Computable solutions (57 lines) --------------------------------------------------------- From: Michael Sperberg-McQueen Hans Joergen Marker asks whether there are problems that simply cannot be solved in certain languages (e.g. in C), but which require other languages. Computer scientists on the list may wish to correct me if I am wrong, but I am fairly sure the answer is no. If your language allows you to say "Subtract X from Y and if the result is negative, branch to location Z" (or the logical equivalent), and the problem you want to solve can be solved in some other computer language, then you can solve it in your language. This follows, does it not, from Turing's discussion of computable numbers. Of course, no one is claiming that working with a language this primitive will be any fun, or that the program will be readable. There is an analogous theorem in sentential logic, which demonstrates that the operators of sentential logic ('and,' 'or,' 'not,' 'if,' 'if and only if,' and so on) are all superfluous: every sentence of sentential logic can be expressed using only one operator, the 'Sheffer stroke' (named for its inventor), which means 'not both.' If we write the Sheffer stroke with '|', then 'A|B' means 'A and B are not both true,' and we can paraphrase the other operators thus:

   not A            A|A
   A or B           (A|A)|(B|B)
   if A then B      A|(B|B)
   A and B          (A|B)|(A|B)

But although an interesting result (and perhaps profoundly significant in symbolic logic), Sheffer's notation is not nearly as convenient for practical logic as is the conventional notation. And so it is not used. The difference between Prolog and Lisp on the one hand and languages like C or assembler on the other is similarly one of notation, not power.
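The Sheffer-stroke paraphrases above are easy to machine-check; a small Python fragment (added purely as an illustration) runs all four through a truth table:

```python
# Verify the four Sheffer-stroke paraphrases by exhaustive truth table.
from itertools import product

def nand(a, b):
    # 'A|B' in Sheffer's notation: "A and B are not both true".
    return not (a and b)

for a, b in product([False, True], repeat=2):
    assert (not a)      == nand(a, a)                    # not A
    assert (a or b)     == nand(nand(a, a), nand(b, b))  # A or B
    assert (not a or b) == nand(a, nand(b, b))           # if A then B
    assert (a and b)    == nand(nand(a, b), nand(a, b))  # A and B

print("all four paraphrases hold")
```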
It is easier, many find, to think in Lisp or Prolog terms and let the scut work of translation into machine terms be handled by the compiler or interpreter. The logical structure of the program is easier to display -- and easier to implement because you don't have to write all your own procedures for handling unusual data objects. To be sure, it's possible to display the logical structure of a solution in Pascal or C or Assembler, too -- but it's likely to be harder to change it, since you will have to change your underlying procedures. For this reason some AI shops develop programs in Prolog, and then translate the finished product into C for the production version. The simple rule: use an 'AI language' to optimize programmer productivity; use a lower-level language to optimize machine time. ========================================================================= Date: 4 March 1988, 14:49:01 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Snobol as an automobile (20 lines) --------------------------------------------------------- From: amsler@flash.bellcore.com (Robert Amsler) How about.....

SNOBOL4 - A Winnebago camper. Needs lots of space, very comfortable inside when exploring the countryside; but neither built for speed nor tight parking.
ICON - A modern version of the Winnebago, shipped as a kit. Claims to get good gas mileage, but older Winnebago owners seem unconvinced.

========================================================================= Date: 4 March 1988, 14:51:53 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: French --------------------------------------------------------- From: Jacques Julien OVERTURE (alla Water music, but a bit slower and somewhat pomposo) I have been watching the river flow for a while now and it is very interesting.
I would describe the show as a colourful pageant of boats, some of them quite impressive, fast and powerful, and a very small population of surfers. My residence is too far away from the Maritimes and all that I can think of to join the stream is a bottle, a good and cosy one, let's say Chianti Ruffino. It has been on my desk for years. So, I take away the candle I used to decipher my manuscripts at night and I kick the vessel into the channel. FIRST MOVEMENT (French aria, double dotted) I operate in a rare, monstrous, extravagant area called French. And it is not even the good Burgundy of blue, white and red French, but its sparkling and lighter version: French-Canadian! I have a feeling of always looking at the dark face of the moon. In fact, and to keep on with the same staging, French seems to be as repulsive to the Computing Hegemony as garlic is/was? to Dracula. It reacts in the same way: horror or evasion, but it can never get to swallow the *!!!* bulb.... For example, what do we do with accents on mainframes? SECOND MOVEMENT: The merry widow at her simultaneous windows I would like to list certain items on which I am sure the network can be very helpful. As stated in my *hagiography*, I am working in French-Canadian literature and popular culture (songs). The tools I am looking for are: 1. Database. One, or more. Relational, must be open to data stricto sensu and to added text like: annotations, commentaries, full transcript of lyrics. 2. Stylistic analysis device. I am thinking of Deredec and its sub-products, which I have not tried yet. In the long term, I would like to build an analysis that would integrate (not simply place side by side) lyrics AND music. When I read the report from the Conservatorio di Musica L. Cherubini, I tried to catch the next plane to Italy, but planes heading for sunny countries never land on my iceberg. 3.
Access to large collections of texts in French from France and from North America, literature and references like dictionaries. 4. CAI. "What do you do for a living, besides watching the river flow?" Well, I must confess, I teach French. That is why I am interested in software dealing with interactive writing in ..... French! CONCLUSION coda/cauda, and no venenum I appreciate HUMANIST very much. It is a welcome network, much needed and improving with use. Do not send me too many bottles back: I do not want to block the channel. Julien@Sask ========================================================================= Date: 4 March 1988, 15:06:44 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion Comments: From: Charles Young, Philosophy, The Claremont Graduate School From: MCCARTY@UTOREPAS Subject: Displaying and printing Classical Greek (19 lines) --------------------------------------------------------- From: Charles Young (youngc@clargrad) Over the past year or so I have maintained a list of packages that claim to support the printing and display of classical Greek. It has finally occurred to me that other HUMANISTS might be interested in the list.... [This list has been posted to the file-server. It should be available by this coming Monday, under the name GREEK SOFTWARE. -- W.M.] ========================================================================= Date: 4 March 1988, 15:12:23 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Addendum (46 lines) --------------------------------------------------------- From: Joe Giampapa I would like to post an addendum to what I said previously about computer languages. Every once in a while, the question pops up in computer circles about "what IS the best language" to program in. Most times it does not attempt to be so cut-and-dried, yet the variants do not really stray too much from this.
In the search for the optimal answer, the question almost never gets answered the way it was originally posed. The tendency I have observed is that when programmers are faced with a project, and several languages to choose from for doing the project, their "decision algorithm" proceeds roughly as follows: First: What restrictions on languages are imposed by the problem? I.e., you had better not consider a number-crunching language for text-intensive operations. Second: Of the languages available (what they know, or what they are willing to learn), which are more aesthetically pleasing? Sometimes, aesthetics override the first concern. (Once, a friend who was nuts about BASIC wrote programs to simulate recursion. They were slow, and not particularly elegant from my point of view, but they worked.) Third: What are other people, with whom the programmer has regular contact, likely to use? Most people do not want to program alone. If they get stuck, or in a "rut", who can help them out of the bind? Also, programming in an environment where there are a lot of experts in a language helps keep the momentum of a project going. In short, then, I think the "ultimate answer" to the question is "whatever the programmer wants to use", ... or "42". Joe Giampapa giampapa@brandeis.bitnet ========================================================================= Date: 4 March 1988, 15:14:14 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: c.s. graduates are people too (44 lines) --------------------------------------------------------- From: Dan Bloom My apologies in advance for several things: 1) Unlike most of you, I am not erudite. 2) I may not be timely or on topic. 3) My degree was in computer science. Given the above: Yes, many people of the computer faith do tend to consider themselves above such considerations as ease of use, and prefer elegance over readability/usability.
Such will be the case for most who started computing in the long-gone dusty era of punched cards and 256K mainframes, where elegance, compactness of code, and speed of execution were paramount. Others of the lofty profession who take themselves less seriously, such as myself, consider the computer a rather advanced tool. As a tool, it must suit the user's purpose and not the designer's. However, as with any advanced tool, it requires a learning curve, both on the part of the user and of the creators. There also seems to be a mindset in the microcomputer industry of feeling obligated to recreate every error made in the development of mainframes. In conclusion, if you consider the above to have presented a thesis of any sort, I have put forth the proposition that not *ALL* people in the computer industry are inhumane, pretentious soothsayers; some of us are people too. (I have not really taken any offence: in general I agree with most of what has been said in reference to the above and quite enjoy the different view of the field.) And in retort: if this network is any indication, Humanists seem to have an obsession with what should be, not what is. Hope I haven't taken too much time....Dan (Improbable) Dan Bloom Senior Consultant Academic Computing Services York University ========================================================================= Date: 4 March 1988, 15:18:25 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: C vs. Prolog (43 lines) --------------------------------------------------------- From: Stephen J. DeRose In reply to Hans Joergen Marker's note: > So the question remains: "Are there problems out there that you can't > solve in C, but only in other languages?" No, there are no such problems. In fact, there are no problems which can be dealt with in only a particular subset of programming languages.
The more formal theorem for this is known in CS circles as "Church's Hypothesis" (among other names). All serious computer languages are "functionally complete", and so inter-translatable. Thus, the more relevant questions are: 1) How *easy* is it to learn/use language X? 2) How *fast* can I program problem Y in language X? 3) How *efficient* will my code in language X be? And here we have major differences. For example, right now I'm putting the finishing touch on a program to handle an annotated natural-language dictionary of about 50,000 words. It takes about 3,000 lines of C, because of the need to provide detailed control of storage allocation and data structures. I think I could write the same functionality in about 10,000-15,000 lines of Assembler, or in 750-1,000 lines of Icon or Prolog. It is roughly true that a programmer can write the same number of (working) lines of program per day, regardless of language. So it makes sense to use the most compact language available for the particular problem at hand. Unfortunately, in this case I had to use C rather than Icon or Prolog, because the last 2 do not deal with memory as efficiently by themselves as I can by myself, and I can't afford 8 Meg of RAM for my Mac. Steven J. DeRose ========================================================================= Date: 4 March 1988, 19:06:40 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Family tree software (50 lines) --------------------------------------------------------- From: dow@husc6.BITNET (Dominik Wujastyk) I have used both the Mormon product, Personal Ancestral File (PAF), and the Pine Cone one (is it FTetc for Family Tree Etc?). Both for DOS. I am doing this note from memory, since I am at present in the USA, and all my notes and manuals are back at home in the UK. FTetc (if that's it) was slicker and somewhat faster, since it was compiled. The PAF (Pers Anc. 
File) was BASIC code, and needed fiddling with to set the correct defaults for use on a hard disk. This was clearly described in the manual, but seemed unnecessary in these days of setup menus. PAF has a companion program that can store biographical data about individuals; FTetc includes this in the main prog, but if I remember rightly, PAF allows larger files for this. FTetc had one huge advantage: it allowed you to print a BIG chart piecemeal on several sheets of ordinary computer paper, for gluing together. PAF can handle 135-column paper (I think) but that's it: one bit of the tree at a time. The manual of PAF is written in the style of an obsessive. Everything is hyper-neat and repeated several times. I found that dealing with PAF (program and documentation) worried me at some deep level: was the author still sane? Nevertheless, he wrote me a nice letter when I sent a query, and probably fits into his community very well (no offence intended in any quarters). By comparison, FTetc is just another good shareware prog. The capabilities are very similar. I had a lot of data in PAF before I heard of FTetc, and I am reluctant to change over. But if I were starting today I think I'd go for FTetc. Of course a lot depends on where the programs go, what upgrades are made, maintenance etc. Imponderables. Dominik.
bitnet: user DOW on the bitnet node HARVUNXW arpanet: dow@wjh12.harvard.edu csnet: dow@wjh12.harvard.edu uucp: ...!ihnp4!wjh12!dow ========================================================================= Date: 4 March 1988, 19:16:12 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Summary of conference notice on the file-server (28 lines) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * International Conference on Symbolic and Logical Computing Dakota State College Madison, South Dakota April 21-22, 1988 The third International Conference on Symbolic and Logical Computing (ICEBOL3) will present papers and sessions on many aspects of non-numeric computing: artificial intelligence, analysis and printing of texts, machine translation, natural language processing, and the use of dangerously powerful computer languages such as SNOBOL4, SPITBOL, Icon, Prolog, and LISP. There will be a series of concurrent sessions (some for experienced computer users and others for interested novices). ========================================================================= Date: 4 March 1988, 19:19:08 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Computer language intertranslatability (49 lines) --------------------------------------------------------- From: Sheldon Richmond Are computer languages intertranslatable? There is a discussion in the Talmud that God understands every language, but Angels only understand the Holy Language of the Torah. So, if you want to speak to the Angels, you have to speak in the Holy Language, but God doesn't care what language you speak. Computers are somewhere between Angels and God. Mathematically, computers are God; practically, they are neither. Turing argued that every computer is formally identical: every Turing machine is a Universal Turing Machine. So, ideally, every computing language is formally identical.
The operative terms are 'ideally' and 'formally'. In practice, not every computing language permits recursion--i.e., procedures which call themselves, or functions which define operations in terms of themselves. This matters not only for convenience and for performing certain algorithms, but also for AI simulation. So, then, in the real world, computer languages are not completely intertranslatable. The upshot is that, depending on what one wants to do with computers, one will have to use different languages, and different hardware/software systems. The technology of computers has not done away with the Tower of Babel, or with the requirement for multilingualism. Though every few years new Holy Languages for our computer/angels--PASCAL, C, PROLOG--are produced. In reality, computers are neither angels nor God. Different languages are required for different purposes; no one language can do all, and some languages are more suited to some tasks than others. The proper attitude toward computer systems and languages is the one that states 'when in Rome do as the Romans do'. You can't expect that one system of manners or etiquette will please all people regardless of cultural background. So, rather than search for the Holy Language, or just use one language regardless of task, choose the language that is most pleasing to the crowd you will be hanging around with for the task at hand. Use the language used by the crowd working on the project one is interested in at the moment--in that way you will be included in the chit-chat and the sharing of problems and solutions.
========================================================================= Date: 4 March 1988, 20:06:10 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Server security (82 lines) --------------------------------------------------------- From: Mark Olsen Security for Kermit3 SERVER operation Kermit3.EXE (2.30) has a number of new features, many of which enhance REMOTE operation. It is now relatively easy to SEND and GET files, etc. (between, say, home and school) by setting one machine in SERVER mode before leaving for the other location. The problem with Kermit3 SERVER operation is security: somebody chancing on your remote machine and erasing your files, putting an infected COMMAND.COM on it, etc. The following is a solution to this problem, offering you a fairly high degree of safety. This technique makes use of Kermit3's ENABLE and DISABLE commands. It requires the following: a file much like SECRET, below, which is run on the REMOTE (HOST) machine before it is put in SERVER mode; and a file much like PASSWORD (but using your own secret filename!), which must be available on the machine you are using as a terminal.

Host file, SECRET:

  DEFINE SRV OUTPUT ATS0=1\15,SET PARITY NONE,SET BAUD 1200,DO SR2
  DEFINE SR2 CWD \XX\XX,DELETE PASSWORD,DISABLE ALL,DO SR3
  DEFINE SR3 ENABLE FIN,SERVER,TAKE PASSWORD,SERVER,DO SR2

Terminal file, PASSWORD:

  ENABLE ALL

To use this system: 1) Set up the values you want on the machine which is to run in SERVER mode, then add the commands: TAKE SECRET DO SRV 2) Later, from the second machine, call the number of the SERVER, and you will be connected, but with all services DISABLED; enter the commands: SEND PASSWORD FIN This will cause the host computer to TAKE PASSWORD and reenter SERVER mode. Since PASSWORD contains the command ENABLE ALL, you are now in business.
When you are through, you must be sure to DISABLE all services; to do this, type: FIN This will cause the host to rerun SR2, disabling all services and erasing PASSWORD. Notes: Be sure to arrange the host machine so that Kermit3 is looking at an empty subdirectory. Do not use the word PASSWORD! Change every occurrence to a word known only to you. If you find strange files in your \XX\XX subdirectory, it is probably best to erase them, to fend off infection. No guarantees; good luck! *****END***** ========================================================================= Date: 4 March 1988, 21:39:57 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Unix "user-friendly" shell (20 lines) --------------------------------------------------------- From: Vicky A. Walsh Rumor has it that at the Unix conference UNIFORUM, held this last February in Dallas, TX, someone discussed a user-friendly shell for Unix that works with the Macintosh. Did anyone attend this meeting and/or can they provide any information about the shell? American Management Systems is the company name associated with this project. I'd be grateful for any information and/or experience on this. Thanks. Vicky Walsh ========================================================================= Date: 4 March 1988, 21:43:11 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Computer language intertranslatability (18 lines) --------------------------------------------------------- From: Bob Krovetz I've heard on more than one occasion that you cannot write a Lisp interpreter in Prolog. Is this really true? If so, why?
-bob Krovetz@umass.bitnet or Krovetz@umass.edu (internet) ========================================================================= Date: 4 March 1988, 23:29:08 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Choices of programming languages (67 lines) --------------------------------------------------------- From: amsler@flash.bellcore.com (Robert Amsler) Something strange seems to be creeping into the discussion: the quest for the ``best'' programming language without regard to what computer one is running on. To say a programming language is INHERENTLY slow is somewhat strange, like saying that electric-powered vehicles are inherently slower than gasoline-powered vehicles (electric trains do much better than most cars). First, some basic CS. There are computers. Computers have very elemental things called `instruction sets', which are primitive `languages', often referred to as assembly code or machine language, in which one can talk to the computer. These languages are the fastest things the computers can execute. Videogames, for instance, are often written at this level of language to absolutely, positively optimize what can be done as rapidly as possible. However, such languages are awful most of the time. They keep saying things like `load and carry contents of register XXX to Register YYY', which bears as much relationship to making a concordance of a text as the wiring diagram of your TV set has to how the on-off switch works. So... people write `higher level' programming languages which will run on the same computer. But how can they do that? Simply by telling the computer what to do with the statements in the higher level language to translate them into the original language the computer understood. This introduces some inefficiency, for a couple of reasons. One is that the user of the higher level language doesn't necessarily know whether what he is asking for is efficient for the computer he is asking it of.
Most people want computers to run their favorite languages. This may or may not be easy to do on some computers---``Mr. Spock, can we program the tricorder to become a TV receiver?'' ``Yes, captain, but it will take a little time and won't work very well for long.'' ``I don't care if it is efficient''... That sort of thing. So... we get BASICs and FORTRANs and PASCALs and LISPs and PROLOGs and lots of languages for lots of machines. Each is an IMPLEMENTATION, written by someone with a varying degree of attention to how efficient it will be (and hence someone could write a LISP for a certain machine which runs faster than someone else's PASCAL, or vice versa). The speed of a language is thus first and foremost a matter of what computer it is running on. Then it is a matter of how efficiently it has been implemented for that computer. Now that computers are things the size of postage stamps (all that other stuff they come surrounded with is just for the sake of your bulky human fingers and poor input/output capabilities), the possibility of a computer chip that can run your favorite language is very real. (For instance, TI has just announced a chip to run LISP for the Macintosh II.) Every time they change the chip, they change the possible speed of the computer; and saying that any language is slower is very problematic, since you have to know on what computer it has been implemented and at what level of design (i.e. as software or hardware).
========================================================================= Date: 5 March 1988, 10:13:52 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Notice of ALLC/AIBI conference posted to file-server (25 lines) Association for Literary and Linguistic Computing 5 - 9 June 1988 XVth International ALLC Conference The Fifteenth International ALLC Conference is being held in Jerusalem, Israel, and will be immediately followed by the Second International Conference of the Association Internationale Bible et Informatique, from June 9 until June 13, 1988. The major topics to be covered by the Conference will be: textual databases and corpora; mechanised morphology, lexicography, and dictionaries; statistical linguistics, stylistic analysis and authorship studies; encoding and formatting techniques; critical editions, collations and variants; computational linguistics; and data entry, typesetting and text processing. ========================================================================= Date: 5 March 1988, 22:59:03 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Summary of posting to the fileserver (19 lines) UNIVERSITY OF EXETER FROM VALOIS TO BOURBON December 14-16 1988. To coincide with the quatercentenary of the Blois assassination of the Duke and Cardinal de Guise, which in turn prompted the assassination of Henri de Valois, a residential Conference/Colloquium has been arranged for December 1988 at the University of Exeter. ========================================================================= Date: 5 March 1988, 23:03:16 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: French (26 lines) --------------------------------------------------------- From: Jean-Claude Guedon En reponse a Jacques Julien: La question des diacritiques sur les ordinateurs est effectivement troublante. Il n'aurait pas ete tres difficile en effet de prevoir un nombre suffisant de signes diacritiques pour, au moins, prevoir l'utilisation des ordinateurs par des francophones, hispanophones et autres germanophones ou italophones, etc. [In reply to Jacques Julien: the question of diacritics on computers is indeed troubling. It would not, in fact, have been very difficult to provide a sufficient number of diacritical signs to allow, at the very least, for the use of computers by speakers of French, Spanish, German, Italian, and so on.] And this is why I write the beginning of this message in French, just to remind all who might forget it that although English is a useful language as a kind of lingua franca, it should limit its role to this functional level and not impose itself as if it were THE language of the world, be it computerized or otherwise. This is not meant as an aggressive statement, but simply as a reminder of the marvellous variety that characterizes humanity. Cheers Jean-Claude Guedon ========================================================================= Date: 5 March 1988, 23:38:20 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Languages on HUMANIST (30 lines) As a member of HUMANIST I'm grateful for Jean-Claude Guedon's reminder of monolingual perils. I, too, rejoice in variety and difference. Doctrinally imposed uniformity is dangerously prevalent these days, and (may I hasten to add) it is promulgated by both sexes, by many if not all nationalities, and in many if not all languages. As editor of HUMANIST (for what it's worth) I welcome notes in all languages, whether or not I can read them. Many HUMANISTs read if not write French, not a few must know some German and Italian, and so forth. So, let me suggest that if anyone is moved to write in a language other than English, let him or her do so, provided, let us say, that a translation into English is appended. After all, a lingua franca (or lingua anglica) is a fine thing, nicht wahr? Would it be reasonable to establish some kind of convention for diacritics, say that the appropriate symbol follow the letter it belongs with? Comments or suggestions?
Willard McCarty mccarty@utorepas ========================================================================= Date: 6 March 1988, 16:44:56 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Two pleas and a request from your editor (41 lines) Plea the first. When you want to fetch any file from HUMANIST's file-server, you must communicate by whatever means with LISTSERV, not with HUMANIST. Thus, in interactive mode, for example, you would TELL LISTSERV at UTORONTO GET HUMANIST FILELIST, *not* TELL HUMANIST.... &c.; and if embedding your request in a message, this message would be sent to LISTSERV, not to HUMANIST. When you send requests to HUMANIST, they just come to me, which means that either I have to request the file and send it to you or that I have to write to you and say something helpful. Right now, for example, I have 101 messages in my reader, and it's Sunday afternoon.... Plea the second. Several of you, intending a message for HUMANIST, send it to me directly, knowing that I must deal with it anyhow. True enough, but this procedure can cause two problems: (1) occasionally I cannot tell if the message is meant for me only or for everyone; I usually decide it's meant for everyone, but this may not always be the case; and (2) at such time as I decide no longer to interpose myself between incoming and outgoing mail, HUMANIST messages sent to me will get delayed. Actually, I will be away from about mid May to mid July, and during this time we may decide to return to the automatic mode rather than to ask someone to assume my daily duties with HUMANIST. So, *please* send HUMANIST mail only to HUMANIST. Request. Would all of you who redistribute HUMANIST mail to others send me a brief description of how this is done and, if you have it, a list of those to whom the mail is sent, or a count of the number of people? Thank you all for making HUMANIST such an interesting creature. 
Willard McCarty mccarty@utorepas ========================================================================= Date: 6 March 1988, 17:09:38 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Diacritically speaking... (23 lines) --------------------------------------------------------- From: Norman Zacour One way to prevent linguistic "imposition" might be to provide a glossary of technical computer terms in French, German and Italian. Reading manuals in one's own language, whatever that might be, is difficult enough: what sort of garbage words (interface), compressed descriptions (cut-and-paste), diverse borrowings (macro, default, root, library, directory), slightly out-of-focus terms (routine), to say nothing of out-and-out neologisms, are likely to cause us trouble in a language other than our own? At the moment, for quite selfish reasons, I could use a good short glossary of English-French and English-German. If HUMANISTs can't contribute to its making, who can? Shall we dance? Norman Zacour (Zacour@Utorepas) ========================================================================= Date: 7 March 1988, 10:48:39 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: English and French (32 lines) --------------------------------------------------------- From: Richard Goerwitz I think most of us realize that the use of English as a lingua franca for things like international air-traffic control, radio, etc. is a rather arbitrary choice, based on practical political and economic considerations. We had Latin at one time, then French, now English. What next, Japanese? With computers, the phenomenon runs a bit deeper than this. English doesn't use a lot of diacritics, and can be represented comfortably using a 7-bit coding scheme. Note also that entire languages like Prolog are tuned to an English-like syntactic scheme.
Prolog does not work well with languages that have few word-order constraints or lots of discontinuous morphemes. I suppose that with a little fussing, we could all post to the HUMANIST in French or German, or some other W. European (left-to-right, alphabetic, syntactically rigid) language. That would be a lot of fun. -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ========================================================================= Date: 7 March 1988, 10:50:45 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Help for Penn OT users (44 lines) --------------------------------------------------------- From: Richard Goerwitz I've been playing with the OT and NT texts I got from Bob Kraft now for a year and a half, and they have served me quite well. I've been able to determine with great precision many things about the OT that would have taken months to do by hand. Many thanks! One problem keeps coming up with these texts, however: they are coded using TLG-style (i.e. "Betacode") counters. So, instead of marking each chapter and verse reference explicitly (e.g. chapter 10, verse 1 [in Betacode language ~~x10y1]), they merely tell us to increment (e.g. "increase chapter counter by one, verse counter by one" [in Betacode ~~xy]). This means that you can't take verses here and there out of context. I had a friend who uses LBASE (a nice language-database package allowing grammatical searches) complain to me that, on account of this coding problem, he could not slice out separate documents for LBASE to analyze (he wanted to work on the supposed "priestly" document only). So I wrote him a program in Icon that does this. Basically, the program allows one to a) collect a corpus by excising verses and chapters from a larger work, and b) mark them explicitly as to chapter and verse (while still remaining within the definition of the TLG Betacode level-marking scheme).
Now the program is just sitting around, and I was wondering if any HUMANISTs wanted it. NB: It's written in Icon, so it's not going to work on any system that doesn't have Icon installed. I haven't tested it under v6, though it should work fine. -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ========================================================================= Date: 7 March 1988, 10:54:04 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: In search of a search engine (106 lines) --------------------------------------------------------- From: Robin C. Cover I'm looking for a search engine which probably does not exist, but I would like advice from those more knowledgeable about text retrieval systems. It is a text retrieval system optimized for literary-critical and linguistic study. The major requirements for the search engine are as follows: (1) Literary texts should be "understood" by the system in terms of the individual document structure, as indicated by markup elements. The user should be able to specify within a search argument that proximity values, positional operators, comparative operators and logical operators govern the search argument and the textual units to be searched IN ACCORDANCE WITH THE HIERARCHICAL STRUCTURE OF THE DOCUMENT. That is, if a document is composed of books, chapters, pericopes, verses and words, then expressions within the search argument must be able to refer to these particular textual units. If another document (or the *same* document, viewed under a different hierarchical structure) contains chapters, paragraphs, sub-paragraphs, (strophes), sentences and words, then expressions in the search argument should be framed in terms of these textual units.
To borrow a definition of "text" from the Brown-Brandeis-Harvard CHUG group: the text retrieval system must be capable of viewing each of its documents or texts as an "ordered hierarchy of content objects (OHCO)." (2) The database structure must be capable of supporting annotations (or assigned attributes) at the word level, and ideally, at any higher textual level appropriate to the given document. Most record-based retrieval systems cannot accommodate the word-level annotations that textual scholars or linguists would like to assign to "words." More commonly, if such databases can be modified to accommodate annotations at the word level, the record-field structure is thereby contorted in ways that introduce new constraints on searching (inability to span record boundaries, for example). Preferably, even the definition of "word" ought not to be hard-coded into the system. Hebrew, for instance, contains "words" (graphic units bounded by spaces) which contain three or four distinct lemmas. Minimally, the database must support annotations at the word level (e.g., to account for the assignment of lemma, gloss, morphological parse, syntactic function, etc) and these annotations must be accessible to the search engine/argument. Though not absolutely required, it is desirable that attributes could be assigned to textual units above "word," and such attributes should be open to specification in the search argument. Linguists studying discourse, for example, might wish to assign attributes/annotations at the sentence or paragraph level. (3) The search engine should support the full range of logical operators (Boolean AND, OR, NOT, XOR), user-definable proximity values (within the SAME, or within "n" textual units), user-definable positional operators (precedence relations governing expressions or terms within the search argument) and comparative operators (for numerical values). 
The search argument should permit nesting of expressions by parentheses within the larger Boolean search argument. Full regular-expression pattern matching (grep) should be supported, as well as macro (library/thesaurus) facilities for designating textual corpora to be searched, discontinuous ranges or text-spans within documents, synonym groups, etc. Other standard features of powerful text retrieval systems are assumed (set operations on indices; session histories; statistical packages; etc.). Most commercial search engines I have evaluated support a subset of the features in (3), but do very poorly in support of (1) and (2). The text retrieval systems which claim to be "full text" systems actually have fairly crude definitions of "text", and attempt to press textual data into rigid record-field formats that do not recognize hierarchical document structures, or are not sufficiently flexible to account for a wide range of document types. Three commercial products which attempt to support (1) are WORDCRUNCHER, Fulcrum Technology's FUL-TEXT and BRS-SEARCH. I know of no systems which intrinsically support requirement (2), though LBASE perhaps deserves a closer look, and a few other OEM products promise this kind of flexibility. It may be possible to press FUL-TEXT or BRS-SEARCH into service, since both have some facility for language definition. Another promising product is the PAT program being developed by the University of Toronto in connection with the NOED (New Oxford English Dictionary). But I may have overlooked other commercial or academic products which are better suited for textual study, or which could be enhanced/modified in some fashion other than a bubble-gum hack. It is not necessary that a candidate possess all of the above features, but that the basic design be compatible with extending the system to support these functional specs, and that the developers be open to program enhancements.
Ideally, such a system would work with CD-ROM, though this is not an absolute requirement. I would like good leads of any kind, but particularly products that could be leased/licensed under an OEM agreement...for microcomputers, I should add. Thanks in advance to anyone who can suggest names of commercial packages or academic software under development which meet the major requirements outlined above, or which could be *gently* bent to do so. I will be glad to post a summary of responses if others are interested in this question. Professor Robin C. Cover ZRCC1001@SMUVM1 3909 Swiss Avenue Dallas, TX 75204 (214) 296-1783 ========================================================================= Date: 7 March 1988, 10:58:17 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: The languages of HUMANIST (25 lines) --------------------------------------------------------- From: Robin C. Cover In response to Willard's suggestion that contributions in French, German, Italian (etc) be encouraged on HUMANIST, I concur wholeheartedly. It's not clear why those who feel more comfortable writing in non-English languages ought to be required to supply an English translation; isn't that giving with one hand and taking back with the other? If there is pride among HUMANISTS that we *are* humanists, then let's reflect upon that very long tradition in humanities education which requires that we be able to read great literature in any of the world's languages. That should prepare us to deal with postings on HUMANIST. If we learn computer languages but fail to treasure human languages, have we broken with our past? Professor Robin C. 
Cover ZRCC1001@SMUVM1.bitnet ========================================================================= Date: 7 March 1988, 10:59:28 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: A writing convention for diacritics (32 lines) --------------------------------------------------------- From: amsler@flash.bellcore.com (Robert Amsler) I came up with this one recently while encoding phonetic symbols. Basically, braces { } are placed around any character which is to have diacritics associated with it, and are taken to mean ``combine together the symbols inside the braces''; so {ae} is a ligature, {,c} a c-cedilla, {o:} an o-umlaut, etc. You may note I said {o:} for an o-umlaut, rather than {:o}. That is because the position of the punctuation dictates whether it goes ABOVE or BELOW the character. Punctuation appearing BEFORE the letters goes BELOW; punctuation appearing AFTER the letters goes ABOVE. This allows one also to represent symbols such as {:a:}, which is an `a' with diaeresis above and below. Note that I am not necessarily claiming this is the best final form for special symbols, but it is an easily keyboarded and read system which I find useful for rapid keying of data. Bob Amsler Bellcore ========================================================================= Date: 7 March 1988, 11:01:20 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: What can't I do with C? (92 lines) --------------------------------------------------------- From: Jeffrey William Gillette Q: Are there any tasks that cannot be handled with 'C'? A: Yes, lots! At least this is the case on my IBM PC compatible computer. Let me begin to defend my (admittedly provocative) assertion by claiming a distinction between the C programming language and all the extra "goodies" manufacturers throw into their C compiler packages.
According to "The C Programming Language" by Brian Kernighan and Dennis Ritchie (sometimes referred to as the "C Bible" because its authors are also the creators of C), C is a general-purpose programming language. It has been closely associated with the UNIX system. ... The language, however, is not tied to any one operating system or machine. ... C is a relatively "low level" language. ... C provides no operations to deal directly with composite objects... C itself provides no input-output facilities: there are no READ or WRITE statements, and no wired-in file access methods. What I earlier referred to as extra "goodies", more generally known as library functions, are the system-specific and machine-specific functions to which Kernighan and Ritchie claimed the C language is not tied. On most computers these library functions are written in assembly language. In fact, since K & R created C as a language without input-output facilities, many of these standard library functions could not possibly be written in the C language! Often we think of input as that which we type into a computer, and output as that which the computer displays on its screen or prints to the printer. In technical terms this is not quite correct. I/O, properly speaking, refers to everything that is not a part of core memory (ROM/RAM). On IBM compatible machines (i.e., computers that use Intel microprocessors), when a key is typed the corresponding key code appears in a special door (or "port"). It does not enter core memory until the processor explicitly takes it from the port and places it into some memory location. It is precisely this facility of reading a port (and its converse, writing a code to a port that will send it to the printer) that the C language lacks.
Because C cannot read from or write to ports, on my IBM compatible machine I cannot write a C program that will get a character from the keyboard, read a byte from my disk drive, print a line of text, dial a modem, send an instruction to the math co-processor, or perform a myriad of other tasks I want to perform many times a day. By now some provoked C enthusiast will complain that my definition of 'C' is too restrictive. The question should instead be, Q: Are there any tasks that cannot be handled within the C environment distributed by X company? A: No, but C is not unique in this respect! Pilot is a rather restrictive programming language that is optimized for creating Computer Assisted Learning drills. Given enough time, however, I could program a definite clause grammar parser in Pilot (though I've no idea why I should want to). In fact, since Pilot has the same type of assembly language escape hatch used in C, I could probably reproduce MS-DOS in Pilot! Similarly, dBase III+ is not generally thought of as a word processor, but its creators are fond of claiming that the dBase programming language can be used to write a word processor. Perhaps we should all put our C compilers on the shelf and take up dBase. Or let us cast aside UNIX and MS-DOS in favor of Pilot! After all, Q: What can I do with C that I cannot also do in dBase or Pilot? A: Nothing! Jeffrey William Gillette dybbuk at tuccvm.bitnet ========================================================================= Date: 7 March 1988, 11:05:25 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Anglophone imperialism? (45 lines) --------------------------------------------------------- From: Sterling Bjorndahl It is good that Willard will accept contributions in languages other than English. However, I must disagree with his request that a translation always be appended.
I am very embarrassed by the typical native English speaker's lack of facility in foreign languages, and I fear that Willard's request will only serve to condone that attitude among us - we who are, after all, HUMANists. I think that a policy of self-editing is sufficient here. If people want to send messages in Portuguese or Japanese, they will know that they will be communicating with only a very select audience. The world knows what we English speakers are like. I doubt very much that our mailers will be filled with messages we can't understand. On a few occasions when I had time to kill, I signed up to BITNET's RELAY network - an interactive computer forum which functions somewhat like amateur radio in terms of human interaction. The main population which uses the RELAY facility consists of undergraduate computer science students involved in casual conversations. On several occasions, the link between North America and Europe went down. During that time, several people in Europe would begin a conversation among themselves in German or Dutch. When the link came back up, parts of those conversations were transmitted to the North American side of the network. More than one person on this side castigated the Europeans for using their own language on the network. Granted, they thought that these were simply other North American students showing off their foreign language ability. But the outrage in their "voices" that anyone would use anything other than English on BITNET (they had forgotten about EARN) made me both angry at their chauvinism and sad for the North American educational system. That many of these people will be granted a university degree without ever having had to learn another natural language is, well, inhuman.
Sterling Bjorndahl Institute for Antiquity and Christianity ========================================================================= Date: 7 March 1988, 11:08:34 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: French messages --------------------------------------------------------- From: David Owen Having advised, from a technical rather than a linguistic perspective, various instructors about the use of computer conferencing for conversation practice in French and Spanish instruction, I think I can say that the use of special symbols to indicate accents, etc., probably does more harm than good. It makes messages harder to write (and thus less likely to get written), and troublesome to decipher. Such special marks are extremely useful, nay essential, when the text is to be printed and the marks are re-interpreted by the formatter, but for purposes such as HUMANIST, I vote that we ignore them. David Owen OWEN@ARIZRVAX OWEN@RVAX.CCIT.ARIZONA.EDU ========================================================================= Date: 7 March 1988, 11:10:08 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: C vs. Prolog (32 lines) --------------------------------------------------------- From: Hans Joergen Marker Answer to Steven J. DeRose I have to accept your point about the number of program lines needed to accomplish a specific task in a given language, still bearing in mind that I remain practically ignorant of the workings of Prolog and Lisp. On the other hand, your statement that "It is roughly true that a programmer can write the same number of (working) lines of program per day, regardless of language" would naturally be dependent on the programmer. When I started this argument I was actually trying to find out whether it would be worth my while, from the point of view of productivity, to take a closer look at Prolog or Lisp. I am still not convinced. I am still very happy with C.
(Perhaps it is my very well hidden macho instincts, though in Europe we would rather symbolise that with a Porsche; Firebirds aren't that common over here.) I think that your note confirms my point of view: C can get the job done, and because of the control you have over the machine when using C, it will get the job done even where using other languages would be impractical. Hans J%rgen Marker. ========================================================================= Date: 7 March 1988, 11:14:27 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Languages on HUMANIST (31 lines) --------------------------------------------------------- From: Hans Joergen Marker fra Hans J%rgen Marker emne: Brug af andre sprog end engelsk p} HUMANIST Jeg kan selvf%lgelig ikke have noget imod at folk skriver p} HUMANIST i sprog jeg ikke forst}r. Hvem kan det? Men det giver dobbelt arbejde for afsenderen at skulle overs{tte sine tanker for at f} dem forst}et af de andre deltagere. Hvorfor l{rer i andre ikke dansk? from Hans Joergen Marker subject: Use of other languages than English on HUMANIST Naturally I cannot have anything against people writing on HUMANIST in languages that I don't understand. Who can? But it doubles the effort for the sender to be obliged to translate his thoughts into English to make them understandable to the other participants. Why don't the rest of you learn Danish? [Editor's note: In case you haven't guessed already, the rather strange looking words (e.g., "p}") have resulted from the computer's automatic translation of accented characters.]
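[The character substitutions the editor's note describes can be reversed mechanically. A sketch follows; the mapping below is inferred only from this posting (p} = "på", J%rgen = "Jørgen", l{rer = "lærer"), not from any general standard, and a real filter would need context to avoid clobbering ordinary braces and percent signs.]

```python
# Restore Danish accented letters from the 7-bit substitutions seen in
# the posting above. The mapping is inferred from this message alone
# and is applied naively to every occurrence of }, %, and {.
DANISH = str.maketrans({"}": "å", "%": "ø", "{": "æ"})

def demangle(text):
    return text.translate(DANISH)

print(demangle("Hvorfor l{rer i andre ikke dansk?"))
# Hvorfor lærer i andre ikke dansk?
```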
========================================================================= Date: 7 March 1988, 11:19:04 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Genealogical display (40 lines) --------------------------------------------------------- From: John Dawson I don't know of a package for genealogical display, but I have a suggestion which I have found reduces the problem of displaying my family tree considerably. Instead of constructing a tree with all the family members of the same generation in a horizontal line, try showing the tree turned through 90 degrees so that each generation appears in a *vertical* line. If you arrange it so that information is typed in narrow columns, and so that no one text line contains text relating to more than one person, the result is quite easy to edit and keep up-to-date. Obviously, the most recent generation can be either the left-most or the right-most column, and it is easy to add a complete new generation. A small example follows (J24 is the son of J48 and J49; J12 is the son of J24 and J25; etc.):

------------------
(J48              )
(???? HICKLING    )
(J's ggg-gf       )-|
------------------  |
                    |-(J24              )
------------------  | (???? HICKLING    )
(J49              )-| (J's gg-gf        )-|
------------------                        |
                                          |-(J12                )
------------------                        | (HENRY HICKLING     )
(J50              )-|                     | (Traveller? in 1915  )
------------------  |                     |
                    |-(J25              )-|
------------------  |
(                 )-|
------------------

========================================================================= Date: 7 March 1988, 12:35:51 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Writs (23 lines) --------------------------------------------------------- From: Eamon Kelly Does anyone know a simple guide to the names and types of writs issued by the English (or Irish) Chancery in the 13th and 14th centuries, in particular ones dealing with appointments and grants?
Hopefully yours, Elizabeth Dowse Dept of Medieval History Trinity College Dublin Ireland e-mail: EPKELLY@cs.tcd.ie or EPKELLY@tcdcs.uucp or EPKELLY@csvax1.tcd.he P.S. I have already tried all the various guides to the Public Records ========================================================================= Date: 7 March 1988, 13:00:12 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: langues autres que l'anglais (17 lines) --------------------------------------------------------- From: ANDREWO@UTOREPAS sujet: langues autres que l'anglais Je suis entierement d'accord avec Robin Cover: il n'y a strictement aucune raison pour laquelle les gens qui preferent s'exprimer dans une langue autre que l'anglais soient obliges de fournir une traduction de leur contribution. [Translation: I agree entirely with Robin Cover: there is strictly no reason why people who prefer to express themselves in a language other than English should be obliged to provide a translation of their contribution.] Andrew Oliver (andrewo at utorepas) ========================================================================= Date: 7 March 1988, 16:56:35 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: On chancery writs (31 lines) --------------------------------------------------------- From: Dr Abigail Ann Young [My mailer returned my attempts to send this reply direct to the enquirer, so this comes with due apologies to everyone not interested in the history of English law.] Re your query to HUMANIST: I think some writs are discussed in Pollock and Maitland's two-volume History of English Law. I've also found the legal writers of the 18th and early 19th centuries very helpful, because the forms of many writs and other legal instruments stayed quite constant from the mediaeval period until the reform of the judicature in the 1870s: I've used Littleton, Coke, and Blackstone for help in understanding 15th- and 16th-century actions, for instance. What writs in particular are you looking at? I hope this is helpful.
Abigail Ann Young Records of Early English Drama University of Toronto bitnet:young@utorepas ========================================================================= Date: 7 March 1988, 16:59:28 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Portable computers (55 lines) --------------------------------------------------------- From: John J Hughes HUMANISTS interested in the latest side-by-side reviews of IBM-PC-compatible laptop, "luggable," and portable computers should see Nora Georgas, "Planes, Trains, & Automobiles: 12 Portables for the Road," _PC Magazine_ 7:6 (March 29, 1988): 93-143. (No, despite the review's title, Steve Martin's latest movie does not figure in this review!) The Toshiba T1000 ($1,999) and the GRiDCase 1530 ($4,695) were the editors' picks. The GRiDCase wins these "competitions" time after time. According to the various reviews I've read over the last few years, the GRiDCase is probably the most rugged MS-DOS-compatible portable on the market. According to the review: "The machine has been `ruggedized' to withstand excruciating heat, cold, humidity, vibration, and G forces." Because of their ruggedness, I believe that the U.S. military is a major purchaser of these machines. (Parenthetically, GRiD Systems Corp. invented the MS-DOS laptop.) The battery-operated GRiD 1530 is a 12.5-MHz, 80386 machine. It weighs less than 13 pounds, measures 11.5-by-15-by-3 inches, has a 72-key keyboard, is EMS compatible, and comes standard with two 1.4-megabyte 3.5-inch floppy drives and 1 MB RAM--all housed in a svelte matte-black, magnesium-alloy case. The system will accept up to 512K of user-installable ROM (two 256K slots) in a pop-up panel at the top of the keyboard. GRiD Systems will burn ROMs for customers. 
Options: (1) hi-res gas plasma screen (640-by-400), (2) backlit supertwist LCD (600-by-200 ??), (3) internal hard drives from 10 to 40 megabytes, (4) external 1.4-megabyte 3.5-inch drive, (5) 3270 emulation cartridge ($1,295), (6) Ethernet cartridge ($695), (7) VGA monitor interface ($695), and (8) up to 8 MB RAM in 2 MB increments. Modem--??? The picture of the hi-res gas plasma screen in the review demonstrates the GRiDCase's astounding resolution, contrast, and clarity. Compared to the GRiDCase's gas plasma screen, the Compaq Portable 386's gas plasma screen looks "muddy." All of this is great, but who has $4,695 to nearly $7,000 for this machine?! I wonder if I could convince GRiD Systems to loan me one to field test for a year or so?. . . ========================================================================= Date: 7 March 1988, 17:13:20 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Translations (32 lines) --------------------------------------------------------- From: Dan Bloom 1) It seems obvious that if someone submits a request/statement in a particular language, the audience it is intended for is those who are conversant in that language and the topic at hand. 2) We are on an international network, and therefore a multilingual one. One must anticipate communications in many languages, and indeed this should be encouraged, although people such as myself who know about 1.25 languages (.75 English, .50 spread over three other languages) may have to reply in English to a request in another language. Which brings me to my final point: a request for information will get the greatest response, quantitatively, when posted in as many languages as possible (reaching the greatest subset of the people on the network), and English is an obvious choice as one of them. But certainly there should be no requirement for a translation into English.
let the user beware, so to speak ...Dan Dan Bloom Senior Consultant Academic Computing Services York University ========================================================================= Date: 7 March 1988, 18:32:44 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: linguae bellissimae (14 lines) --------------------------------------------------------- From: Sebastian Rahtz estne facile loquare linguae latinae... (i think its been too long since i did latin...) ========================================================================= Date: 7 March 1988, 18:35:21 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Screen messages in French (26 lines) --------------------------------------------------------- From: Keith Cameron I agree with David Owen that there is no absolute need for accents when writing French on the screen, although the absence of an acute or a grave can cause a temporary ambiguity and retard comprehension. I suggest that ' be used before the vowel for the acute (i.e., il a 'et'e) and ` for the grave (o`u as distinct from ou). It is rare that the diaeresis, the circumflex, or the cedilla affects meaning. If a text, however, is to be published, I have found that a number placed after the vowel is efficient, as it allows a subsequent global edit to adapt the text for printing: e.g., 1=acute, 2=grave, 3=circumflex, 4=diaeresis, 5=cedilla. Keith Cameron ========================================================================= Date: 7 March 1988, 18:37:31 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prolog and Lisp (19 lines) --------------------------------------------------------- From: Leslie Burkholder For those interested in comparisons of Prolog and Lisp, here are two references: Cohen, "The Applog language", in DeGroot and Lindstrom, Logic Programming: Functions, Relations, and Equations (Prentice-Hall, 1987).
Warren, Pereira, and Pereira, "Prolog - the language and its implementation compared to Lisp", ACM SIGPLAN Notices 12 (1977) and ACM SIGART Newsletter #64 (1977). LB ========================================================================= Date: 7 March 1988, 18:39:11 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Multilingual messages (27 lines) --------------------------------------------------------- From: amsler@flash.bellcore.com (Robert Amsler) While I can see some benefits to a multi-lingual mailing list, there is also the issue of what we are saying to those readers who do not understand the language in which a given message appears. If you speak two languages, one of which will be understood by everyone in a room and one of which will be understood by only 60% of the people in the room, what does it mean that you decide to speak SOLELY in the language which is understood by only 60% of the people in the room? One might say one was being rude to those who cannot understand that language. If we look at international organizations in which several languages are acceptable, such as the United Nations, there is strict adherence to a policy of translation into each of the languages. If we look at multi-lingual journals, there is often a policy requiring that an abstract in each of the approved languages accompany an article that appears in only one language. ========================================================================= Date: 7 March 1988, 18:41:04 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prologs (57 lines) --------------------------------------------------------- From: Bill Winder I have followed with interest the programming language debate, especially on the Prolog question. I do like logic, and that is why I turned from a shaky acquaintance with Lisp to an enthusiastic acceptance of Prolog.
Turbo Prolog was all I could afford at the time, and it has proved sufficient, up to a point. I still like Prolog, but Turbo Prolog has some tragic flaws, which may ruin our relationship. In particular, there are a number of bugs, especially with the I/O. More important, however, is the question of having a more developed logic: what functions are needed in Tpro to make it more standard or more powerful? Obviously, a typed language has particular constraints, but I have found that typing just means copying sections of code and renaming predicates for different types: there is no rethinking of the problem because of typing, just more keyboarding; at least such has been my experience. I'm willing to take Tpro's speed, even if it means more work. (In a recent Turbo Technix article (the new Borland magazine), Tpro was shown to be faster than Turbo C for at least some functions, such as calculating the mean of a set of numbers.) The fact that it is compiled is not a problem, since you can build an interpreted level if you so desire (i.e., a Prolog interpreter can be built out of a compiled program). That might seem counterproductive, but the advantage is that the interpreted level will be tailored to the specific needs of the application. For the moment, therefore, I can't find a solid argument against Tpro. This may be because I have never sufficiently used a full implementation of Prolog. Has anyone run across a damning piece of evidence against Tpro? I need but a single, convincing argument in order to abandon Tpro (even with its very pleasant development environment) and upgrade to Arity or Mprolog. (Note on Sheffer's bar: though it is true that Sheffer is given credit for it, I believe that Peirce actually proved the reduction --and used the bar, or an equivalent-- some 30 years before Sheffer (I would have to check my figures, but 30 sounds right....).
Don Roberts could certainly set me straight if I have misunderstood Peirce's approach, and the meaning of the cut in the existential graphs. The bar is a binary connector and Peirce's cut is n-ary: it could be written in Prolog as cut([var1,var2,...]), whereas the bar would be written as bar(var1,var2). Both mean "neither, nor"; only, for the cut, the "nor" is iterated over all variables of the list. [N.B. Peirce's cut has nothing to do with the Prolog cut operator].) Bill Winder Utorepas ========================================================================= Date: 7 March 1988, 22:11:42 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Courtesy (33 lines) --------------------------------------------------------- From: Richard Goerwitz I enjoyed Hans Joergen Marker's Danish posting immensely. The language apparently has a lot of German cognates, and I probably would have gotten the gist without a translation. I am only concerned that the effort I would have spent deciphering it would have exceeded by far the effort it would have taken him to write it in English (his English is, of course, excellent). Maybe in cases like this - non-international W. European languages - we should encourage people to post only if they do not feel comfortable writing in something like German or French or English. -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer P.S. I mean no slight against languages that are not generally seen as "international." This implies nothing about their intrinsic richness or character. It just means that your average reader is not going to be as likely to understand them. Fortunately, in cases like Dutch, Danish, etc., the resemblance to German is strong enough to make things much easier.
========================================================================= Date: 7 March 1988, 22:16:27 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Full text; languages (46 lines) --------------------------------------------------------- From: Mark Olsen I looked at a package called Concept Finder by an outfit called MMIMS (566A South York Road, Elmhurst, Illinois, 60126 (312) 941-0090) which does, or claims to do, much of the sophisticated full-text retrieval described. I was supposed to do a review for *CHum*, but, to be perfectly honest, the program ran so poorly on the PC/XT class machines and was so poorly documented that we decided to wait for a future revision. We're still waiting. This is too bad, since on paper it looks GREAT: searches on annotation, text, and references; full Boolean support; proximity searching; and so on. Very impressive. But the system could take up to 2 minutes for searches on very small samples of text. Even worse, once it found the references, it took 20 seconds to write a single screen. The problem SEEMS to be that it is written in a version of MUMPS that is VERY poorly implemented on the IBM-PC machines. The company was considering optimizing the system, but for $1200.00 (retail) I need more than a promise of something that might, someday, run decently. I hope they do, as the overall design and approach are very interesting. I might have a draft of the old review (I'll check my files) written before we decided to let it die until a future revision. A second product is the full-text retrieval system which has been advertised by AIRS, Baltimore, who also market MARCON II. The advertising suggests the same kind of power as Concept Finder, but I have not seen the product in action. I would be interested in hearing about any other full-text systems that HUMANISTS may be familiar with.
On the language issue, would it be safe to assume that a message posted in English, French, or German could be read by the vast majority of HUMANISTS? If so, we might ask those whose primary language is something other than these to provide a translation (or summary) of the posting. Settling on three or four languages which most of us can read will reduce our dependence on English without creating our own electronic Tower of Babel. ========================================================================= Date: 7 March 1988, 22:18:39 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Family tree (23 lines) --------------------------------------------------------- From: Ian Lambert [Note: this is the second attempt to broadcast the following message; the previous try seems to have been lost somewhere.... W.M.] Further to Dominik's message today, I use FTetc, but have one problem with it. It seems to allow only a single marriage, despite the documentation. A second husband is defined as "brother-in-law" to his wife! Similarly, there seems to be some difficulty in entering the child of an unmarried mother. I don't know if PAF allows for these? Ian iwml@ukc.ac.uk ========================================================================= Date: 7 March 1988, 22:25:54 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prolog, Lisp, Snobol4 (20 lines) --------------------------------------------------------- From: Eric Johnson For those interested in AI programming, Michael G. Shafto has written ARTIFICIAL INTELLIGENCE PROGRAMMING IN SNOBOL4 (Ann Arbor, MI: Cognitive Science, U of Mich, TR 47, 1982). Works by Shafto about AI can also be obtained in machine-readable form from Mark Emmer at Catspaw, Inc. He can be contacted via e-mail as EMMER@ARIZONA.EDU.
This topic is covered at ICEBOL3, to be held April 21-22; contact me for more information: Eric Johnson ERIC@SDNET.BITNET ========================================================================= Date: 8 March 1988, 09:00:35 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Query expressions (32 lines) --------------------------------------------------------- From: James H. Coombs I would like to know what people think about the following sorts of expressions. I am finding that people here think I am just giving them a hard time when I tell them that my Intermedia application will accept wild-card characters (a la (3)). So, there are at least these:

1) Boolean. E.g., 'own' AND ('house' OR 'car')
2) Contexts. E.g., 'own' WITHIN 5 'sell'
3) Regular. E.g., '[Ss]ee*'

Thanks. --Jim Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet ========================================================================= Date: 8 March 1988, 09:04:16 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Speaking to 60% in the room (27 lines) --------------------------------------------------------- From: Richard Goerwitz Perhaps if we allowed freer use of languages other than English, we would be allowing more people to get online. If a writer feels comfortable writing in English, then for heaven's sake, write in it! For those who don't, or who want to broaden our horizons a bit, better to post in some other language than not at all (i.e., better to post to 60% of the folks online here than to 0%). Looking at my English, I wonder if I might have been better off posting in some other language! -Richard L.
Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ========================================================================= Date: 8 March 1988, 09:10:57 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Re: searching for a search engine (182 lines) --------------------------------------------------------- From: James H. Coombs Robin Cover posted a very interesting query about full-text retrieval. I haven't had much time to think about it, but a couple of possibilities come to mind. 1) Battelle's BASIS? Someone built a DM on top of BASIS? DM was recommended to me by someone from Datalogics and someone else from McGraw Hill. Datalogics is planning to use DM for the Random House Dictionary. I think it may require a MicroVax, but I would be very happy to hear otherwise. 2) ARRAS, ???? being used now by ARTFL. I don't have more information on it. ARTFL is a French literature project; others will be able to give addresses, etc. ARRAS probably requires a mainframe. The new system runs on workstations, I believe. Probably requires Unix. Anyway, worth a phone call. The software was developed with a group at U Chicago. 3) *I'm* looking for a relational database management system that supports full-text fields. I would require some of the same things that Robin asks for. Among other things, I should be able to supply a function that returns tokens when the dbms indexes the contents of a text field. (Thus, the application gets to decide what a word is or, more generally, what units to index.) Hastily, comments on Robin's requirements: > (1) Literary texts should be "understood" by the system in terms of the > individual document structure, as indicated by markup elements. Are you imposing any storage requirements? For example, I can parse entries in the American Heritage Dictionary (AHD) based on the markup. I can then generate a relational database 1) containing the dictionary or 2) containing search keys for retrieval. 
In either case, the structure of the database is the same. I can then get answers to questions such as "Give me a list of all words whose pronunciations are dependent on their part of speech" and "Give me a list of all music terms appearing in the definitions of verbs" and "For each author of a citation, tell me how many times his/her work is used to illustrate the use of each part of speech; display the results with the authors with most citations first." The point is that I can use a relational dbms to capture the structure of the data. Designing that database is not trivial, however, because determining the structure of the data is not trivial. I also have to write the program that parses the entries and generates the files ready for importing into the dbms. Furthermore, this applies to the AHD. It might be possible to determine a universal dictionary structure for all current dictionaries, but what about future dictionaries? Wouldn't we need a generative grammar? Having such a grammar would not be the same as having a structure that would be adequate for all dictionaries. Yes, it would save a ton of work, but I would still have to define a new database structure, wouldn't I? (To a large extent, I am assuming that the structure of higher levels of text is not as constrained as sentences are.)

Similarly for literary texts. What is the structure of the text? How could a system know all of the possibilities ahead of time? People are out there analyzing in new ways, discovering new entities. People are out there creating new kinds of text. What do we do? SGML. That's a start at least, and a big one. Does it necessarily give you all dominance and precedence relationships? Would an SGML prolog for the AHD tell me that senses inherit the usage labels of their parents? I don't think so. It would tell me where a usage label is permitted, but it would not tell me how to determine exactly what properties apply to what entities.
(This *could* be done, but it's not the goal of SGML to provide such information. E.g., items in a list may be enumerated, but that does not cause the tokens within those items to inherit that enumeration---the property applies to the parent but not to the child.) So, I think that Robin will require a meta-markup language that is richer than SGML. In fact, SGML does not even specify a rigorous way to state that is the markup for a poetry quotation. The person who defines that tag includes a comment, but the comment is not parsed and validated. Evidence? (from ISO 8879-1986(E))

A. Annex B. Basic Concepts. (although "This annex does not form an integral part of this International Standard.") Paragraph B.4.2.1. Content Models. For each element in the document, the application designer specifies two element declaration parameters: the element's GI and a content model of its content. The model parameter defines which subelements and character strings occur in the content. For example, the declaration for a textbook might look like this: Here "textbook" is the GI whose content is being defined, and "(front,body,rear)" is the model that defines it. The example says that a textbook contains the GIs "front", "body", and "rear". (The GIs probably stand for "front matter", "body matter", and "rear matter", but this is of interest only to humans, not to the SGML parser.) See what I mean?

B. But what's a "Quotation"? And if we know what that is, how do we know that it's the same thing as a "Quote" or a "quote"? What if I define a poetry quotation element? And so on.

Well, I confess to not having provided evidence in support of the assertion that SGML does not enable one to determine inheritance of properties. Was I wrong? I've only shown that people may have trouble determining how to relate the concept "poetry quotation" to the entities tagged .
They will have to read the prolog or the documentation and supply the query engine with the tag, or they will have to inform the query engine that when they say "poetry quotation", they mean "all entities tagged ". I want to half take it back. SGML does not provide information about property inheritance, but one can achieve the same effect by listing the parent elements that might have the property. So, I can't say something like "Give me a list of all slang verbs"; instead, I have to say "Give me a list of all verbs where some sense of the verb is slang or the head is slang". (Usage labels in the AHD can occur at sense divisions, part of speech divisions, or before all divisions.) But this means that *I* have to know a lot about the structure of the document and the resulting database. The system has no way of knowing that children inherit usage labels. Are literary documents trivial in comparison? I believe they are much more complicated, but I can't come up with a satisfactory example.

> (2) The database structure must be capable of supporting annotations
> (or assigned attributes) at the word level, and ideally, at any higher
> textual level appropriate to the given document.

This should not be a problem. I guess you want the application to make it easy to associate user text with blocks of subject text. I suppose that you also want to assign keywords to your text to make it easier to retrieve, but that you don't want to sink to the level of telling the dbms to create a table that contains X and that is associated with Y. I would think that you might want a table for linguistic information, and another for something else, but perhaps not. So do I understand your system at all, Robin? Have I been overly pessimistic? How willing are you to get your hands dirty? Should a scholar be a database designer? Should this be a system that is all primed for literature? Might not be much good for anything else?
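[Incidentally, the ISO 8879 passage quoted earlier lost its actual element declaration in mail transit, as angle-bracketed text tends to do. Purely as an illustrative reconstruction, not a quotation of the standard, a declaration giving the GI "textbook" the content model "(front,body,rear)" would look something like this:]

```sgml
<!-- Illustrative reconstruction only, in the style of Annex B of
     ISO 8879: "textbook" is the generic identifier (GI), and
     "(front, body, rear)" is its content model -->
<!ELEMENT textbook (front, body, rear)>
```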
Around here, you know, people don't believe in anything as powerful as Boolean expressions. Regular expressions and set-level operations are for power users only. I wonder how many Humanists want or need such capacities. (I will post another note with that query, since few will get this far.) --Jim

Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet
=========================================================================
Date: 8 March 1988, 10:27:01 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Logbooks and how to get them (26 lines)

The logbooks into which all HUMANIST messages are automatically put are not where you've been told they should be. According to the "Guide to HUMANIST" you should be able to see them listed in HUMANIST FILELIST among all the other files on our newly inaugurated file-server. Due to a temporary problem with the ListServ software, however, they do not appear there. You can still retrieve them, using the standard GET command, but you have to know what they are called. As the Guide explains, all logbooks are named HUMANIST LOGyymm, where yy = the last two digits of the year, and mm = the number of the month. Thus the log for February 1988 is called HUMANIST LOG8802. On behalf of our disobedient servant, I apologize for any inconvenience.

Willard McCarty
mccarty@utorepas
=========================================================================
Date: 8 March 1988, 10:48:22 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: C (and dBase) as languages (50 lines)
---------------------------------------------------------
From: Randall Smith <6500rms@UCSBUXA.BITNET>

I had been refraining from the discussion on various programming languages because I believe that there is no "best" language; the "best" language varies from task to task, even from person to person.
However, I could not let Jeffrey Gillette's comments on C slip by without comment. I agree that the language itself has no I/O capability apart from its libraries, although I am not sure why that is so important. My guitar has no I/O capability without its strings, but this does not concern me very much since I always use it with strings. I pick the proper weight strings based on the type of music I intend to play, the condition of my fingers, etc. This is true of C as well. The libraries, as packaged by Microsoft at least, are easy to use and relatively fast and efficient; they are also written for the different memory models which the segmented architecture of Intel chips requires. Furthermore, I have written several libraries in assembly language which perform operations that were not included by Microsoft. I can also choose (for a price, of course) from a wide variety of commercial libraries to perform other functions. Libraries of this type are also available for Turbo Pascal and other languages. This gives one great flexibility in choosing the libraries which one needs without being burdened by extra baggage. I have no doubt that dBase could be used to write a word processor, or even a text parser. I can also play bass guitar on my regular guitar by tuning the strings down, but this is not an elegant solution. The difference between programming in C and dBase is elegance. Certainly other languages will be more elegant than C for specific tasks, but I appreciate C's flexibility and general applicability. I also find that different word processors are better for different types of writing. However, as a PhD student in Classics, I do not have the time to learn a multitude of word processors or a multitude of languages, as much as I would like to. Therefore, I have to pick a language, and a word processor, which can perform all my tasks. 
Once I have chosen, I find it easier to find a way to solve a new problem in the old language than to start a new one from scratch.

Randall Smith
=========================================================================
Date: 8 March 1988, 11:04:07 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Re: English and French (21 lines)
---------------------------------------------------------
From: Hartmut Haberland

Som svar paa Goerwitz' og Cover's bidrag til diskussionen, og som lille forsoeg paa at goere alvor ud af de smukke forslag som hidtil er blevet fremsat (paa engelsk, vel at maerke), vil jeg starte med at bidrage noget paa dansk - et sprog, som i parentes bemaerket, ikke er mit modersmaal, men et af mine daglige arbejdssprog. Problemet er selvfoelgelig: hvem kan laese det her? Og hvis jeg har et oenske om at blive hoert: er der nogen der lytter? Der maa vel findes en eller anden Kierkegaard-forsker rundt omkring i verden som kan laese dansk. Mon hun eller han findes paa HUMANIST? Med venlig hilsen og i haab om et svar, Hartmut Haberland (ruchh@neuvm1, ruchh@vm.uni-c.dk)
---------
In reply to Goerwitz's and Cover's contributions to the discussion, and as a small attempt to act on the fine proposals put forward so far (in English, be it noted), I will start by contributing something in Danish - a language which, incidentally, is not my mother tongue but one of my daily working languages. The problem, of course, is: who can read this? And if I wish to be heard: is anyone listening? There must surely be some Kierkegaard scholar somewhere in the world who can read Danish. Might she or he be found on HUMANIST? With kind regards, and in hope of a reply, Hartmut Haberland (ruchh@neuvm1, ruchh@vm.uni-c.dk)
=========================================================================
Date: 8 March 1988, 11:07:26 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Languages on HUMANIST (20 lines)
---------------------------------------------------------
From: Birgitta Olander

Det sl}r mig pl|tsligt att t ex vi i Norden kan ha en egen liten mail-grupp i HUMANIST, liksom andra humanister som har ett exotiskt spr}k gemensamt.
---------
It occurs to me that humanists in the Nordic countries, for example, might have our own mail-group within HUMANIST. The same is true for others with an exotic language in common. But is it desirable?
Birgitta Olander, LIBLAB
Dept of Computer Science, Linkoping University, Sweden
=========================================================================
Date: 8 March 1988, 13:38:59 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Flaws in Turbo Prolog (19 lines)
---------------------------------------------------------
From: Sebastian Rahtz

Surely the fact that a new predicate cannot be dynamically defined in Turbo Prolog is enough to rule it out of court as a full Prolog? I cannot read in from my user "loves" "sebastian" and "wagner" and 'assert' a relationship between them. It's been a while since I used Turbo Prolog (and that was only for a few days before I found this flaw) - would anyone care to correct me?

sebastian rahtz
=========================================================================
Date: 8 March 1988, 13:40:01 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Languages (23 lines)
---------------------------------------------------------
From: Sebastian Rahtz

I think people are getting too serious about this! We don't want 'HUMANIST-recognised languages', and items in Hebrew rejected by Willard. Why not just keep it free-form as it is? People should write in the language they feel will be read by the intended audience. Shame on anyone who isn't prepared to have a go at any language that comes along! Anyway, it's all academic, since our terminals are almost all (I assume) ASCII without accents etc, so most interesting languages haven't a hope of coming over as their author intends them. Why SHOULD Greek people transliterate for us, just because computers were developed by arrogant Westerners? No compromises, please.
sebastian rahtz
=========================================================================
Date: 8 March 1988, 13:40:53 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Multilingual messages (42 lines)
---------------------------------------------------------
From: Hans Joergen Marker

I would like to express my agreement with Robert Amsler where he speaks about the disadvantages of somebody using a language understood by only a fraction of the audience. On the other hand, I disagree that it should be regarded as rudeness if somebody spoke, say, Spanish in a predominantly English audience. My point is rather a practical one. Over the last couple of days I have given up hope that the rest of you will learn Danish, but imagine for one moment that you spoke a native language unknown to the majority of the attendants of most international conferences. Then you would first of all be forced to acquire some capability in the major language (at present English, but to keep your imagination vivid, dig up my earlier note on this subject and imagine for a while that it was Danish). Secondly, if you are attending bilingual conferences, you have the choice between learning the third language or losing half of the conference. If your natural language is one of the minor languages, to the degree that it is improbable that you will meet anybody abroad who speaks the same language as you, you will always be at the disadvantage of being forced to know one language more than the others. This means that you will always appear a bit more restricted, linguistically speaking, than other people not having that difficulty. Now, before you start crying for the unfortunate Scandinavians and other people from the minor nations of the world: there are advantages in being a native speaker of a flexible language like Danish.
Though in order not to arouse your envy I shall refrain from counting my blessings at this place. Yours Hans Joergen Marker. ========================================================================= Date: 8 March 1988, 13:41:52 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Immortality (53 lines) --------------------------------------------------------- From: Peter Houlder Sender uknet-admins-request@UKC.AC.UK Resent-Date Tue, 8 Mar 88 13:04:36 GMT Resent-From Sebastian Rahtz From Peter Phillips Date Mon, 7 Mar 88 10:13:11 GMT Peter, I've just received some information regarding a little boy, who users of the NET might be interested in helping. If you think it would be OK, could you mail a copy to all net users, or post it in the news ? Here is the text of the letter I received. ====== David is a 7 year old boy who is dying from Cancer. Before he does, he has a dream of one day being in the Guinness Book of Records for the person who has had the most postcards sent to them. If you would like to help David achieve his dream, all you have to do is send a postcard to David as soon as possible. Send to: David, c/o Miss McWilliams, St Martin de Porres Infant School, Luton, Bedfordshire. Don't forget to sign your name, - -- Pete Phillips, TEL : 0443-204242 Ext: 6552 Quality Control Laboratory, TEL : 0443-202641 (Direct Line) East Glamorgan Hospital CIX : peteqc Church Village, UUCP: ukc!egh-qc!pete SOUTH WALES CF38 1AB ------- End of Forwarded Message ========================================================================= Date: 8 March 1988, 14:53:19 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Languages (31 lines) --------------------------------------------------------- From: John Roper I was astounded that the subject of language caused such a discussion. I live in a multilingual society here in the UK. Almost everybody locally speaks English English and "Norfolk". 
In other areas many other languages are popular, such as those from the Indian subcontinent. Professionally my jargon changes depending on whether I am speaking to my computer, a computer scientist or somebody from Art History. If you wish to communicate with another individual, you use the language most easily understood unambiguously by both of you. On occasions when speaking with a Finnish colleague, we converse in pidgin French/German. In the HUMANIST context, with a broadly North American audience who are obviously parochial in outlook, American English does look like the obvious language to use if you wish to communicate easily to a maximum audience. Nobody on the other hand should be barred from using their natural language or be expected to translate their thoughts into another. However, I suspect the potential audience for a Russian offering would be strictly limited.

John Roper (S200@CPC865.UEA.AC.UK)
=========================================================================
Date: 8 March 1988, 14:55:03 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Query expressions (33 lines)
---------------------------------------------------------
From: Sebastian Rahtz

> I would like to know what people think about the following sorts of
> expressions. I am finding that people here think I am just giving them
. .
> 3) Regular. E.g., '[Ss]ee*'

I can see why people are not happy with this, because in the example as given the query allows for both Seeds and seeds to be retrieved. But the 'man in the street' thinks that should happen anyway (well, my students do, anyway) so s/he gets aggrieved at your sharp practice. The example [ABC]ee* is equally upsetting because there is no old-world equivalent - we are not used to expressing things in parallel, as it were. As for [A-Za-z]* ..... I find the example of the regular expression fine, but that's because I am a Unix user; otherwise it could easily bother me.
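[The surprise Rahtz describes is easy to demonstrate in a quick sketch (modern Python, purely for illustration): read as a true regular expression rather than a shell-style wildcard, the '*' in '[Ss]ee*' means "zero or more of the preceding 'e'", and an unanchored search matches anywhere inside a word.]

```python
import re

# Coombs's example pattern: 'S' or 's', then one 'e', then zero or
# more further 'e's. An unanchored search finds it inside longer words.
pattern = re.compile(r'[Ss]ee*')

words = ['See', 'seeds', 'Seeds', 'Se', 'sea', 'ash']
print([w for w in words if pattern.search(w)])
# prints ['See', 'seeds', 'Seeds', 'Se', 'sea']
```

[So not only are both 'Seeds' and 'seeds' retrieved, but so, perhaps more surprisingly, are 'Se' and 'sea'.]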
But I suppose the moral is that if people are going to have to use regular expressions, then the least we can do is have a universal syntax, and the Unix one seems good to me - it irritates me like mad that the two versions of SQL I use have different wild-card characters! sebastian rahtz ========================================================================= Date: 8 March 1988, 14:56:17 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Searching for a search engine (18 lines) --------------------------------------------------------- From: cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Anecdotal evidence: When I talk about the glories of text searching to non-computer users they remain singularly unenthusiastic. When I suggest that such searches could be semantic in nature rather than string-oriented, their ears perk up. Technically I do not know the best way to accomplish this, although I suspect that some sort of thesaurus would make it possible. ========================================================================= Date: 8 March 1988, 14:57:14 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: An international computer glossary --------------------------------------------------------- From: Roberta Russell HUMANISTs interested in a multi-lingual glossary of computer terms will find one in the 1987 edition of the Directory of Computer Assisted Research in Musicology, published by the Center for Computer Assisted Research in the Humanities, Menlo Park, California . It gives equivalent terminology in English, French, German and Italian (sorry, no Danish or Norwegian......) 
Roberta Russell
Oberlin College
=========================================================================
Date: 8 March 1988, 20:11:19 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Talking at one another (19 lines)
---------------------------------------------------------
From: Brian Molyneaux

Why not simply let everyone type in the language of their choice? I am guessing that all 'humanist' participants will be able to correctly assess what their offerings will look like and what their readership will be in any given language. 'Standards' have a way of creeping into all kinds of open communication - the next thing you know, someone might complain about Sebastian Rahtz's jokes.......

Brian Molyneaux (AYI004@UK.AC.SOTON.IBM)
=========================================================================
Date: 8 March 1988, 20:13:49 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Languages and diacritics (35 lines)
---------------------------------------------------------
From: K.P.Donnelly@EDINBURGH.AC.UK

Is anyone out there using the new ISO standard, IS 8859/1? This is an 8-bit extension to ASCII which includes the accented characters needed for practically all languages with Latin-based alphabets, in both upper and lower case, as well as other useful things such as "pound", "cent", "half", "superscript 2", and "degrees". It has been an ANSI standard for some time, and is basically the same as the "DEC multinational character set" on VT220 terminals, which will no doubt become the de facto standard like VT52 and VT100 before them. So it looks like being the answer to the problem of diacritics. The only snag is that there are all sorts of obstacles in the way of 8-bit working at present.
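[The snag can be made concrete with a small sketch (modern Python, for illustration only): every accented IS 8859/1 character has a code 128 above an ASCII code, so a 7-bit mail path that zeroes the top bit silently substitutes a wrong but perfectly legal ASCII letter.]

```python
# Sketch of what a 7-bit gateway does to IS 8859/1 (Latin-1) text:
# zeroing the eighth bit maps each accented character onto the
# ASCII character 128 places below it.
def strip_eighth_bit(data: bytes) -> bytes:
    return bytes(b & 0x7F for b in data)

original = 'café ångström'.encode('latin-1')
print(strip_eighth_bit(original).decode('ascii'))
# prints 'cafi engstrvm'  ('é' 0xE9 -> 'i' 0x69, 'å' 0xE5 -> 'e' 0x65,
#                          'ö' 0xF6 -> 'v' 0x76)
```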
Mail messages containing 8-bit characters work OK within the computer I am using, but when I tried bouncing them around the JANET network in the UK, or even between computers with different operating systems on the local Edinburgh University network, the eighth bit got stripped. What is happening in "foreign" countries? Is it chaos at present? It looks from Joergen Marker's message as if the Danish "national version" of 7-bit ASCII is being used there. Does this cause confusion within the country as well as outside, with curly brackets appearing on screens and printers?

Kevin Donnelly
=========================================================================
Date: 8 March 1988, 20:28:50 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Common LISP/INGRES interface; course on hypertext (53 lines)

Extracted from IRLIST-L, with thanks

Date: Fri, 4 Mar 88 16:08:50 PST
From: "Jeffrey C. Sedayao"
Subject: Common LISP / INGRES interface available

Announcing the availability of CLING, a Common LISP INGRES interface. CLING is a Common LISP package that permits a user to manipulate and query an RTI Ingres database. Databases can be created and destroyed, and tuples appended and retrieved, all with Common LISP functions. Versions for Sun Common LISP (Lucid) and Franz Allegro Common LISP are available. CLING can be retrieved via anonymous FTP from postgres.berkeley.edu.

Jeff Sedayao
..ucbvax!postgres!sedayao
sedayao@postgres.berkeley.edu
------------------------------
Date: Fri, 4 Mar 88 16:55:13 EST
From: Ben Shneiderman
Subject: Hypertext Course . . .

The University of Maryland University College Center for Professional Development presents HYPERTEXT: A NEW KNOWLEDGE TOOL, a 3-day course taught by Ben Shneiderman, Charles Kreitzberg, Gary Marchionini, and Janis Morariu, May 9-11, 1988. This course presents hypertext systems and concepts in order to facilitate the development of hypertext applications.
Participants will learn and use available systems, understand implementation problems, recognize which applications are suitable, and design knowledge to fit hypertext environments. [More information available on the file-server, s.v. HYPRTEXT COURSE.]
=========================================================================
Date: 8 March 1988, 20:33:02 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Languages (46 lines)
---------------------------------------------------------
From: Michael Sperberg-McQueen

I apologize to Hartmut Haberland for replying in English, but the answer to his question is yes, there are HUMANISTS other than those in Scandinavia who read Danish. (Not all of us are even Kierkegaard scholars!) Surely no one should feel apologetic for posting notes or notices on HUMANIST in languages other than English, any more than they should feel apologetic for publishing articles in other tongues. The obvious advantages of having one's note more broadly understood will suffice to encourage those who can, to write in 'common' languages; we certainly don't need, as a group, to create any further rules or apply any further pressure. It would be a shame to lose the potential contributions of all those who read English but are shy about writing it. (I feel that way about my Danish; why shouldn't someone feel that way about their English?) If a sender wishes a note to have a broader distribution (or at least a broader readership) than is possible in the original language, why should not other HUMANISTS supply the translation? May I propose that, *if* HUMANIST needs conventions governing the language of contributions (and I am not sure we do), we adopt these:

1 contributions may be made in any language chosen by the sender.
2 a translation into another language may be appended to any message by the sender, if desired

3 if the sender wishes for a translation, but cannot supply it personally, a request for translation into a more commonly understood language may be made part of the message; any HUMANIST able and willing to undertake the translation is then encouraged to do so (and to sign the translation)

Jeg ville gerne sige det allt paa Dansk, men jeg kan ikke skriver Dansk saa godt. ("I would have liked to say it all in Danish, but I cannot write Danish so well.")

Michael Sperberg-McQueen, University of Illinois at Chicago
=========================================================================
Date: 8 March 1988, 20:35:09 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Help needed re SPIRES and PRISM (29 lines)
---------------------------------------------------------
From: John J. Hughes

Dear HUMANISTS, Someone involved in producing specialized musicology databases recently wrote and asked what I know about humanities data bases . . . that are (a) stored in SPIRES (or a SPIRES-type data base such as PRISM) and (b) can be accessed or searched remotely by users other than the data base's creators or administrators. Can anyone help with this request? Thanks.

John
John J. Hughes
XB.J24@Stanford
=========================================================================
Date: 8 March 1988, 20:43:44 EST
Reply-To: MCCARTY@UTOREPAS
Sender: HUMANIST Discussion
From: MCCARTY@UTOREPAS
Subject: Relational databases (47 lines)
---------------------------------------------------------
From: Sebastian Rahtz

I'd appreciate a fuller explanation of what Jim Coombs means by a relational database that "supports full text fields"; it sometimes seems to me that people fail to differentiate between the database back-end and the front end.
If Jim has to define a database structure in which every word in a sentence is held in a separate tuple with information about it, it will not be directly usable, but someone can write a front-end that makes it look sensible. And what's wrong with "every word a tuple", aside from possible considerations of efficiency? My point is that I think Jim wants a full-text front-end, not a full-text database.

Sebastian Rahtz

PS Since Jim re-opened the SGML quagmire, can I pose this one to the community? Is an em-dash "punctuational markup" (Coombs et al. in ACM Comm Nov 87), or is it presentational markup? I write an aside---as indeed I may do at any point---and indicate it with em-dashes, but if I were a Frenchman or a German or a Tamil speaker, might I not use a different typesetting convention? Ergo, em-dashes must be replaced by descriptive markup, to indicate "parenthetical aside", must they not? Or is the choice of em-dash, brackets, footnote or whatever a genuine function of the writer's meaning, in which case it _is_ punctuational markup like a full stop? Consider also this: a short list such as "I like a) cats b) food c) sex" appropriately appears in-line, and if I were an SGML purist I'd have done it with descriptive markup. If I went back and expanded that list so that each item was more fully described, it should probably be expanded to a full 'enumerated list'. But my intention remains the same, to list my favourite things. So is the change from 'inline list' to 'full list' up to me or my designer? Is it a change in intent or presentation? There seems a horrid possibility that the SGML purist would tell me that the software should examine my list and make its own decision based on the length of the list. Would anyone care to comment?
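[Rahtz's "every word a tuple" scheme can be sketched in a few lines (SQL via Python's sqlite3 here, with invented table and column names, purely for illustration): once each word is a row keyed by document and position, Coombs's earlier "'own' WITHIN 5 'sell'" proximity query becomes an ordinary relational self-join.]

```python
import sqlite3

# "Every word a tuple": one row per token, keyed by document and
# position (table and column names are invented for this sketch).
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE token (doc INTEGER, pos INTEGER, word TEXT)')

docs = {1: 'I own a house', 2: 'They sell the car they own'}
for doc, text in docs.items():
    con.executemany('INSERT INTO token VALUES (?, ?, ?)',
                    [(doc, i, w.lower()) for i, w in enumerate(text.split())])

# The proximity query "'own' WITHIN 5 'sell'" as a relational self-join.
hits = con.execute("""
    SELECT DISTINCT a.doc FROM token a JOIN token b ON a.doc = b.doc
    WHERE a.word = 'own' AND b.word = 'sell' AND ABS(a.pos - b.pos) <= 5
""").fetchall()
print(hits)
# prints [(2,)]
```

[Whether this counts as a full-text database or merely a relational back-end behind a full-text front-end is, of course, exactly the question being raised.]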
========================================================================= Date: 8 March 1988, 20:47:34 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Languages, natural and others (81 lines) --------------------------------------------------------- From: Dr Abigail Ann Young Subject: languages, natural and others 1. What can you do in C you can't do in .... When it became obvious that someone at our project was going to have to settle down and write some computer programmes to manipulate strings of codes and I was elected, I wrote them (or tried to write them!) in BASIC, mostly because it was there, already bought and paid for with our IBM PC. They were very clumsy programmes, and they took a long, long, looong time to run. Even after I learned enough to write less clumsily, they still took a long time to run. Then I heard about a "real" programming language, which was very hard, called C, in which one could write programmes which ran FAST, and I persuaded my seniors to buy a C compiler and two books on the subject on the strength of that one feature!! But, it worked; it turned out to have been an intelligent, efficient choice, and despite the fears born of ignorance, it wasn't very hard to learn. The C versions of the old BASIC programmes ran infinitely faster; as our needs have changed, and the coding strings were modified, the C programmes proved quite simple to change. And I became very fond of C (one's favourite programming language, like one's favourite word processor, is chosen on the basis of indefinable criteria, like one's choice of chocolate over vanilla ice cream). Like Latin, my favourite natural language, it has an elegant, structured, precise syntax. And it encourages even a novice to produce the most elegant solution to a given problem. Now, this is a very subjective judgement, and I'm sure there are plenty of other languages which do the same thing, I just happened to learn C first. 
I suspect, in the end, most choices of a language are equally subjective. I do worry about my affection for C in light of the automobile definitions posted earlier: I certainly don't think of myself as macho!!!

2. Unilingual vs. bi- or multi-lingual communications

I just want wholeheartedly to support those who have spoken in favour of encouraging HUMANISTS to post in languages other than English, and to do so with whatever diacritical marks they feel appropriate. I wonder how those who decried the need for accents would feel if asked to return to the days of ALL UPPER CASE TERMINALS AND 'DISPLAYS' -- AFTER ALL, IT DOESN'T INTERFERE WITH COMPREHENSION, DOES IT? I don't want to be part of a North American unilingual ghetto (after all, I live in a bilingual country!), although to be fair, I don't think anyone was seriously suggesting that HUMANIST be North American (ie, US Websterian) English only. But those of us who do not live in Europe or Asia tend to fall into the trap of thinking that the relatively small number of "internationally used" scholarly languages, English, French, German, are all there is. We don't very often encounter even the other Romance languages (except Spanish in some parts of the States, I suppose) in scholarly discourse, far less Scandinavian and Slavic languages, and as for the languages of Asia! And we don't know those languages, for the most part. (The exceptions are, of course, the people who teach those other languages.) I think what is really worrying the people who expressed doubts is the fear of missing out on an interesting conversation! I will certainly be running that risk if the conversation goes outside of English and French, but I would rather take the chance than create an atmosphere in which people feel they must communicate in a foreign language (eg, English or French only) or be socially ostracised by HUMANIST.
An alternative, of course, is to return to a dead language (such as Latin) for the language of scholarly discourse, thereby preferring no one current natural language over another.... Ego huic proposito studeam! Abigail Young young at utorepas ========================================================================= Date: 8 March 1988, 20:50:53 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: The COBOL of natural languages.... (20 lines) --------------------------------------------------------- From: Richard Giordano Maybe one solution in a multilingual environment is to translate a message into Esperanto. Weren't most of us in Esperanto clubs in high school? More seriously, I do agree with Sebastian Rahtz who argues for the most open environment we can possibly achieve on the Humanist. Richard Giordano RICH@PUCC ========================================================================= Date: 8 March 1988, 20:52:09 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: hvem kan laese det her (39 lines) --------------------------------------------------------- From: Richard Goerwitz Well, I'm not involved in Kierkegaard research, but I know a Germanic language when I see one ("svar" or not!). Please, don't shy from posting. Reading your posting was just good plain fun (I'm in Semitic languages, and so it offered an interesting diversion...). It's sad to see Americans labeled as "parochial," as John Roper so baldly called us. We are linguistically isolated here not on account of some great moral failure, but on account of a thing called the Atlantic. The size of our country doesn't help, either. Admittedly, we are not the most open-minded of societies. Now that we are not dominating the world economy - in fact, we seem to be in a bit of trouble - we will surely be looking outward more to see what the rest of the world has to offer. Perhaps some day we will come up to Mr. 
Roper's standards of pluralism! NB: Sterling Bjorndahl, Robin Cover, and I were actually some of the first people to slip into this discussion. In our postings you'll find nothing that would imply a desire to see American (not English) become a sort of standard. Please, don't make us insist on something we haven't been insisting on! For all I care, we can start posting in Latin. Dead languages always solve the problem of linguistic chauvinism :-). -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ========================================================================= Date: 8 March 1988, 21:00:22 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: First I plead, then I agitate, question, and comment (42 lines) The volume of mail generated by HUMANIST is getting large, nicht wahr? Yesterday, in fact, I brought down a machine-to-machine data channel here by sending out about 6 to 8 messages in rapid succession to our 210+ membership. So, I've been told that I must not play such pranks again during the day. The problem is that I must send out several during the day or there will be nothing left of my evening. So, I'm thinking of ways to make my labour more efficient. Thus follows an agitated exclamation, and a question to you; finally a comment. Please, *please* address all messages to HUMANIST to this address: HUMANIST@UTORONTO >>not<< HUMANIST@UTOREPAS or MCCARTY@UTOREPAS. As I explained earlier, the latter error leads to a bad habit that soon will cause grief. The former causes the local postmaster work, since the message goes into his account first (where it may rest for days, or minutes, depending), then it causes me extra work, because I must peel away the commentary and various bits of garbage that get attached to the message. I may have to drag out the garbage can soon and designate it my dead letter office. --- Is the volume of mail getting too much for you or are you enjoying it? 
If the former, then may I suggest that we do our best to stick to a single topic until it (the topic!) collapses? Any other suggestions? --- In the matter of languages, vox populi vox Dei, and cheers to the populus! From the very beginning we have tried with what strength we have to make HUMANIST truly international. I'm glad to see that you all agree. Willard McCarty mccarty@utorepas ========================================================================= Date: 9 March 1988, 00:31:27 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Prophetic warning or survival of the fittest? (19 lines) [The following is extracted from a recent resignation from HUMANIST. W.M.] I find the volume of communication on HUMANIST too much to bear. After skipping one day of checking my mail, I find 71 messages waiting for me, all of them HUMANIST. I simply don't have time for this much chat and reading. I'm sorry, but please remove my name from the HUMANIST distribution list. ========================================================================= Date: 9 March 1988, 00:36:02 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Speaking in tongues (19 lines) --------------------------------------------------------- From: Norman Zacour I'm sure that Michael Sperberg-McQueen has summed up the opinion of all of us quite neatly, and his suggestion is most agreeable, viz., that those of us who wish might ask other Humanists to supply a translation - just so long as no one has to translate Sebastian Rahtz' Latin! Might there be some relationship between the vigour of this discussion and Willard McCarty's plaintive cry for help? 
Norman Zacour (Zacour@Utorepas) ========================================================================= Date: 9 March 1988, 00:38:55 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Volume of HUMANIST (46 lines) --------------------------------------------------------- From: dow@husc6.BITNET (Dominik Wujastyk) I'm afraid I am finding the volume of recent HUMANIST mail oppressive, and I am getting very free with the "d" key, which means that if someone says something interesting after about the second para, I don't get to see it. I feel like a relative newcomer to HUMANIST (although in the world of computing a month is a decade), so what I have to say below may have been thrashed out already. If so, forgive me. I also subscribe to TeXhax, as I expect other HUMANISTs do, and although it too can get pretty voluminous, I feel much better about it, and not oppressed. For those who don't know, Malcolm Brown collects about 20k of letters into a single document, adds a header with a list of the subject headers, date, issue number, and occasional editorial comments, and sends it out. It appears on average once or twice a week. It feels much like receiving a magazine or the latest issue of a journal: a little thrill of pleasure in anticipating what people are now saying, and what is new. I also find it *much* easier to skip stuff that is not of interest, because of the "contents page" at the beginning, and because I read it with an editor or lister which is much faster than paging through mail. In contrast, I always feel I'm wading through HUMANIST. I suppose I could just create my own HUMANIST magazine by saving fifteen messages in a file before reading it, but it still wouldn't have the contents header (some Icon buff could knock out a prog to do that, no doubt). But does anyone else share the leaning I have for the TeXhax type of thing? 
A certain spontaneity would perhaps go -- perhaps not a bad thing --- oops, delete delete delete ... Dominik bitnet: user DOW on the bitnet node HARVUNXW arpanet: dow@wjh12.harvard.edu csnet: dow@wjh12.harvard.edu uucp: ...!ihnp4!wjh12!dow ========================================================================= Date: 9 Mar 88 09:39:46-EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Date: 9 March 1988, 09:34:08 EST From: MCCARTY at UTOREPAS To: HUMANIST at UTORONTO cc: GROA13 at EMAS-A.EDINBURGH.AC.UK Subject: A sick joke? (20 lines) I have received an angry message asserting that there is no little boy dying of cancer who wants to get his name in the book of records by receiving more postcards than anyone else. I have no way of verifying either the original request or this allegation, but the fact that a hoax of such a kind has occurred before may lend weight to the latter. So, I leave it to your judgment how to respond. Willard McCarty mccarty@utorepas ========================================================================= Date: 9 March 1988, 21:37:05 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Too much! and more (81 lines) Dear Colleagues: We seem to have reached a turning point with HUMANIST. Of the dozen or so respondents to my note about the volume of mail, only one expressed approval of the quantities we have all received in the last few days. Some have said, simply, "Too much!" Some have added a wistful regret that HUMANIST might die or that they might soon be forced to drop out. Some have made suggestions about how the problem might be averted. For all of this I'm grateful. What a fascinating social experiment! There are two problems, yours (receiving and reading the mail) and mine (reading, processing, and sending the mail). I know mine (about 2 hours a day doing nothing but); I can imagine yours. 
What other solution to both can there be but to reduce the daily volume of mail? The number of messages you receive could be reduced by me sorting and bundling the messages according to topic, removing the sometimes voluminous headers, and sending these bundles out. One of you suggested that there might be "digesting software" that would automate this process for a VM/CMS system. If so, *please* let me know about it. If someone is willing to write the exec according to the specifications I could supply, *please* let me know -- and may he or she be blessed forever! Alone, unaided by automatic means, I certainly cannot do the digesting. We could subdivide HUMANIST into two or more separate lists by topical area. The flaw with this plan seems to me that it is based on a misunderstanding of what HUMANIST is. We want, do we not, a discussion group that can range freely over topics of all sorts, dwell on them as long as we want, then move on to something else? If a coherent subgroup (e.g., those devoted to issues of textual encoding) want to set up a discussion group, Steve Younker and I will gladly help in any way we can. But I don't think doing that will solve the problem. I did suggest earlier that we try to stay on a single topic until it is exhausted, but this notion was criticized for its obvious flaws -- and so the critic missed the point, I think. Take the obvious analogy, a large, free-wheeling seminar: there do we not in one way or another attempt to stay on topic? Don't we usually regard two or more simultaneous conversations on different topics as distraction? and focus as the way to turn random babble into something powerful and illuminating? In short, I see no real solution that does not involve some kind of self-discipline mixed with courtesy. We have done well in that regard, I've been told, but we've passed some sort of threshold that's forcing the issue once more. 
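The "digesting software" Willard asks about above could be sketched along the following lines. This is a minimal, hypothetical illustration in modern Python, not the VM/CMS exec actually requested; the function and field names are invented, and a real digester would also strip the voluminous mail headers mentioned above.

```python
# Sketch of the digesting idea: bundle several messages into one
# mailing, with a "contents page" of subjects at the top, in the
# manner of the TeXhax digests described earlier. All names here
# are invented for illustration.

def make_digest(messages, issue_number):
    """messages: list of (subject, body) pairs, in posting order."""
    contents = "\n".join(
        f"  {i + 1}. {subject}" for i, (subject, _) in enumerate(messages)
    )
    separator = "=" * 40
    parts = [f"HUMANIST Digest, issue {issue_number}",
             "Contents:\n" + contents]
    for subject, body in messages:
        parts.append(f"{separator}\nSubject: {subject}\n\n{body}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(make_digest(
        [("Too much mail", "We need a digest."),
         ("Languages", "Post in whatever language you like.")],
        issue_number=1,
    ))
```

The point of the contents page, as Wujastyk notes, is that readers can skip whole items without paging through them.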
Everyone says that HUMANIST is a good thing, but it is only what we make it from day to day. So far we've not had to think very much about what's relevant and what's not. Now I think we do. So, what I propose is this: whether or not someone comes up with a technological aid to handling quantities of mail, that we discuss exclusively computing in the humanities in the professional sense, that we try as much as possible to stay with one topic at a time (requests for information, conference announcements, and the like excepted), and that we exercise conversational restraint on ourselves according to our best judgment. Lest we fall into the common error of becoming verbose about our verbosity, please direct all comments about this to me. I'll fairly summarize what people say and post the results. I'm open to arguments that we should do other than I have suggested. Willard McCarty mccarty@utorepas ========================================================================= Date: 9 March 1988, 21:42:13 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Relational dbms; FE vs BE (87 lines) --------------------------------------------------------- From: James H. Coombs Sebastian Rahtz writes: > I'd appreciate a fuller explanation of what Jim Coombs means by a > relational database that "supports full text fields"; it sometimes > seems to me that people fail to differentiate between the database > back-end and the front end. I mean a relational database management system, not a database. As far as I'm concerned, that means a backend. I suppose that I'm being sloppy though. I use the SQL frontend to Ingres, and I use my own frontend to Ingres. Strictly speaking, the SQL frontend is part of Ingres, which is a relational database management system. Ok, writing that helped me see that you are correct. I don't want an Ingres frontend to handle the full text, because I would then have to handle it from my own frontend. 
I already have that problem with SQL (Ingres' SQL frontend permits the dynamic construction of SQL statements; the ESQL interpreter does not; presumably the backend doesn't know the difference, so I blame it on the interpreter). > If Jim has to define a database structure in which every word in a > sentence is held in a separate tuple with information about it, it will > not be directly useable, but someone can write a front-end that makes > it look sensible. and whats wrong with "every word a tuple", aside from > possible considerations of efficiency? Well, there are at least three things wrong. 1) *I* have to write the frontend (as well as my own server/backend so that I can trick Ingres into opening two databases at once). 2) Even if someone else were to do it, we would have many solutions to the same problem. It's a common problem and should be solved once for all databases (for the majority?). 3) "Every word a tuple" is ambiguous, I believe. It could mean that the tables contain first-level indices, which the frontend generates. Or it could mean that the tables contain the entire text, not just indices into the text. For the simple version of my application, I use the first approach; it requires less storage and is more efficient. The second approach---where the database contains the entire text--- introduces complications with spacing and markup. Do we have a tuple for every punctuation mark? A tuple for spaces between words? How do we handle presentational markup? (I guess we would have to rule it out.) In addition, we would have to have a separate table to define the range of the definition text, for example (start=54; end=82). So far, range searches in Ingres have been relatively slow. (In part, I'm bringing up efficiency again, but there's also an issue of complexity.) I thought about the tokenized approach. The biggest problem, to my mind, is that it doesn't properly capture the state of the universe that I'm modeling. 
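The "first-level indices" reading of "every word a tuple" described above, where the text is stored once and a separate table holds one row per word occurrence with its character range, can be sketched as follows. SQLite stands in for Ingres here purely for illustration, and every table and column name is invented; this is the design idea, not anyone's actual system.

```python
import sqlite3

# "Every word a tuple", first-level-index style: the document body is
# stored whole, and word_index holds one row per word occurrence with
# its start/end character positions. (Illustrative names throughout;
# SQLite stands in for the Ingres backend under discussion.)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE document (doc_id INTEGER PRIMARY KEY, body TEXT)")
db.execute("""CREATE TABLE word_index (
                  word TEXT, doc_id INTEGER,
                  start_pos INTEGER, end_pos INTEGER)""")

def load(doc_id, body):
    db.execute("INSERT INTO document VALUES (?, ?)", (doc_id, body))
    pos = 0
    for word in body.split():
        start = body.index(word, pos)          # locate this occurrence
        db.execute("INSERT INTO word_index VALUES (?, ?, ?, ?)",
                   (word.lower(), doc_id, start, start + len(word)))
        pos = start + len(word)

load(1, "Every word a tuple is ambiguous")

# A word search hits only the small index table, then the range is
# used to pull context back out of the stored text.
(doc_id, start, end), = db.execute(
    "SELECT doc_id, start_pos, end_pos FROM word_index WHERE word = ?",
    ("tuple",))
(body,), = db.execute("SELECT body FROM document WHERE doc_id = ?", (doc_id,))
print(body[start:end])
```

As the message notes, this avoids deciding whether punctuation and spaces deserve tuples of their own, since the full text survives intact alongside the index.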
The value of a definition is the text in the definition. It's similar to words, whose values are character strings. We typically don't decompose words into one tuple per character. If we wanted to search for individual characters, then I guess we would have to (with current dbms). Notice, however, that dbms developers provide pattern matching facilities so that we can decompose words within fields. (We can use these same facilities to search for words within phrases, but the performance is unacceptable.) > My point is that I think Jim wants a full-text frontend, not a full-text > database. I will just repeat that point about multiplicity of front ends. I'm writing the frontend, so I want Ingres to do the work (just as I want Ingres to determine access paths and optimize queries). In addition, I want it to be possible for someone to write a hypercard frontend (TCP/IP is becoming widespread, so they can use my server, which has as much as possible of the intelligence in it---e.g., user asks for "ceilings"; we don't find it; we try "ceiling"---NO PLANS for other frontends, just a design philosophy). I hope this clarifies things. I suppose I get confused because I have my own frontend and backend as well as Ingres' backend. --Jim ========================================================================= Date: 9 March 1988, 21:45:57 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Help re SPIRES and PRISM (18 lines) --------------------------------------------------------- From: cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Talk to Tony Newcomb (Dept. of Music, UC Berkeley) who is engaged in developing a full text data base of Italian poetry with music between 1400 and 1600. It runs on SPIRES. It is being done in collaboration with Italian scholars (see the recent note on computer activities in Italian musicology), and if not accessible now from Italy is fully intended to be so accessible. 
========================================================================= Date: 9 March 1988, 21:47:32 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Markup (49 lines) --------------------------------------------------------- From: Allen Renear I just can't resist markup... Sebastian Rahtz wonders whether an em-dash is punctuational or presentational markup. I'd say an em-dash, like a comma or full-stop, is punctuational markup. But a "soft" hyphen -- one that indicates that a word is broken across lines -- would be a good example of something that may look like punctuation, but is in fact presentational markup. Notice how the number of soft hyphens varies as the design varies (eg with line length, font, hyphenation patterns &c.) But the number of em-dashes varies primarily with authorial decisions. Can punctuational markup be replaced by descriptive markup? Absolutely. It can be replaced by descriptive -- or referential -- markup. Should it? Probably not in general. But sometimes it is useful to do so. And it is interesting to speculate on what the advantages would be. The basic idea is, of course, that the *role* the punctuation plays would be formalized. Then we could easily switch between British and American full-stop/quotation mark conventions; French, German, or Anglo-American quotation symbols; have our search programs look for words only in direct quotations; and have our text editors or formatters handle conventions for representing nesting quotations -- allowing us to cut and paste without worrying about varying singles and doubles. In my typesetting days I always insisted on descriptive markup for quotations. Of course I did let the authors and editors indicate with different markup which were to be displayed and which inline -- so things are not so simple. On this topic -- markup purity -- notice that AAP has, like Waterloo GML, for quotation and for long quotation. 
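The advantage Renear describes, that descriptively marked quotations let the punctuation convention be chosen at rendering time, can be illustrated with a toy renderer. This is a deliberately tiny sketch with invented names; real SGML processing of quotation elements is far richer.

```python
# Toy illustration of descriptive markup for quotations: the source
# records only *that* something is a quotation; which marks to use
# (Anglo-American doubles-then-singles, British singles-then-doubles)
# is a rendering decision. All names invented for illustration.

CONVENTIONS = {
    # (outer mark, inner mark for nested quotations)
    "american": ('"', "'"),
    "british":  ("'", '"'),
}

def render_quote(text, convention, depth=0):
    """Wrap text in the quotation marks this convention uses at
    this nesting depth, alternating outer/inner marks."""
    outer, inner = CONVENTIONS[convention]
    mark = outer if depth % 2 == 0 else inner
    return f"{mark}{text}{mark}"

print(render_quote("call a list a list", "american"))
print(render_quote("call a list a list", "british"))
```

Because nesting depth is computed rather than typed, cutting and pasting a quotation to a different depth cannot leave stale singles and doubles behind, which is exactly the point made above.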
Sebastian also wonders what happens as an inline list gets longer during editing. Should the display/inline decision be up to you the author or your designer? Your designer -- no question about it. But this is just a provocative way of saying it is a *design decision*. "SGML purists" don't really want to remove any power or authority from authors, they just want you to call a list a *list*. Format it displayed or inline, wysiwyg or batch; have the decisions made by you, your designer, or your software. The SGML *purist* removes himself from these decisions -- after having made them all possible. ========================================================================= Date: 9 March 1988, 21:50:18 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: 8-bit character sets (35 lines) --------------------------------------------------------- From: Hans Joergen Marker Answer to Kevin Donnely: Yes, we have chaos over here. In the mainframe world texts are usually corrupted when transmitted from one computer to another. The Danish I wrote was appearing on my screen with beautiful Danish vowels, and would have appeared on my printer in the same way. But if I try sending a message on the network with my second christian name spelled correctly "J%rgen" (the second letter is an o with a slash over it), the message will be rejected with something like "ILLEGAL FIELD". Speaking of microcomputers, Danes and Norwegians are still wondering why IBM was so preoccupied with providing a y 'umlaut' that it was impossible to provide an %. The Danish-Norwegian IBM character set uses the cent and yen signs for the % in lower and upper case. This gives rise to a number of interesting malfunctions of programs, screens and printers. In Denmark this group of errors is popularly known as the Yencent errors, giving a play on words with the common Danish family name Jensen. 
About the y 'umlaut' the latest theory is that it should be used among some remote New York tribes, utilising the letter as an interesting abbreviation for the city name. Hans Joergen Marker ========================================================================= Date: 9 March 1988, 21:53:31 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Languages, HUMANIST, and mail pollution (53 lines) --------------------------------------------------------- From: Hartmut Haberland This may be my last contribution to Humanist. Although I enjoy it thoroughly, I spent almost an hour this morning reading, discarding, forwarding items I got through the net. This is simply more time than I can spare, and reading the message of the poor person who found 71 items in the mail after only one day's absence from the terminal gave me the shivers. It is great to look into one's pigeonhole in the morning and see `030 RDR' or `027 RDR' etc., all that mail, and is it really meant for you? But then the difficult bit comes: to sort 'em out. I thoroughly enjoyed the comments from Susan Kruse, Richard Goerwitz and Michael Sperberg-McQueen (and possibly of others I discarded too quickly) on the language issue. (You may have noticed that I write this in English, more on this below.) Of course, it is a myth that people in general and Americans in particular are pathologically monolingual. Some are, but far fewer than one realizes. (I would like to see statistics on that, by the way. I strongly believe that the pure, absolute, no-way-out monolingual is a very rare species, probably overrepresented in huge and wealthy countries, but virtually absent from large parts of the world.) Of course, one should use as many languages as one can manage to read and, possibly, write. 
I am just afraid that although most Humanists can recognize a language when they see it (in order to carry Goerwitz' point a bit further), most of them will feel quite comfortable writing English anyway. One of the problems is diacritics. I admit it is fun to write Greek on an ASCII terminal (ever seen how they do it? it's amazing, kind of a mixture between transliterated and phonetic, like this: Agaphte Hartmout, kharhka (or xarhka, being interpreted as Xi or Chi according to context) poly (or polu) pou elaba to gramma-sou. To diktyo mas leitourgei ... etc. etc.) It is much less fun to write Danish on an ASCII terminal, especially since you never know what goes through and what not (curly brackets, yes, but also dollar signs instead of exclamation marks etc.). When I get letters from Germany in German, the a and o umlaut always come through as ae and oe, but u umlaut as a 'bolle-a' (a with a circle on top), and the Eszett (long s) as a tilde ... So all in all, this is not so much a matter of principle but of convenience. I am certainly glad to receive mail in all languages I can handle (English, French, Danish, German, Norwegian, Greek, Swedish, Dutch, Italian), but I can't promise that I will answer in the same language (and if you write to me in Finnish, I have to fetch my dictionary first). This letter being very long, it is an excellent contribution to information pollution. I won't say much more now, but I urge everybody who feels the same to think about possibilities to restrict the information flow from HUMANIST. It's great fun, but sometimes you have other things to do than those which are fun. So much for today. Thanks for listening (who got this far, I wonder). 
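The ae/oe fallback described above, the conventional transliteration that German and Scandinavian correspondents fall back on when a mail path passes only 7-bit ASCII, can be sketched like this. The mapping table is illustrative and deliberately incomplete, not any gateway's actual behaviour.

```python
# Sketch of the transliteration fallback for 7-bit mail paths:
# replace national letters with their conventional ASCII digraphs
# before sending, rather than letting a gateway corrupt them.
# (Illustrative table only; real conventions vary by language.)
TRANSLIT = {
    "æ": "ae", "ø": "oe", "å": "aa",   # Danish/Norwegian
    "Æ": "Ae", "Ø": "Oe", "Å": "Aa",
    "ä": "ae", "ö": "oe", "ü": "ue",   # German umlauts
    "Ä": "Ae", "Ö": "Oe", "Ü": "Ue",
    "ß": "ss",
}

def to_seven_bit(text):
    out = "".join(TRANSLIT.get(ch, ch) for ch in text)
    # Anything still outside ASCII is replaced explicitly instead of
    # being mangled en route.
    return out.encode("ascii", errors="replace").decode("ascii")

print(to_seven_bit("Jørgen"))
```

The cost, of course, is exactly the one the letter laments: the reader must reverse the digraphs by guesswork, and "aa" or "oe" in a name is ambiguous.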
Hartmut Haberland (rucch at neuvm1) ========================================================================= Date: 9 March 1988, 21:55:02 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: America the Golden (17 lines) --------------------------------------------------------- From: Sebastian Rahtz And there was I thinking that America had a wider racial and linguistic mix than Norfolk! Would Richard Goerwitz care to comment on that complete lack of Spanish contributions to Humanist from his country? On his side of that Atlantic there are far more Spanish speakers than ours! Sebastian Rahtz ========================================================================= Date: 9 March 1988, 21:57:27 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Electronic Henry of Avranches (16 lines) --------------------------------------------------------- From: Andrew Oliver A colleague at the centre for medieval studies at the University of Toronto is working on the Latin verse of the 13th century poet, Henry of Avranches. He wishes to know if any of Henry's poetry exists in electronic form, if so where and under what conditions might he obtain them. ========================================================================= Date: 9 March 1988, 22:02:09 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Mac text retrieval programs (45 lines) --------------------------------------------------------- From: John J. Hughes Dear HUMANISTs, A week or so ago, I asked for help in locating text retrieval and/or concording programs for Macintoshes. Alas, the replies were few--very few. This leads me to conclude that there aren't many such programs for Macs. In fact, Sonar is still the only commercial text retrieval program for the Mac that I have found. And Mark Zimmermann's BROWSER, aka TEXAS 0.1, is the only noncommercial such program. That's a sorry state of affairs. 
(I'll have more to say on these two programs after I review and compare them.) Although there are many text retrieval programs for IBM PCs, and although I have working copies of most of them (I'm reviewing several of the "most powerful" ones for the next issue of the _Bits & Bytes Review_, including the ones mentioned above for the Mac), I am discovering that most of them are not designed for scholarly use or are frightfully slow or treat a whole file as a single record (!) or are crippled in their search functions or suffer from some serious design flaw or .... So far in my investigations, WordCruncher still seems to be "state of the art." As much as I appreciate that program, I know that "we" can do better. For example, Yaacov Choueka's/IRCOL's text retrieval software sounds like a significant "step up," and according to recent correspondence with Yaacov, IRCOL is giving some thought to creating an MS-DOS version of that software. (That's neither a rumor nor a promise.) Perhaps interested HUMANISTs should e-mail Yaacov their encouragement for producing such a program (Yaacov, please forgive me if you are inundated with HUMANIST mail!). John John J. Hughes XB.J24@Stanford ========================================================================= Date: 9 March 1988, 22:11:28 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Correction: PAT and the NOED (46 lines) --------------------------------------------------------- From: Robin C. Cover RE: University of Waterloo, Centre for the New Oxford English Dictionary In a recent posting I referred to a program called "PAT" which I said was being developed at the "University of Toronto." CORRECTION: this program is being developed at the University of Waterloo in connection with work on the NOED (New Oxford English Dictionary), at the Centre for the New Oxford English Dictionary. My information about PAT comes from a document supplied by Darrell R. 
Raymond, and is an internal memorandum written by Gaston H Gonnet, "Examples of PAT applied to the Oxford English Dictionary" (OED-87-02; July 28, 1987), 34 pages. According to the developers at the University of Waterloo, "Pat is a program based on the patricia tree data structure. The main virtues of Pat are (1) its speed: Pat can find the matches to any fixed string in the OED in under a second; (2) its method of specifying queries: all Pat queries are searches for string prefixes, hence Pat can find repetitions, phrases, and do concordancing merely by specifying the appropriate prefix; (3) its complete indifference to the content of the corpus: since Pat knows nothing about words or any other semantic structures in the text, it can be easily applied to virtually any kind of data that is representable as text." Deepest apologies to the University of Waterloo for this careless citation in my posting. Text retrieval methods being developed at the Centre for the New Oxford English Dictionary show great promise for general applications in textual studies, so we may look forward to hearing more from the Centre in the coming months. The mailing address is: Centre for the New Oxford English Dictionary University of Waterloo Waterloo, Ontario, CANADA N2L 3G1 (Professor Robin C. Cover ZRCC1001@SMUVM1.bitnet) ========================================================================= Date: 9 March 1988, 22:16:04 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Linguistic processing in Prolog (46 lines) --------------------------------------------------------- From: Bill Winder Sebastian Rahtz's remark about dynamic definition of predicates seems to concern (what I call rightly or wrongly) the interpretative level. To get such features in Turbo Prolog, one must parse expressions. The question is dealt with on page 150 of the user's manual, in the context of the call predicate. 
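The dynamic definition of predicates under discussion here amounts to treating clauses as data that can be asserted and queried at run time. A toy sketch of such a fact store, with a general rule that chains two stored facts through a shared middle term, follows; Python is used purely for illustration and all names are invented (the `assertz` name merely echoes Prolog's assert).

```python
# Toy sketch of run-time predicate definition: facts are plain tuples
# held in a set, asserted dynamically, and a general rule chains a
# fact (R, S, X) with any second fact (R2, X, Obj) through the shared
# middle term X. (Illustrative only; names invented.)
facts = set()

def assertz(*clause):
    """Add a fact at run time, in the manner of Prolog's assert."""
    facts.add(tuple(clause))

def chain(relation, subject):
    """Objects reachable from subject via relation plus one more fact."""
    return [obj
            for (r, s, x) in facts if r == relation and s == subject
            for (r2, x2, obj) in facts if x2 == x]

assertz("loves", "Sebastian", "Wagner")
assertz("writes", "Wagner", "music")
print(chain("loves", "Sebastian"))
```

What this cannot do, as the message goes on to observe, is construct new *rules* (not just facts) on the fly; that requires a language whose program code is itself manipulable data.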
However, I suppose the point is that in Turbo Prolog, there will always be (unnecessary) complications when using such functions. That complication has perhaps some deleterious influence on the way I approach problems. For the moment, however, the Turbo Prolog solution does not seem unreasonably circuitous. In fact, the case you mention, Sebastian, is perhaps too simple to be telling, since it boils down to a n-tuple relation between strings. It can be stated as a database predicate concerning lists of strings, such as: dynamic_pred([var,var,...]), where the first var is interpreted as the function name -- dynamic_clause([loves,Sebastian,Wagner]), and dynamic_clause([writes,Wagner,music]),as in your example. Once stated, general predicates to query the database would be necessary. One such predicate might be trans(R,Subject,Object) if dynamic_clause([R,Subject,O1]), dynamic_clause([R2,O1,Object]). The two clauses above give the conclusion trans(loves,Sebastian,music), which could be asserted dynamically also. A more telling example would be one where a non-database predicate ("if" containing construction) is constructed on the fly. I don't think there is any direct way of doing so in any prolog. Such a feature would allow the program to rewrite itself entirely. In some ways, that is possible through the database clauses, but not to the extent of a true interpreted language, where the program code can be manipulated. Bill Winder Utorepas ========================================================================= Date: 9 March 1988, 22:24:42 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Conference posting to the file-server (14 lines) Announcement of an International Conference on COMPUTER-MEDIATED COMMUNICATION IN DISTANCE EDUCATION Venue : OPEN UNIVERSITY , MILTON KEYNES, UK Dates : October, 8 - 11, 1988. 
========================================================================= Date: Thu, 10 Mar 88 11:28:54 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Conference posting to the file-server (8) CUNY FIRST ANNUAL CONFERENCE ON HUMAN SENTENCE PROCESSING MARCH 24 - 27, 1988 ========================================================================= Date: Thu, 10 Mar 88 11:40:20 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Computing for humanities students (20) --------------------------------------------------------- From: Keith W. Whitelam (UK.AC.STIR.VAXA) An appeal for help to fellow HUMANISTS. We are exploring the possibilities of setting up an introductory computing course for arts and humanities students. Questions have been raised about academic content! I would be most grateful for information on courses that are run elsewhere. In particular: 1. course content 2. assignments 3. methods of assessment 4. Is it taught by Arts and/or Computing Science staff? 5. any other information that you think appropriate. If replies are sent direct to me, I will summarise the responses for HUMANIST asap. Thanks in anticipation, Keith W. Whitelam ========================================================================= Date: Thu, 10 Mar 88 11:47:26 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: Anaphora in Homeric Greek by computer (35) --------------------------------------------------------- From: "John B. Haviland" I have an undergraduate student who proposes a joint Classics/Linguistics honors thesis, based on a machine-readable corpus of Homer which he plans to examine for patterns of anaphora. He is interested both in syntactic issues, and in stylistic detective work on the problem of Homeric "strata."
We can get the Homer, I think, from UC Irvine, in a somewhat idiosyncratic Greek format (with limited diacritics) on 9-track tape, to be read by the Vax and thence probably to a Macintosh. I have two questions: (1) Does anyone have suggestions about software (or alternate sources for the Greek corpus, for that matter) that might be appropriately employed? (Reed has multiple Macs of all sizes and shapes, as well as the Vax running bsd UNIX. I think I am the only person on campus who uses equipment other than this.) The student seems reasonably fluent in C, and is currently fiddling with our Vax's Franz Lisp. (2) Is this old news? Is the student doing something that has already been done? I am no classicist, so I am ignorant of the answer. I appreciate all advice; and I have enjoyed the voluminous introduction to Humanist during my first week receiving the discussion. ========================================================================= Date: Thu, 10 Mar 88 11:44:51 EST Reply-To: MCCARTY@UTOREPAS Sender: HUMANIST Discussion From: MCCARTY@UTOREPAS Subject: More help for Penn OT users (45 lines) --------------------------------------------------------- From: Richard Goerwitz A few days ago, I posted a note saying I had a program that allowed one to a) slice out smaller corpora from the Penn OT texts, and b) have these marked more explicitly as to chapter and verse, while still remaining within the TLG betacode guidelines. This is still available. I now have another program available. This one prints out what the first program outputs. It can also output raw Penn OT texts, though you can't print out a verse here and there with the raw text. It has to be whole books. Sorry, that's just the way their coding scheme works! In any case, this printing program will work with any Toshiba P321-51 type printers (P321 printers must be able to accept downloadable fonts). Other printers will work, but you'll need to design a whole Hebrew font (yuck!).
My program has a font that goes with it. The program isn't perfect, in that it can't reproduce all those minute accents. When it finds accents, it will print a tick over the letter to indicate its presence. It also strips out the Westminster textual notes that come at the end of some words. Otherwise, the output is quite readable - considering you're asking a dot matrix printer to do some pretty rough stuff. Warning: This printing program is slow. That's to make it easily modifiable for printers that don't do nice things like dead accents. You wouldn't want to print out all of Genesis with it, in other words. It also goes through the little betacode markings (e.g. ~~xy) and tries to figure out where it is. It then prints out an explicit English location marker like "Gen 1:1" or whatever. As usual, I wrote this in Icon. You'll need to have a system that has Icon installed. Everyone ought to have it, since it's free, anyway. Interested parties should drop me a line. You oughta get both programs, by the way, so you can print out lone verses or groups of verses, rather than whole books.... -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ========================================================================= Date: Thu, 10 Mar 88 12:04:58 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: French-English bilingual computer lexicon (17) --------------------------------------------------------- From: Dana Paramskas, University of Guelph The most complete French-English/English-French dictionary for computer-related terminology is: Terminologie de l'informatique, published by the Office de la langue francaise, Government of Quebec, ISBN 2-551-05790-6 (1983).
To quote the blurb and add a footnote to the multilingual arguments: Le present ouvrage contient pres de 12 000 termes couvrant tous les aspects de l'informatique: materiel et logiciel, traitement des donnees, micro-informatique et teleinformatique, sans oublier les diverses applications de l'ordinateur. A cela s'ajoute une bibliographie selective de 150 titres. ========================================================================= Date: Thu, 10 Mar 88 12:24:41 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Genealogical software (21 lines) --------------------------------------------------------- From: dow@husc6.BITNET (Dominik Wujastyk) > > Those of you with access to the Unix News system may wish to know that > the program "Genealogy on Display" has recently appeared on the net, > uuencoded in areas comp.binaries.ibm.pc and soc.roots. This is the > Mormon product; a series of linked Basic progs. It is quite good. > > Incidentally, perhaps soc.roots (which I don't follow) has a lot more > information for those of you looking into the problem of managing > genealogies. > > Dominik > bitnet: user DOW on the bitnet node HARVUNXW > arpanet: dow@wjh12.harvard.edu > csnet: dow@wjh12.harvard.edu > uucp: ...!ihnp4!wjh12!dow ========================================================================= Date: Thu, 10 Mar 88 12:42:20 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Structuralism and the Bible (92) [My apologies that the following has been delayed. It was sent to HUMANIST@UTOREPAS and in consequence strayed into a dusty corner. -- W.M.] --------------------------------------------------------- From: iwml@UKC.AC.UK > > I am very grateful for several responses to my query about three weeks ago > into structuralist models and the biblical text. I have tried to thank some > individuals and/or make contact, but have not got through the "mailer" yet. 
> > I trust that I won't bore you with taking the discussion further in > an attempt to float a couple of ideas to see what responses come back. > > ARE THE ACTANTIAL RELATIONS DISTINGUISHABLE BY SEEKING VERBS? > > Is it reasonable in looking at A J Greimas' actantial model > with its three relationships of knowledge, desire and power > to anticipate a pattern of semantics displaying > words/phrases of knowledge, desire and power at the > narrative level? > > Sender--->Object--->Receiver [knowledge] > ^ > | [desire] > Helper--->Subject<---Opponent [power] > > If so, does the concentration in the narrative on verbs, or > verb-derived words, seem reasonable on the basis that each > relationship requires a verb in order to operate? > > My thinking at the moment is that for any computerisation to > be successful, what is important is the relationship rather than the > actants in the first instance. If a database of relationships for any > text can be identified, then the locus for identifying the actants is > defined. This is a development of a hypothesis that in binary > oppositions, there are three operative areas of interest, namely two > actants and the nature of the relationship between them, a > relationship which, at whatever point on its spectrum you pause, is a > boundary at the narrative level. > > HOW IS A BINARY OPPOSITION RELATIONSHIP DIFFERENT FROM A BOUNDARY? > > If it is reasonable to see the relationship between two binary > opposites as central to an actantial model, or for that matter to a > semiotic square, in what way does the quality of that relationship > differ from a boundary? Is it not the case that no matter where you > stop in that relationship, that stopping point is at that moment a > boundary between the opposites? > > I can see a conceptual assumption in the preceding paragraph > which some may wish to question - namely that of a linear movement > between the opposites.
> > However, in a presence v absence binary opposition, the position > of Greimas would be that in the middle is "hiddenness". "In the middle" implies > a linear movement of thought from one binary opposite to the other. > > > WHAT LANGUAGE WOULD YOU USE? > > Fascinating though much HUMANIST discussion on various languages > has been, it is only adding to the confusion of one HUMANIST now engaging in > formal academic computing after ten years of self-teaching! Therefore it > seems appropriate to reverse the situation, and rather than read HUMANIST > for answers to the search, pose the question directly and see what > happens! > > Given: > > (1) Primarily interested in structuralist interpretations; > (2) little or no computer analysis in this field has happened since > a spate of activity in the 1970s; > (3) the programme would be an aid to analysis by the user at this stage; > (4) the programme would guide the user from a narrative text DOWNWARDS > to levels beneath the text itself ultimately to a structuralist > interpretation; > (5) the programme would save the answers (i) en route and (ii) at the > end for analysis. > > Given those presumptions, which are free to be questioned of course, > which language would you consider using? > > For fear that I will now be given 200+ different languages to look at, > I'll sign off now! > > Ian Mitchell Lambert > iwml@ukc.ac.uk ========================================================================= Date: Thu, 10 Mar 88 12:50:31 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Editorial: Messages by subject (24) Dear Colleagues: Today I started using a new mailer that allows me some relatively painless sorting of HUMANIST messages by subject. In consequence, you will receive today two kinds of mail: (1) single pieces, on topics unique to the day; and (2) three collections (on languages, that questionable postcard request, and HUMANIST's current problem with the volume of mail).
If -- I repeat, IF -- we can more or less stick to a single topic, and if if if we can all be persuaded to use the SUBJECT line in the header of notes to HUMANIST (put it in by hand if necessary!), then we may survive our own success. One person has promised to write an exec to produce a daily digest, which might be even better than what I can offer now. Meanwhile, however, let me know what you think about the new development. Yours, Willard McCarty mccarty@utorepas ========================================================================= Date: Thu, 10 Mar 88 16:52:56 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Retrieval engine (32) --------------------------------------------------------- From: Lou Burnard Robin Cover's search for an engine (7 march) is a much more interesting topic of discussion than all this guff about lingo, and much more what I always thought Humanist was all about. Reactions to it have also been very interesting, if only in what they overlook. Viz. that there has for many many years been a whole area of the software industry devoted to the problem of providing access to large unstructured texts. There is an enormous body of research literature on how to search texts. There is even a directory of free text searching software updated annually. The problem is that most of the systems are concerned with providing rapid access to huge online sets of news stories, catalogue records, espionage reports etc. Consequently they cost lots of money and lots of computer. Consequently they're not very interesting or innovative. There's a working party of the IUSC (a UK academic computing committee) investigating this area at the moment; I have sent Robin (and will send anyone else who wants it) a copy of its draft report, which at the moment is more of a wish list and an evaluation procedure. Lou Burnard P.S. 
My own views are unchanged from those outlined in an article I wrote for Lit & Ling Computing last year sometime - you can do anything anyone can think of using indexes, but if you want to stay sane and still have some disk space, then you should be using CAFS or similar. However, as the powers that be have taken away Oxford's CAFS, I'm currently experimenting with BASIS, to which JAZBO refers. It's horrible, but it does everything I can think of, so far as I can tell. ========================================================================= Date: Thu, 10 Mar 88 16:55:23 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Computing for humanities students (20) --------------------------------------------------------- From: RE: Keith W. Whitelam's appeal The starting point for information should be the CHum special issue on Teaching Computing to Humanists (Vol. 21, #4, October-December 1987). There is a survey of existing courses and a selected bibliography. There is a conference scheduled at Oberlin College this summer that deals with the topic. There should not be too many unanswered questions after researching the bibliography in CHum. joe rudman ========================================================================= Date: Thu, 10 Mar 88 17:10:07 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Bitnet node change: BYUHRC to BYUADMIN (16) --------------------------------------------------------- From: Chuck Bush Please note that electronic mail formerly sent to Chuck Bush, Randall Jones, Kim Smith, Mel Smith or others at BYUHRC should be addressed to BYUADMIN instead. BYUHRC is still a valid node in most routing tables, so mail thus addressed will continue to get through to us for a while longer, but the link will undoubtedly break sooner or later.
Chuck Bush Humanities Research Center Brigham Young University ========================================================================= Date: Thu, 10 Mar 88 19:51:18 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test of the DISTRIBUTE function (20) This is a test of the DISTRIBUTE option of the ListServ software. Would the following HUMANISTs please reply to me directly by returning a copy of this note? Mark Olsen Richard Goerwitz Joel Goldfield anyone in the Xerox group Robert Amsler David Sitman Lou Burnard Thanks very much. If this works, our local machine will be much less burdened with our weighty discussions. Willard McCarty mccarty@utorepas ========================================================================= Date: Thu, 10 Mar 88 20:03:55 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Volume of mail on HUMANIST (206) From [name withheld] I'm sorry, but I am another one who regretfully must ask that my name be withdrawn from the Humanist address list. I am busily preparing my book on .... I wonder how you can stand it. I think you must be near insanity. --------------------------------------------------------- From: David Owen, Philosophy I agree with Willard that the volume in HUMANIST is getting out of hand; and I only deal with them at the receiving end! I have some sympathy with the user who solved the problem by resigning; I may have to do the same if for no other reason than to prevent my Vax account from continually jamming up against my reasonably generous quota. And that could be a solution to the problem; more HUMANISTS will resign as the flow becomes more unmanageable; it will regulate itself. But it would be a shame to see HUMANIST become limited solely due to its own success. The TeX-hax solution is a promising one, but on current form, that would mean receiving bundles of over 100 messages twice a week. Still, if they had tables of contents, etc.
Another solution would be to have several lists on different topics, but that would be unwieldy, a lot of work and ad hoc. I can't help but think the fileserver is the answer. Perhaps all discussion of a continuing topic could be dumped in a file in the server, and updated twice a week. Summaries could be posted in the normal HUMANIST, as would messages on new topics. Would this be a lot more work, Willard? And if we could remotely search the server files... Anybody out there used LDBASE? --------------------------------------------------------- From: AYI004 at SOTONVM Happy anarchy is no more! Kropotkin slinks back to his cave. The awful spectre 'professional' begins to uncoil; Pavlov shrieks with delight. Do something. Help. I feel equally uneasy about the proposals - if no-one else will write an EXEC, I will, if that's the only objection to digests. I imagine it'd be quite easy? what are your specs? I for one want the anarchy to continue; there is a compromise solution which is to have some items in a weekly digest and others on demand - but it all means more work for you. Thus the 'issue of the day' goes on as before, but there is a weekly digest of news and questions about anything. how does that sound? or what about 2 levels of HUMANIST, a 'news only' subscription, and a 'full' subscription? whatever you do isn't going to satisfy everyone though. ah well! --------------------------------------------------------- From: Hartmut Haberland This may be my last contribution to Humanist. Although I enjoy it thoroughly, I spent almost an hour this morning reading, discarding, forwarding items I got through the net. This is simply more time than I can spare, and reading the message of the poor person who found 71 items in the mail after only one day's absence from the terminal gave me the shivers. It is great to look into one's pigeonhole in the morning and you see `030 RDR' or `027 RDR' etc., all that mail, and is it really meant for you?
but then the difficult bit comes: to sort 'em out. --------------------------------------------------------- From: CHAA006@VAXB.RHBNC.AC.UK Reply-to: Philip Taylor (RHBNC, Univ of London) Re. Willard's suggestion that we stick to a single topic until it's exhausted; I do not think it a good idea, nor do I think it will work in practice ... how will one tell that a discussion is exhausted? Non-receipt of HUMANIST mail could simply indicate a failure in the mail distribution system, rather than an indication that a discussion has run out of steam. Even if correctly perceived, multiple HUMANISTs, all of whom had been waiting for an opportunity to launch a new hobby-horse, might then leap into the fray, starting multiple concurrent discussions until a consensus emerged on which was the "current" topic. --------------------------------------------------------- From: Walter Piovesan I find myself in the same boat as the two members of HUMANIST that have complained about the load of messages coming across their electronic desks. I will have to resign if the volume is not decreased to a manageable level. My reason for wanting to participate in HUMANIST was to monitor activities in the area of creation, management, and use of Machine-Readable Textual Files. For the most part the discussions have been, in my opinion, a bit too chatty. Perhaps messages can be edited and batched out in weekly or monthly "volumes". --------------------------------------------------------- From: Sebastian Rahtz someone else said to me today that HUMANIST is just too much to cope with. I think that there are only two solutions: a) a digest as Dominik W. suggested. I agree with him that it is much less forbidding. b) interest groups, with mail tagged for sending to one of, say, 10 groups. would require users to tag their mail. but so much IS of general interest. I voted for the Digest approach before and I vote for it again.
If necessary you could appoint subject subeditors, who get the whole thing and extract to mail to people who only want to get strictly relevant material to their discipline. This is sent to you but maybe you could include it in an editorial when you get more responses. I find my small archaeology mailing list quite enough trouble, I am impressed by how well you cope! and with jokes too, that's the best bit --------------------------------------------------------- From: Robin C. Cover not to clog already overly-talkish HUMANIST... Suggestions which would increase your workload are not welcome, I know. I see the following solutions as superior to the current state of affairs: (a) bundle/digest 10-15 HUMANIST postings, or perhaps each day's worth, into a single file and send it instead of the 8-12 per day. The latter is expensive (I use the VM/CMS "receive" command exec, and it costs a lot to "receive" small files); it would be a lot easier to read a group of HUMANIST postings at one time, or to sort the virtual reader repeatedly, as I suspect many have to do. (b) IF you could automate a process to put an "index"/"contents" at the head of each digest, that would be great...but not necessary. If the exec to do this is not too bad, then perhaps the index could indicate the line-number at which each new posting begins. Most of us have line-oriented editors available, I think, and could immediately "goto" selected submissions, as dictated by the subject-line indexed at the top. (c) I agree that 12-28 HUMANIST mailpieces per day is too much...I was already contemplating download of the log once per week if this traffic keeps up. There IS some chatter, in my judgment...I think everyone would respond to pleas for self-control. --------------------------------------------------------- From: Does reading the Daily Paper bother people as much as reading the voluminous messages on the Humanist?
Why not apply the same principles to read the messages here as you use when reading the Daily: 1) Read the Headlines. 2) Read the leading lines of interesting stories. 3) Save a few memorable items for your scrapbook. 4) Use the rest to line your kittylitter box, or wipe up the spilled coffee. --------------------------------------------------------- From: David Nash I'd be in favour of digesting. (Gnu Emacs has an RMAIL "undigestify" command (not that I'll have the pleasure of Gnu Emacs in a few weeks...), but with digested news I get from other sources I find I don't usually use that command.) I agree: I'd vote against subdividing prematurely. Staying on a single topic -- no. The speech-act of "interruption" is quite different from in a seminar room -- and I welcome them here. But the point you may like to quote me on is: how much of recent discussion would I have recommended fellow countrymen to pay 2c/line for?? Precious little of the stuff about which language to use, for instance. Keep up the good work! -DGN --------------------------------------------------------- From: ATMKO at ASUACAD Don't change HUMANIST. People can either wade through entries one at a time or use the magical PURGE RDR command to clear out the mailbox. Why is there such a stink about this anyway? - Mark Olsen --------------------------------------------------------- From: KENNEDY@DALAC I've only been a subscriber to HUMANIST, but I see the problem about volume of mail. The problem is perhaps really associated with the fact that HUMANIST functions as a mail list (simple!). Possibly it needs to be arranged like special interest groups on USENET, as a bulletin board that people can tune into instead of having all postings come directly to each and every mailbox. One way of doing this would be to have institutions be subscribing members, so that the mainframe computer maintained a weeklong listing of postings and then cleared them out for the next week.
HUMANIST headquarters could continue to maintain archive files and so on. --------------------------------------------------------- From: krovetz@UMass I also feel saturated by the volume of mail HUMANIST generates and have been tempted to unsubscribe. It would certainly be helpful to put the messages into a digest format rather than redistributing every message. The messages could also be organized into groups, so an entire section could be skipped through if desired. I don't want to send this message to everyone, but I just wanted to add my vote for a digest format. I also don't think the digest should be sent out more than once per day. -bob ========================================================================= Date: Thu, 10 Mar 88 20:17:07 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: The Postcard Question (164) -------------------------------------------------- Date: 10 Mar 88 09:26:12 gmt From: R.J.Hare@EDINBURGH.AC.UK Comments: Received this yesterday - seems it might be genuine after all... Roger Message-ID: <10 Mar 88 09:26:12 gmt 340209@EMAS-C> --- Forwarded message: Subject: Re: Postcards From: R.J.Hare Date: 09 Mar 88 15:47:50 gmt To: Sebastian Rahtz In reply to: Your message Msg ID: <09 Mar 88 15:47:50 gmt 340138@EMAS-C> Well, that was my sort of thought, but we have some pretty suspicious people up here. I append a message sent earlier which seems pretty conclusive. (Message 50) Subject: Appeal? From: A.C.Boyd @ uk.ac.edinburgh Date: 09 Mar 88 14:45:38 gmt To: R.J.Hare @ uk.ac.edinburgh Via: UK.AC.EDINBURGH.EMAS-A ; (to uk.ac.edinburgh.emas-c) 09 Mar 88 14:46 Msg ID: <09 Mar 88 14:45:38 gmt 100667@EMAS-A> I'm not so sure now that the appeal in GENERAL BB is a hoax. A similar message was broadcast today on the LE.VAX NEWS (equivalent to ALERT) system. 
I warned the LE.VAX people that it might be a hoax, but one of them, Richard Mobbs (RJM@LE.VAX) assured me that someone had checked with the Luton number and found it genuine. BUT, there is an address change. Of course, it could all be an elaborate hoax-within-a-hoax, but April 1st is still weeks away! Chris Boyd. This is the revised NEWS item from LE.VAX: >A_Genuine_Appeal > > 9th March 1988 > > > I have received the following genuine message. If you feel you can > help, please do so: > > > David is a 7 year old boy who is dying from Cancer. > > Before he does, he has a dream of one day being in the Guinness Book > of Records for the person who has had the most postcards sent to them. > > If you would like to help David achieve his dream, all you have to do > is send a postcard to David as soon as possible. > > Send to: > See Below > > > Don't forget to sign your name > > Postscript: > > Someone has spoken to the police station at Luton. The > appeal is genuine but the address given earlier was > wrong. Apparently David has received enough postcards to > get into the record book, but any further donations of > cards will be welcome to Birmingham Childrens Hospital > to make Davids place in the book very secure (David is > still alive at the moment). The address is: > > "David" > Birmingham Childrens Hospital > c/o 6 Hillside Drive > Streetly > Sutton Coldfield > West Midlands > --- End of forwarded message --------------------------------------------------------------- Date: Wed, 9 Mar 88 10:36:44 EST From: dow@husc6.BITNET (Dominik Wujastyk) Would one of our English members please phone up St Martin de Porres Infant School in Luton and check out the story for us? Dominik ------------------------------------------------------------------ Date: Wed, 9 Mar 88 16:55:47 GMT From: Sebastian Rahtz >A_Genuine_Appeal > > 9th March 1988 > > > I have received the following genuine message. 
If you feel you can > help, please do so: > > > David is a 7 year old boy who is dying from Cancer. > > Before he does, he has a dream of one day being in the Guinness Book > of Records for the person who has had the most postcards sent to them. > > If you would like to help David achieve his dream, all you have to do > is send a postcard to David as soon as possible. > > Send to: > See Below > > > Don't forget to sign your name > > Postscript: > > Someone has spoken to the police station at Luton. The > appeal is genuine but the address given earlier was > wrong. Apparently David has received enough postcards to > get into the record book, but any further donations of > cards will be welcome to Birmingham Childrens Hospital > to make Davids place in the book very secure (David is > still alive at the moment). The address is: > > "David" > Birmingham Childrens Hospital > c/o 6 Hillside Drive > Streetly > Sutton Coldfield > West Midlands > ------------------------------------------------------------------------ Date: Wed, 09 Mar 88 20:17:40 EST From: Jeffrey William Gillette By now you have probably been deluged with similar notes, but I'll pass along what I have heard. About a month ago National Public Radio did a rather long feature on the young boy in Scotland who was terminally ill, and whose last wish was to make the book of records for receiving the largest number of post cards. In brief, the chap who first brought the story to public notice claims that he heard it by word of mouth. Being a ham radio operator, he advertised the young boy's wish internationally. The chap claims he never met the boy. I believe he claimed that he tried to track down the boy (whose town was some miles away), but to no avail. The postmaster of the little town claimed to know nothing of the boy's existence, and was pleading with the world to stop sending postcards. A hoax? Draw your own conclusions, I suppose.
------------------------------------------------------------------------ Date: Thu, 10 Mar 88 12:04:27 PST From: fillmore%cogsci.Berkeley.EDU@jade.berkeley.edu (Charles J. Fillmore) The hoax required the perpetrator's village to expand its postal facilities by some huge multiple. Is this a new one? Fillmore ========================================================================= Date: Fri, 11 Mar 88 22:16:53 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Volume of mail on HUMANIST &c. (314) ------------------------------------------------------------------------ Date: 11-MAR-1988 13:12:47 GMT From: CHAA006@VAXA.RHBNC.AC.UK Although I have no objection to a digest format (which, apart from simply leaving things alone, I prefer to all the other suggestions), I must raise a personal objection to a TeX-hax-style index at the start. The following may be of more interest to a psychologist than fellow Humanists, but I do feel the point is worth making :- When I log-in of a morning, I am invariably told "You have n new mail messages", where n is some positive integer generally less than 100, and frequently less than 10. Depending on how busy I am, and on how many new messages are waiting, I then either (a) type "Directory" (to Mail), if I am very busy, or n is large, or (b) simply read each mail message in turn, if neither of the preceding constraints obtains. However, I derive more pleasure from style (b) of operation than style (a), in that each new mail message is a surprise, a present, of which I have no advance knowledge. When I read TeX-hax (which, like Humanist, I field through the MRL/PFC `Bulletin' utility), I have no choice: whether I am busy or not, the first thing I am presented with is a list of the topics to be discussed in that day's mailing.
Now, I can try not to read the list, if I want the surprise of each message in turn, but that turns out to be surprisingly difficult, and much of the pleasure is lost, because each new message is no longer a total surprise. ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 08:11:52 EST From Dr Abigail Ann Young I'm sorry to do this to Willard, but I cast my vote for continuing anarchy as far as free range of topics and style (chattiness, etc) of submissions are concerned. I can see that, if so many people seem to feel strongly about it, we do need to distribute in a more efficient way, i.e., in batches like TeXHaX, whose editor does not, as far as I can tell, actually 'edit' the submissions, except to post very long ones to a fileserver and include only a short 'pointer' in the digest. It's never taken me that long to comb through my messages in the morning, but I confess that I only read the ones on subjects of interest: others get skimmed at best and then discarded..... Perhaps the current revolt of the masses is as much a response to the quality of our topics over the last few months as it is to the volume of discussion! Abigail Young young at utorepas ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 12:32:10 GMT From AYI017@IBM.SOUTHAMPTON.AC.UK Brendan O'Flaherty I must have been away at the same time as the other Humanist who found 60+ files waiting in the reader on return because I found the same. I also got a memo from the Computing Service about the limited capacity of the spool area of the local IBM mainframe asking me to show restraint in my use of the reader. I vote for the daily (or whatever) digest, preferably with a table of contents, for browsing at leisure. Brendan.
------------------------------------------------------------------------ Date: Fri, 11 Mar 88 13:15:17 IST From David Sitman I would like to cast my vote for continuing with HUMANIST in its present configuration. I find it far easier to sift through a lot of small messages than to go through a huge biweekly "digest". It frustrates me to have to find space to save a 700-line digest because there is one 25-line item that interests me. I think that the key to HUMANIST's continued success is self-restraint, rather than imposed guidelines. I found the "postcard" messages totally inappropriate to HUMANIST, but I found an easy solution: each time I saw a "postcard" subject, I deleted the mail immediately. All told, I probably wasted 45 seconds (about the amount of time you're wasting to read this). Instead of spinning off an array of related lists, it's far more natural for a group of HUMANISTs to move to private correspondence as soon as an issue becomes too specific or esoteric for "mainstream" HUMANIST. For those of you who are afraid of deleting or missing something of importance, remember that a copy of every HUMANIST message is archived at UTORONTO (Willard mentioned this the other day). If the LISTSERV DATABASE facility can be installed at UTORONTO, then we will have an efficient way of searching those archives (that's a hint to Willard and Steve Younker). For better or for worse, it seems that the topic most discussed on the HUMANIST list is the HUMANIST list itself (but I would never stoop so low). David ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 10:21:52 GMT From AYI004@IBM.SOUTHAMPTON.AC.UK Discard this file without reading unless you are a serious person. Digesting the daily mass of green (on my monitor) pulp about the information overload is a daily pleasure. It takes about 2 seconds to delete a file - as soon as a deadly significant title or personality scrolls up.
How else can the lonely deskbound academic make a comment about the world? I've got an idea: while thinking about WB's bookmakers in the Marriage of Heaven and Hell, get a bunch of sub-editors, create subjects and agendas, select, edit, fashion into a Monitors' Digest, advertise for subscriptions, print it out, affix stamp and mail. We could do a little neology, make up a word, like a...a... a dgournal or something. Subscribers could carry it with them and read it at their leisure, in the place of their choice - or chuck it, if it's too chatty. Brian Molyneaux (AYI004@uk.ac.soton.ibm) ------------------------------------------------------------------------ From "Michael Sperberg-McQueen" The first digests came through this morning, and I found them good. It won't solve the space problem for the Vax users, perhaps, but by reducing the flow it will make it more manageable. I am struck by how similar our experience has been to that reported in 'The Mythical Man-Month' by Frederick Brooks: in developing System/360, they eventually found that the ease with which computers allowed them to change their specs led to chaos. To control the chaos they arbitrarily 'quantized' the changes: specs were changed only once every six months. Shame about the volume problem: I've always rather *liked* the chatty tone of Humanist. Michael ------------------------------------------------------------------------ Date Fri, 11 Mar 88 06:35:13 CST From Richard Goerwitz I don't mind the volume of mail on Hum.; I just make sure to be quick to delete messages that don't interest me. It takes me no more than fifteen minutes a day to read what I want/need, and logout. If the volume is oppressing you (since you must read everything), or it seems appropriate, please feel free to discard any messages of mine. That's what a monitor is for! -Richard L.
Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ------------------------------------------------------------------------ Date: Thu, 10 Mar 88 20:53 EST From I am impressed, but also overwhelmed, at the volume of activity on HUMANIST. Can't some means be found to provide only a digest or topic list and then allow users to request sets of messages in which they are interested? 83 messages per day is overwhelming. Doug Davis ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 17:29 EST From I would like to cast a vote against digesting. For those of us using mail readers, rather than readerlist, the subject line provides adequate warning of what to kill and what to read later, and the like. In addition, for those of us who read or skim everything (just in case), mailers let you decide whether to continue reading or discard the message with the choice of typing either a d, or a carriage return. How simple! Of course, having also used VM I know how hard it is to quickly zip through your inbox with readerlist. Perhaps there could be two lists: one automatically spraying out messages sent to it, the other digested by your program from the messages of the first. It seems to me that depending on your software, choosing either one of the two alternatives will inconvenience somebody. I don't find the volume of mail to be too large for comfort; in fact I find that humanist postings are generally a bright point in my day. However, some discussions (such as the endless programming language one) probably need a little restraint. In general, a suggestion that a discussion might better continue by private mail, and then be summarized or digested to the list later on, can prevent a lot of the duplication that can occur. A little self restraint can also help. I think most discussions of programming language virtues are pretty useless for a number of reasons.
I refrained from making that assertion in order to avoid feeding the fire of discussion on a topic I was tired of. We can all apply this technique. Anyhow, this started out small and ended up long. Feel free to quote or summarize me if I said anything good, or let it all ride, if you find nothing new. David G. Durand Brandeis University ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 08:49:13 PST From "John J Hughes" Two thoughts about making HUMANIST more humane. (1) Offer HUMANISTs two ways of receiving HUMANIST information: A. Everything--all messages B. Only messages in user-specified interest areas. Currently, every HUMANIST receives everything. But why not allow HUMANISTs to decide what they want to receive? For example, if HUMANISTs were restricted to one topic per message (with no limit on the number of messages that could be sent), and if each message had a sender-designated topical header (e.g., HYPERTEXT, TEXT RETRIEVAL, OS/2), and if HUMANISTs had the _option_ of asking only to receive messages on certain topics, and if your software is or could be made smart enough to recognize topical headers and to send individual HUMANISTs only those messages that fall within their specified topical areas, then those of us who are not interested in learning about `postholes' or why we should learn Danish, for example (no offense meant!), would not have to spend time reading and/or printing such messages. Furthermore, if HUMANIST had on-line user-profile files that HUMANISTs could edit, then users could freely update or change their specified areas of interest. (2) Here is a more radical idea that does _not preclude_ continuing on with (1) A or adding (1) B. That is, (1) A & B and this idea, (2), could all be used concurrently. They're not mutually exclusive.
Instead of the fileserver at UToronto sending HUMANIST messages to HUMANISTS, why not give HUMANISTS the _option_ of accessing the UToronto fileserver for HUMANIST messages? In other words, this would be a "don't-call-us, we'll-call-you" option. Here is a modification of that idea that appeals to me. Why not have a _version_ of HUMANIST that operates like a bulletin board or like a dial-up conferencing system (e.g., CAUCUS)? That would allow HUMANISTS to access a structured system in which messages and information were divided by topic. After logging on to such a system, HUMANISTs could select the topical areas in which they wished to read or post messages. Each topical area could have an on-line local or remote "manager" (pick any title) who was responsible for overseeing the contents of the area(s) he or she was responsible for. This would take some of the work off Willard. I don't believe such a system would diminish the usefulness of HUMANIST to those who prefer receiving information in a more structured and selective fashion. And separating discussions into topical areas would, I believe, facilitate _better_ discussions. It would help to keep them more focused and more structured, and it would allow HUMANISTs to see and interact with the whole discussion--from its inception to the present--instead of getting a message here and there, now and then, on a given topic of interest and trying to save and file such messages in a coherent fashion, which is the way HUMANIST now works. Right now, HUMANIST seems like a "one-room" discussion group that has grown so large that some participants are walking away because of the volume of information they are being asked to process--too many voices talking about too many things all at the same time in one room.
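[Editorial sketch: the topical-header option, (1) B, proposed above amounts to a simple routing filter. The message and profile formats below are invented purely for illustration; Python is used as a neutral notation, not as anything the list software actually ran.]

```python
# Hypothetical sketch of option (1) B: deliver to each member only the
# messages whose sender-designated topical header is in that member's
# user-profile of interest areas. All names and formats are invented.

def matches_profile(message, profile):
    """True if the message's topical header is among the member's interests."""
    return message["topic"] in profile["interests"]

def deliver(messages, profiles):
    """Map each member to the subset of messages they asked to receive."""
    return {
        member: [m for m in messages if matches_profile(m, profile)]
        for member, profile in profiles.items()
    }

messages = [
    {"topic": "HYPERTEXT", "body": "..."},
    {"topic": "TEXT RETRIEVAL", "body": "..."},
    {"topic": "OS/2", "body": "..."},
]
profiles = {
    "smith": {"interests": {"HYPERTEXT", "OS/2"}},
    "jones": {"interests": {"TEXT RETRIEVAL"}},
}
mailboxes = deliver(messages, profiles)
# "smith" receives the HYPERTEXT and OS/2 messages; "jones" only TEXT RETRIEVAL.
```

Editing the profile dictionary is the on-line user-profile update the proposal describes; everything else stays as it is.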
By having a version of HUMANIST that functioned as a "multi-room" set of somewhat more structured discussion groups, and by allowing HUMANISTs to move from "room" to "room," you might help end some of the frustration of information overload without diminishing the usefulness of HUMANIST. Large "multi-room" conferencing systems like BIX and the conferences on CompuServe function as useful sources of information without inundating their users with all sorts of stuff they may not be interested in. And HUMANISTs interested in "direct contact" with other HUMANISTs can always e-mail them on BITNET, NetNorth, EARN, etc. I hope this is not just one more piece of "junk mail" that will precipitate some HUMANIST into writing and asking to be removed from the fileserver's list! John John J. Hughes XB.J24@Stanford ------------------------------------------------------------------------ Date Fri, 11 Mar 88 10:07:23 EST From elli@husc6.BITNET (Elli Mylonas) Please don't make humanist into digests! I appreciate the problems of those who have to use rdrl in CMS, and who don't see headers, but that is what headers are for. If you are not interested in the topic, then discard it! Digest form essentially forces one to accept or discard all of the messages in a digest. RISKS and the Mac Digest are good examples of this. Single message format is a much easier way to sift through postings. I also would cast my vote against Humanist sub groups, because that too is a de facto exclusion of possibly interested contributors to different topics. Aren't breadth and a wide range of interests that ultimately come together supposed to be characteristic of humanists? Finally, some guidelines for contributors that may help keep volume down: 1. Some conversations go on a long time, because what is being discussed is a matter of religion, garbed as arguments. An example of this is the programming language discussion.
Not much one can do about that, except to exercise self restraint, or to address oneself to the individuals by mail. 2. If a conversation seems vacuous, better to ignore it than to add one's mite to it, especially if that mite refers to the vacuity of the discussion. That way the conversation will die a natural death. 3. One way to avoid clogging the net is not to talk about what to talk about. 'nuff said. 4. Willard, do you want to step in and call a halt to things when they seem to be going too far? Are we ready to abide by his decisions? ------------------------------------------------------------------------ Date: 10-MAR-1988 18:53:24 GMT From ARCHIVE@VAX.OXFORD.AC.UK I vehemently object to the notion of a single subject only! I am sending under separate cover (i.e. to umanist erself) a note on a topic which appears sporadically in amongst all the tosh about funny Scandinavian accents. How is common sense ever to re-assert itself if we all have to follow whatever lame brained topic turns up? How indeed are new topics ever to turn up? I think you should leave well alone: Humanist has always been unpredictable. Either that or you will just have to accept the responsibility of saying "Any further messages on topic X will be (a) trashed (b) trashed unless they're REALLY NEW (c) put on hold till the end of the week, when I shall circulate a bumper edition on topic X."
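[Editorial sketch: several contributors above ask for a digest with a table of contents of subject lines and a numbered, easily-locatable marker at the head of each item. A minimal formatter along those lines follows; the (author, subject, body) message format and the "%n%" marker string are assumptions for illustration only.]

```python
# Minimal digest builder: collect numbered subject lines into a table of
# contents, then emit each message behind a searchable "%n%" marker so an
# editor search ("%5%") jumps straight to note 5. Formats are invented.

def build_digest(messages):
    toc = [f"{i}. {subj} ({author})"
           for i, (author, subj, _) in enumerate(messages, 1)]
    body = [f"%{i}%\nFrom: {author}\nSubject: {subj}\n\n{text}"
            for i, (author, subj, text) in enumerate(messages, 1)]
    return "Table of Contents\n" + "\n".join(toc) + "\n\n" + "\n\n".join(body)

digest = build_digest([
    ("CHAA006", "Digest format", "I must raise a personal objection..."),
    ("David Sitman", "Present configuration", "I cast my vote for..."),
])
```

Readers who prefer the surprise of each message in turn can simply search past the table of contents; those in a hurry read only it.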
========================================================================= Date: Fri, 11 Mar 88 22:30:46 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: The TLG & PHI/CCAT Texts (59) ------------------------------------------------------------------------ Date: Friday, 11 March 1988 1717-EST From KRAFT@PENNDRLN Subject -- TLG and PHI/CCAT texts and resources Recent HUMANIST communications from Richard Goerwitz and John Haviland relating to textual data from TLG and/or PHI/CCAT lead me to attempt to clarify some issues: Homer is available from TLG, and is part of the massive collection of data on the TLG CD-ROM "C" (as well as on "A" and "B"). It can also be acquired separately, on tape (from TLG) or diskettes (from CCAT, with TLG permission). CCAT regularly supplies IBM/DOS and MAC diskettes, although not everything can be done promptly! Searching these materials can be done in a variety of ways, including the sophisticated systems on IBYCUS or on IBM/DOS (R. Smith) or on Apples (G. Crane). The TLG coding is complete -- there is nothing "limited" about the diacritics -- even if one wishes to characterize it as "idiosyncratic" (I prefer "transparent", at least for the transliteration scheme). As for the biblical materials widely circulated from CCAT, we regularly supply programs to permit the user to reformat the TLG beta ID coding into explicit coding for each line, and to permit the user to select how much of the complex Hebrew data is wanted (e.g. omit cantillation, omit vowels and cantillation). These programs are on a "utilities" diskette that is provided free with diskette orders (Richard got the material on tape, and may not have known about the utilities). Source code is included with the programs, with all the necessary code to convert the TLG beta ID format into explicit (book, chapter, verse) references. These programs also would be useful for the Homer and other TLG texts. 
Bob Kraft (CCAT) ------------------------------------------------------------------------ Date Fri, 11 Mar 88 08:08:35 PST From 6500rms@UCSBUXA.BITNET Subject -- TLG beta-code use As part of our TLG search software, I have source code modules which extract small passages from TLG texts and convert the beta-code markings into normal references, e.g., Plato: 17 b 3. In addition, I have written a conversion program which then prints that raw beta-code text in fully accented Greek on EGA, which can then be used with the BYU Concordance Software, a Toshiba P351 (you need both downloadable fonts, so a 321 might not work), or into a Nota Bene SLS file. These modules are written in C, and they would need to be combined into a stand-alone program. If anyone is interested, let me know. I might be able to ARC the relevant modules together and place them on the fileserver. Randall M. Smith (6500rms@ucsbuxa.bitnet) ========================================================================= Date: Fri, 11 Mar 88 22:34:45 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Languages on HUMANIST, &c. (332) ------------------------------------------------------------------------ Date: Thu, 10 Mar 88 10:06 CST From LANGUAGE(s) In fact, the only language we all speak on the network is ASCII. In that sense the mail from K.P. Donnelly was right to the point. When the discussion on natural languages spoken by members of the HUMANIST network bears only on the fact that "you speak the language you feel comfortable with, or the one that your receivers will understand", or that "Humanists, by nature or by culture, understand more than one language", nothing new is added to common knowledge. We simply transfer to computing what has been agreed on since the Middle Ages.... The new fact to be addressed is natural languages mediated by computers, which brings us back to ASCII.
In the actual situation, French, for example, is not "speakable" (pardon me for the barbarism) on the network. The use of meta-diacritics or false diacritics like [d/'ej/`a], or [d'ej`a], or any possible combinations (cf. TEX and LATEX) other than the pure and simple accents is something that a French user cannot put up with. What is the point of transliterating your own language for someone who will un-transliterate it at the end of the communication? The same goes for languages with a similar structure. Then, why not drop the accents? In fact, it is the solution I prefer when I write in French on BITNET. The problem is: IT IS NOT A SOLUTION: it is an adjustment, deeply unsatisfactory. Just write the simplest note in French to another francophone user and you will see how many times you will be tempted to use brackets to avoid confusion, double-entendre or obscenity... Then the language you feel "comfortable" with on the network will have to be one that ASCII, in its short version, supports. Other attempts simply lead to some kind of pidgin or creole.... TOO MUCH MAIL? 1) when postal services came into effective action (it has not happened yet in Canada!) I am sure that the Humanists of that time complained about getting too many missives on their desk... 2) why not let it go for a while before trying to impose a more rigid regulation? 3) in the long term, I find that the idea of a compendium of the mailed communications is very interesting. BUT, one factor that I like with HUMANIST is the flexibility, promptness of intervention, and all that which is precisely related to the new medium: the computer. Would that be lost in the process of getting all the material together? 4) Hints: extensive use of Delete (everybody knows that) and Print for the more interesting (or extensive [no correlations]) communications. You may then take the printouts with you on the beach and have a drink while parsing through your mail... 5) and thanks to Willard McCarty.
What would it be if you had to run all over the world to deliver the mail? Jacques Julien. ------------------------------------------------------------------------ Date: 09 Mar 88 09:20:26 gmt From R.J.Hare@EDINBURGH.AC.UK Subject -- Languages, Volume of HUMANIST mail (about 12 lines) 1) If HUMANIST is to use a common(ish) language other than English (American) it seems to me that the Esperanto suggestion is a good one - no accents or diacritics, and a well defined structure (like at least some programming languages?). Also it would give me an excuse to dust off my books on the subject... 2) Re the volume of mail on HUMANIST. Like some of the recent correspondents, I am beginning to find the volume of mail a little bit OTT, and I too am having to delete messages after the first screenful, if nothing interesting is said. An additional problem here at Edinburgh (which may be common to other sites) is that we have a time/size limit on our HUMANIST mailbox of 6 months/200 messages. Currently, we are cycling through our 200 messages in about three weeks - this makes life extremely difficult for anyone who can't look at the board for longer than this due to absences. Roger Hare. ------------------------------------------------------------------------ Date: 9-MAR-1988 16:51:31 GMT From CHAA006@VAXB.RHBNC.AC.UK Subject -- Re: ISO 88xx (multilingual, eight-bit ASCII) Kevin Donnelly asks "Is anyone out there using the new ISO standard, IS 8859/1". I'm not, and I confess I hadn't heard of it. Does it replace ISO (DIS) 6937, or augment it? (ISO 6937 had space for sixteen non-spacing diacritics, of which thirteen were pre-allocated at the DIS stage). ** Phil. ------------------------------------------------------------------------ Date: Wed, 9 Mar 88 10:21:56 PST From Laine Ruus Subject -- Vox populi....? (8 lines) [In Swedish:] We are all adults here, after all, and therefore not so blue-eyed as to believe that Vox populi is what rules the world. 'Vox', certainly, but the vox of the minority, not of the silent majority. The object of communication is to convey information. If you communicate with a deaf-mute, you do it in a way the deaf-mute understands. If you communicate with a group whose only common language is English, then........ ------------------------------------------------------------------------ Date: Wed, 09 Mar 88 09:39:35 DNT From Hans Joergen Marker Subject -- A provocation [In Danish:] Reply to Birgitte Olander: It is rather doubtful whether HUMANIST is the best-suited medium for a Nordic discussion group. One drawback, at any rate, is that the Scandinavian letters come through so miserably garbled in transmission. (For example, I get your } (aa) as an o with an accent and your % (oe) as an exclamation mark.) Regards, Hans J%rgen Marker. ------------------------------------------------------------------------ Date: Thu, 10 Mar 88 09:21:10 PST From cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject -- Re: America the Golden (17 lines) The number of hispanists who are sophisticated enough to know about and use electronic mail can probably be counted on the fingers of one hand; and as far as I know I am the only one who uses Humanist. [In Spanish:] But if you would like me to post my notes in Spanish, I'd be delighted to. For now, though, I don't see the value of it. Charles B. Faulhaber Department of Spanish UC Berkeley CA 94720 bitnet: ked@ucbgarne internet: cbf@faulhaber.berkeley.edu ------------------------------------------------------------------------ Date Thu, 10 Mar 88 07:40:59 CST From Richard Goerwitz Subject -- Spanish in America And there was I thinking that America had a wider racial and linguistic mix than Norfolk! Would Richard Goerwitz care to comment on the complete lack of Spanish contributions to Humanist from his country? On his side of the Atlantic there are far more Spanish speakers than on ours!
Sebastian Rahtz My posting about Americans was intended to allay fears that we American Humanists were interested in seeing American English become a kind of standard. If you are curious why Spanish is not used in this country, the reason should be quite obvious. Why did the French end up speaking a dialect of Latin instead of Gallic? Clearly, Latin was the prestige language in the Western Mediterranean at that time. This is not to say that Latin is better than Gallic. Likewise, I am not making the absurd claim that Spanish culture is "lower" in some sense than American. What I am saying is that we have a lot of poorly-educated Spanish-speaking people coming into this country whose dialects would not generally be useful to emulate. Also, they and their countries of origin are often militarily and economically dependent on the U.S., again reinforcing the perception that they should be learning English rather than the reverse. This is not an excuse for the failures of American economic policy in Central and South America. It is simply an explanation for why Spanish is not well known. I am sure that if Mexico had a rich economy, was producing all kinds of scholarly literature, and generally looked like a desirable culture, Americans would learn its language. Now, a final word about the racial makeup of America. Indeed, it is diverse. But the reason for our lack of diversity in language should be obvious. When my ancestors came over here, they were not rich folk. So, to work, they could not force their native tongue on Americans. Moreover, once here, they needed a common language with which to communicate with various other ethnic groups, as well as with the natives. English became the natural choice. Again, if my ancestors (Prussians, so watch out :-)) had come into a state like, say, Czechoslovakia, which has a German-speaking "area", and lies near several German-speaking countries, they might have found it more useful to retain their dialect.
They did not, and so naturally they learned English. They even gave up speaking German in the home, since they could see no basic need for it; in fact, they felt strongly that the children should be fully prepared to integrate into the mainstream of American culture. But to return to the point, American humanists, in spite of the monolinguality of their society, DO find it useful to know foreign languages. And, in spite of the extreme difficulty of getting exposure to these languages here, generally learn them anyway. In sum, your American correspondents on the HUMANIST do not wish to impose American English on you! Quite the contrary! Write in any language you please! And please, those out there who would like to harp on American linguistic parochialism, please don't view this as some great moral failure. Our monolinguality is merely a product of the external conditions in which we find ourselves. If you folks had been placed in a similar situation, you would have acted in exactly the same way. For this I have superlatively convincing proof: Us (we are your relatives, after all - not all of us "black sheep"!). :-) -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ------------------------------------------------------------------------ Date 11 March 1988, 15:17:51 EST From JLD1@PHX.CAM.AC.UK Subject -- Accent coding (20 lines)

> Suggestions for accent encoding. An accent follows the letter to which it belongs.
>
>   acute <            grave >            umlaut |
>   circumf. ^         cedilla ~          tilde ~
>   hachek {v}         breve {u}          ring {o}
>   macron {-}         dot over {:}       dot under {.}
>   ogonek {;}         comma {,}          bar under {=}
>   hook under {h}     double acute {<<}
>
> Special letters and signs:
>   German ss {ss}     Spanish ? {?}      Spanish exclam. {!}
>   Paragraph {P}      Section {S}        Copyright {C}
>   Trademark {T}      Registered {R}     degree {n}
>   dotless i {i}      crossed d d~       crossed D D~
>   Polish l l~        Polish L L~        small eth d<
>   Scand o o{/}       Scand O O{/}       Dutch ij i+j
>   ae diphth a+e      oe diphth o+e      thorn t+h
>   AE diphth A+E      OE diphth O+E      cap. thorn T+H
>   yogh {3}           cap. yogh {C3}
>   wyn {w}            cap. wyn {Cw}
>
> Font changes:
>   open or close italic \ or _
>   open or close bold \\ or __
>   open or close bold ital \\\ or ___
>   open/close Greek [g[ ]g]
>   open/close Hebrew [h[ ]h]
>   open/close Cyrillic [c[ ]c]
>   open/close small size [s[ ]s]
>   open/close superscript {^ ^}
>   open/close subscript {| |}
>   open single quote {' or `
>   close sing. quote '} or '
>   open double quote {" or ``
>   close doub. quote "} or "
>
> Conventions for representing the characters used in accents, etc.:
>   less than {<}      greater than {>}   hat {^}
>   vert. bar {|}      twiddle {~}        underline sign {_}
>   grave sign {`}     plus sign {+}      backslash {\}
>   open curly {{}     close curly {}}    plus-or-minus {+-}
>
> We've been using these conventions for some years, and most users seem happy with them. They use only the ASCII character set, so are possible from most keyboards and on most VDUs, and print on most modern lineprinters.
> John Dawson JLD1 @ uk.ac.cam.phx

========================================================================= Date: Fri, 11 Mar 88 22:43:10 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Databases, DBMS, Mark-up, &c. (148) ------------------------------------------------------------------------ Date Thu, 10 Mar 88 16:18:50 CST From D106GFS@UTARLVM1 A potpourri of minor comments: 1) w.r.t. Jim Coombs re. retrieving 'slang' words from an online dictionary: If the retrieval engine has an operator for the 'dominance' (linguists' jargon for 'contains', more or less) relation, then one need not specify all levels at which a feature might be found.
For example, one could ask for "entry containing feature 'SLANG'", instead of "entry with feature 'SLANG' OR sense with feature 'SLANG' OR sub-sense with feature 'SLANG' OR..." For certain uncommon purposes, one should also be able to specify searches which do *not* percolate down through contained elements, just as some versions of Chomsky's syntactic theory have a notion of 'bounding nodes' which block certain dominance relations' effects. 2) w.r.t. whether an em-dash is punctuational, presentational, or descriptive markup: I would say that any element can be assigned to any category, depending upon interface. If I type two hyphens, I am typing what we traditionally call "punctuation", but the program may know to store "oh yes, that's one of them m-dash thingies", or store some number of hyphens, or store "" or some such. As long as the mapping from input to semantics is one-to-one, the advantages of descriptive markup are kept available. It doesn't matter very much to the computer what I type or what is stored. A more challenging example is typing the letter sigma in Greek. In word-final position it is printed differently than it is elsewhere. So by simply typing 's' for sigma, are we doing descriptive markup? I would say yes, because we are specifying *the thing that is salient to us as writers*, rather than the thing which is salient to the printer, etc. 3) w.r.t. the bulk of HUMANIST: I would prefer a 'digest' form with authors' subject lines extracted to form a table of contents at the top. This could be done by program, I trust. I would also suggest putting a number on each note in the table of contents, and an easily-locatable string at the start of each actual note. For example, one could then get to note 5 by using an editor command to search for "?%" or something. 4) I will send, following this posting, an exec program for CMS, which will more conveniently load all of your HUMANIST MAIL, while leaving other things in your in-box (i.e. reader) untouched.
It also should be much faster than 'rdrlist' or 'receive'. I place this program in the public domain. Steve DeRose Brown University and SIL ------------------------------------------------------------------------ Date Thu, 10 Mar 88 19:47 EST From Subject -- re: relational DB's and text representation (no SGML) (86 lines) This posting is a response to Jim Coombs' posting on the 8th. My comment is mainly sparked by Jim's assertion that what he wants is "a relational database management system that supports full-text fields." My personal feeling is that relational databases may prove useful in doing the internal bookkeeping for a good text handling system, but that the primitive operations they provide are very clumsy for creating a base for a text handling system. The indexing and retrieval of large numbers (typically > 1000 records) of fixed format data items is inherently different from keeping track of a relatively small number (typically < 1000 texts) of inconsistently formatted texts. Databases as they currently exist have been very carefully optimized for handling business records, and I think that Jim's follow-up posting demonstrates the difficulties of using such a tool for text management better than I can. I think that there's a danger in knowing what computers can already do well that can afflict people who are trying either to create new tools or to do things that current tools were not designed for. This danger seems to me to consist in having faith in the extendability of existing methods. Such faith seems well justified, since the methods in use are obviously powerful, and the experts all seem to claim that this is the ``best'' technology available. Unfortunately, frequently the experts are expert in the application of the method they are recommending, and not the discipline whose problems need to be solved.
I think a few experiences of such expert advice are what cause some people to bitterly conclude that ``computer scientists don't know anything.'' This is only half right: usually they just don't know anything about linguistics, or writing scholarly (as opposed to scientific) papers, or whatever else the problem actually is. In fact, I think that scholarly text handling is in many ways a harder problem than database design: keeping track of uniform records is an inherently more structured task than keeping track of the heterogeneous materials and multiple points of view involved in textual analysis. In fact most of the things that people need in text analysis have never been done in anything like an integrated way. I think a lot of experimentation is required before the most useful set of tools for the humanist can be a matter for agreement, and it will probably be years longer before such things are routinely implemented well. Just in case you think I'm a pessimist, I think that much of the work needed for at least some problems is being done now by commercial software houses who are working to solve the problems of organizing notes for use in office work. I think many of these tools will provide parts that can be forced to fit the jobs humanists need to do. In summary, I think that the focus both in designing new systems and in planning applications of existing technology should be on what we want to do, regardless of what the technology may provide right now. I think I would not have responded to Jim had he said: "What I want to do is this, and I expect that I can do it with a relational database if it will make a concordance of long text fields." When we build things on computers we need to keep a close eye on what the exact purpose is (i.e. not a relational database per se, but something that will do X). 
I would like to offer a last word on behalf of the computer scientists (as I am one, at least by education): they are often trainable, if you give them lots of information and watch them closely. Also, frequently they know more about the tools than anyone not doing computers professionally has time to learn. Thus they can save the exploration of many blind alleys in technical work, as long as that work is properly directed to the ultimate goal (by the humanist who needs the tool). --- David G. Durand Brandeis University DURAND@BRANDEIS.BITNET ***** Indirectly related strong recommendation for reading material follows ***** For many good ideas, not deriving from the ``conventional wisdom'', Ted Nelson is worth reading. Some of what he proposes seems to me not to be useful, and some not to be feasible, but much of what he says is better thought out than it is presented. Computer Lib/Dream Machines is out in a trade edition by Tempus (Microsoft Press's new imprint) and should be easily orderable from a local bookstore. There is also a new edition of Literary Machines available. Send $25, + $5 for overseas, + $5 for purchase order to: Project Xanadu 8480 Fredricksburg # 138 San Antonio TX 78229 He may be a little crazy, but he's smart and many of his ideas are good. ========================================================================= Date: Fri, 11 Mar 88 22:47:58 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Query: word-processor conversion software (20) ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 16:06 EST From Does anyone have experience with commercial software packages intended to convert files from one word-processor format to another? My school is interested in acquiring such a program to convert between the major commercial word-processors in use here, esp. Microsoft Word and WordPerfect. Does anyone know how reliable these programs are? How many are available? 
Which are best? Any advice would be greatly appreciated. David Carpenter ST_JOSEPH@HVRFORD ========================================================================= Date: Fri, 11 Mar 88 22:51:08 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Programming languages &c. (39) ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 08:53:13 CST From: "Eric Johnson Liberal Arts DSC Madison, SD 57042" Subject: Re: Anaphora in Homeric Greek by computer (35) I can never understand why those interested in complex text analysis do not learn and use SNOBOL4. It is an uncommonly powerful language (it has been called "dangerously powerful") with a wide range of built-in functions for all kinds of string manipulations. There are excellent compilers that run on all common hardware. As another new member of HUMANIST, I also find the voluminous mail very interesting. I think it should be continued in the present form. Eric Johnson ERIC@SDNET ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 08:44:21 CST From: "Eric Johnson Liberal Arts DSC Madison, SD 57042" Subject: Re: Computing for humanities students (20) Dakota State College, with its new mission of computer integration in every program, teaches a course in SNOBOL4 and SPITBOL, required for English majors and commonly taken by all arts and humanities majors. The course is ideal for arts and humanities majors for the reasons given by Susan Hockey (see ICEBOL 86 PROCEEDINGS, pp. 1-25). 
For additional information, please contact me: Eric Johnson ERIC@SDNET.BITNET Professor and Head, Liberal Arts Division Dakota State College Madison, South Dakota 57042 (605) 256-5270 ========================================================================= Date: Fri, 11 Mar 88 22:54:26 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Genealogical software (26) ------------------------------------------------------------------------ Date: 11-MAR-1988 08:50:14 GMT From: S200@CPC865.UEA.AC.UK Genealogical Software There are some booklets in a series called Computers in Genealogy published by, or in association with, The Society of Genealogists, 14, Charterhouse Buildings, LONDON EC1M 7BA UK. They contain information about computer programs written commercially and by members of the society, for micros only, and describe users' experiences with both machines and programs. They could be a bit too beginner-ish, as many of the articles are to do with getting started, difficulties with disks, documentation (for the machine), etc., which we (and the person asking the original question) would know how to sort out. However, they do describe programs available, which could be useful. Pat Newby (P.NEWBY@CPC865.UEA.AC.UK). ========================================================================= Date: Fri, 11 Mar 88 22:56:53 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Editorial: Digestion (31) Dear Colleagues: The latest batch of HUMANIST mail is my first consistent response to the several demands that something be done about the amount of conversation among us. As you'll see from one of the digests, some people rather like the former deluge, and I'm guessing from this that several will not like the digesting of our phantasmagoric plethora into neat and rationalized bundles. There's no pleasing of all of the people all of the time, I suppose. 
Anyhow, all I ask is that you live with this twist of man-made fate for a while, as I will do, and see what it's like. You're all welcome to let me know what you think of it. I seem to be wisely outvoted on the question of simultaneous conversations. Actually, in one respect I have always sided with the majority. As the sorter of marvellous variety, however, I naturally wish for the minimum number of categories per day. No polemics on my part, just the pressing realization of one mortal's limits. I'll stretch 'em, and we'll see what happens. Thank you all for your participation, your interest, and your patience. If I didn't love doing this, I'd have gone nuts long ago, and I guess you're much the same. Willard McCarty mccarty@utorepas Date: Sun, 13 Mar 88 17:53:30 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Volume of mail on HUMANIST &c. (121) [Mail on the subject of mail continues to trickle in, and I continue to think about the problems some people are having with the volume of it. At the moment the two most reasonable possibilities seem either to send out a few digests each day sorted roughly by subject OR to send out one digest with everything in it. Michael Sperberg-McQueen has already written software that will clean up the sometimes voluminous headers and produce a table of contents, so in either case IF SENDERS OF MAIL ARE CAREFUL WITH WHAT THEY SAY ON THE SUBJECT LINE then receivers of the digests shouldn't have much difficulty navigating around in them. Please note that it is also important for senders to discuss insofar as possible ONE SUBJECT in each message. Also note that HUMANIST cannot become a bulletin board or news service in which subscribers declare their interests in particular topics and only receive mail on those topics. Everyone will continue to receive everything. I, for one, think that the character and value of HUMANIST are intimately tied to the commonality of our conversations. 
One way or another you can expect a drastic reduction in the *number* of messages you receive from HUMANIST each day. The total amount of *storage space* required to hold HUMANIST mail cannot be reduced except through editorial censorship or self-control. The former I refuse to practice unless pushed to it by gross indecencies -- not to be expected here at any rate; the latter is generally a good thing, I suppose, but as our numbers increase such control will become more of a threat to creative and informative expression. The morals of this story are, then: (1) discuss only one subject per message; (2) be as clear and concise as possible in your subject lines, using a question mark if you're asking for information, eliminating it if you're not; and (3) look at your mail every day. Comments on any of this continue to be welcome. --W.M.] ----------------- Date: Fri, 11 Mar 88 12:11 EST From: Roberta Russell Subject: volume of mail on HUMANIST I agree with Mark Olsen. Why all the whining? Can't these people do a listing of their newmail, attend to those items of interest, and delete/all the rest? Nobody says you have to read everything. Turning HUMANIST into another BBS will destroy it. Roberta Russell % LISTSERV UTORONTO 3/12/88 % Roberta Russell humanist@utoronto 3/11/88 volume of mail on HUMANIST ----------------------------------- Date: Sat, 12 Mar 88 22:59:36 est From: amsler@flash.bellcore.com (Robert Amsler) Subject: Re: Editorial: Digestion (31) Conventional ARPANET digests long ago developed companion programs which `undigested' the data and turned it back into individual mail messages to which people could respond. The key to both having your cake and eating it too with regard to one subject at a time is to save up the messages re: some topic until you have enough for one digest or the flow stops and then to distribute the digest. 
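[A sketch, not one of the ARPANET programs Amsler mentions, of what such an "undigesting" companion program does: split a digest back into individual messages. It assumes the separator convention used in these HUMANIST digests, a line beginning with a parenthesized item number followed by a run of dashes.]

```python
import re

# Assumed separator style: "(2)------------------..." at the start of a line.
SEPARATOR = re.compile(r"^\(\d+\)-{10,}", re.MULTILINE)

def undigest(digest_text):
    """Return the individual notes in a digest as a list of strings."""
    parts = SEPARATOR.split(digest_text)
    # parts[0] is the digest header / table of contents; the rest are notes.
    return [p.strip() for p in parts[1:] if p.strip()]
```

Each recovered note could then be re-mailed as a separate message, which is what lets readers reply to one item at a time.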
That way digests are about one topic and still there need not be any control on what people can talk about at any time; it is controlled in terms of when it is `published'. % BITMAIL SUVM 3/12/88 % Robert Amsler MCCARTY%UTOREPAS.BI 3/12/88*Editorial: Digestion (31) --------------------------------------------- Date: Sat, 12 Mar 88 12:42 PST From: Sterling Bjorndahl - Claremont Grad. School Subject: pro digesting Willard: I cast my vote "pro" the current level of digesting. Thank you for doing it. I frequently read my mail via a 1200 baud modem, so paging through every message can be very time consuming. Keep up the good work! Sterling % BJORNDAS CLARGRAD 3/12/88 % Sterling Bjorndahl mccarty@utorepas 3/12/88 pro digesting --------------------------------------------- Date: Sun, 13 Mar 88 03:53:34 est From: amsler@flash.bellcore.com (Robert Amsler) Subject: Re: Editorial: Digestion (31) I guess my reaction so far is that I still get too many mail items from humanist. Some are also too small, so I'd prefer that some unrelated items be `bundled' together to make fewer larger packages. For me, the problem is that I subscribe to a dozen or more digests: AILIST, NL-KR, IRLIST, Space, Videotech, etc., and despite this, Humanist now looms larger than any 3 of them. I do not know the answer, but the situation is unstable. Offhand, humanist can impose on me to the extent of, say, 2 20K-byte bundles a day; after that it is too large. % MAILER CUNYVM 3/13/88 % Robert Amsler MCCARTY%UTOREPAS.BI 3/13/88*Editorial: Digestion (31) --------------------------------------------- Date: Sun, 13 Mar 88 01:16 CDT From: Wayne Tosh / English--SCSU / St Cloud, MN 56301 Subject: HUMANIST communications--volume & variety Rather than broadcasting out all communications to all subscribers, would it be possible to set up something like a bulletin board with topics which the subscriber could elect to browse and respond to? 
% WAYNE MSUS1 3/13/88 % WayneTosh/English-- mccarty@utorepas 3/13/88 HUMANIST communications--volu =========================================================== Date: Sun, 13 Mar 88 18:19:54 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Searching Homer (33) ------------------------------------------------------------------------ Date: Fri, 11 Mar 88 09:35:17 EST From: Paul Kahn For John Haviland: Software to do what you describe does exist, for the mix of equipment you have available. Whether what the student proposes has been done already or not I can't say, but the tools to do it are there. The tape from TLG at UC Irvine will contain the complete works of Homer as a flat file. Greg Crane (Dept of Classics, Harvard University, Cambridge MA 02138) developed a set of UNIX programs which generate an inverted index of all words in a TLG author file, and a set of search programs for locating any word or string within the file and displaying it on any device which supports a Classical Greek character set. These programs are available from Crane (known as the HCCP software) and have proven to be portable to all versions of UNIX. They were originally created on a VAX running 4.2 BSD, so you should be set there. George Walsh (Dept of Classics, Univ of Chicago, Chicago IL 60637) developed a set of Greek fonts for the Mac, and a matching version of MacTerminal with the Greek font hacked into the VT100 emulation, which lets you use a Mac as a terminal on the UNIX machine and display the Greek text. We have done a system which is used in the Classics and Religious Studies departments at Brown which uses an RT PC platform with 178 Greek authors on CD ROM, using Crane's index and search software and a simple front-end program written here. If you will give me a mail address I will send you some papers about it. 
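[A minimal sketch, in Python rather than the UNIX tools described, of the idea behind an inverted index like the one Crane's programs build for a TLG author file: map every word form to the line numbers where it occurs, so any word can be located without rescanning the whole text. The function names are invented for illustration.]

```python
from collections import defaultdict

def build_index(lines):
    """Map each lowercased word to the sorted list of 1-based line numbers."""
    index = defaultdict(list)
    for lineno, line in enumerate(lines, start=1):
        for word in line.lower().split():
            if index[word] and index[word][-1] == lineno:
                continue  # record each line only once per word
            index[word].append(lineno)
    return index

def lookup(index, word):
    """Return every line number on which `word` occurs."""
    return index.get(word.lower(), [])
```

The real HCCP software additionally handles the TLG's Greek transliteration and citation scheme; the index-then-search structure is the same.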
Or give me a call. Paul Kahn, IRIS, Brown University Box 1946 Providence RI 02912 401 863-2402 ========================================================================= Date: Mon, 14 Mar 88 20:48:07 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Miscellaneous notes (233) [Please let me know what you think of the following -- a gathering of notes on several topics with a handy table of contents. The numbers of the items in the table correspond to the numbers at the beginning of each note, each of which is enclosed in curly braces, e.g., {2}. Thus if you're interested in the second item, use your editor to search for {2}. -- W.M., with thanks to Michael Sperberg-McQueen.] Digest of Miscellaneous Messages HUMANIST Discussion Group 14 March 1988 Table of Contents 1 --------------------------------------------------------- Honorifics conference 2 --------------------------------------------------------- Date: Mon, 14 Mar 88 11:32:16 DNT From: Hans Joergen Marker Subject: Travelling plans 3 --------------------------------------------------------- Date: Mon, 14 Mar 88 11:47:37 DNT From: Hartmut Haberland Subject: Postcards 4 --------------------------------------------------------- Date: Mon, 14 Mar 88 10:26:43 GMT From: Sebastian Rahtz Subject: why not learn SNOBOL 5 ---------------------------------------------------------- Date: 14 March 88, 10:10:45 MST From: Ken Rumery Subject: Job postings 6 ---------------------------------------------------------- Date: 14 Mar 88 12:53:20 gmt From: K.P.Donnelly@EDINBURGH.AC.UK Subject: Diacritics, 8-bits and IS 6937 7 ----------------------------------------------------------- Date: Mon, 14 Mar 88 18:40 EST From: J. K. 
McDonald Subject: El uso de otras lenguas en HUMANIST (37 lines) Digest ----------------------------------------------------------- Reed College, Anthropology and Linguistics Portland State University, Department of Modern Languages announce A Conference on |HONORIFICS|, Portland, Oregon, April 8-10, 1988 [for more information see HONORIFX CONFRNCE on the file-server] (2)--------------------------------------------------------------------- Date: Mon, 14 Mar 88 11:32:16 DNT From: Hans Joergen Marker Subject: Travelling plans From: Hans Joergen Marker Subject: Visit to New England in May In the days 26th to 29th of May I shall be attending a conference in Washington DC. Having spent so much of the Danish taxpayers' money, it would be reasonable to visit people and institutions with reasonable connection to my line of work (archiving of machine readable data, documentation and description of research data materials, computing in history and related fields). Does anyone on the HUMANIST discussion group have suggestions? My travelling plans are still open, but visits to people and/or institutions should take place within a reasonable distance in time and place from the conference. Although I am determined to give the taxpayers their money's worth, a visit to LA for instance would have to be extremely well argued for. All suggestions are most welcome. Hans Joergen Marker. P.S. Should I come across any one of you during my stay I promise not to speak a single word of Danish. Lou Burnard can confirm this, despite his later outbursts about "lingo". % LISTSERV UTORONTO 3/14/88 % Hans Joergen Marker HUMANIST 3/14/88 Travelling plans (3)--------------------------------------------------------------------- Date: Mon, 14 Mar 88 11:47:37 DNT From: Hartmut Haberland Subject: Postcards The Postcard issue suddenly reminded me of the following story which a Swedish friend of mine told me years ago. 
One Thursday afternoon, ten minutes before closing time of the shops, the Stockholm local radio announced that the Swedish State Monopoly (liquor stores, Systembolaget) would raise their prices from next week, and that the shops, in order to prevent people from stocking up with booze at the old prices, would be closed for three days, starting next day, Friday. Within a couple of minutes queues were forming in front of the shops, and people tried to get hold of as much of the cheap booze as they could carry in the ten minutes or so that were left. Now my friend (it was he who also told me "Of course I am paranoid, but that doesn't prove that they are not out to get me" - in full earnest, by the way) had the following explanation for the information `leak'. The whole thing was actually carefully planned (I don't know whether the price raise was part of the plot, or if whoever was responsible just took advantage of it). This was two weeks after Harrisburg, and the Swedish public was very worried about possible accidents in the Swedish nuclear power plants. So the whole thing was a test of how fast you could reach the Swedish population by sending out a radio message in case of a disaster of some magnitude. So this worked. But what does the postcard avalanche through BITNET prove? % LISTSERV UTORONTO 3/14/88 % Hartmut Haberland HUMANIST Discussio 3/14/88 Postcards (4)--------------------------------------------------------------------- Date: Mon, 14 Mar 88 10:26:43 GMT From: Sebastian Rahtz Subject: why not learn SNOBOL People don't learn SNOBOL routinely because (a) it isn't compiled and (b) it's not suited to applications over c.500 lines of code. Icon, mind you, is a different story. 
% LISTSERV UTORONTO 3/14/88 % Sebastian Rahtz humanist 3/14/88 why not learn SNOBOL (5)--------------------------------------------------------------------- Date: 14 March 88, 10:10:45 MST From: Ken Rumery 602-523-3850 CMSKRR01 at NAUVM Subject: Job postings The College of Creative Arts and Communication at Northern Arizona University has reorganized following several years of self-study. The college is now configured as follows: School of Art and Design School of Communication (incl. Sp.Comm, Telecomm, Journalism) School of Performing Arts (incl. Dance, Music, Theatre) Department of Humanities and Religious Studies (incl. Arts Management) Northern Arizona University is seeking a Dean for the College as well as Directors for each of the three schools. Screening opens March 21 and will remain open until the positions are filled (July 1). An earned doctorate in a discipline of the unit(s) to be administered is required. Details may be obtained from Dr. David M. Whorton, Associate Vice President for Academic Affairs, Box 4085C, Northern Arizona University, Flagstaff, Arizona 86011 % CMSKRR01 NAUVM 3/14/88 % Ken Rumery MCCARTY@UTOREPAS 3/14/88 No subject (6)--------------------------------------------------------------------- Date: 14 Mar 88 12:53:20 gmt From: K.P.Donnelly@EDINBURGH.AC.UK Subject: Diacritics, 8-bits and IS 6937 Someone was asking about IS 6937 as compared to IS 8859/1. If my understanding is correct, IS 6937 was based on teletex, which was designed in Germany as the successor to telex. It was made an international standard round about 1984. Part 2 of it defined an 8-bit extension to the ASCII character set, to allow for accented letters in languages with Latin-based alphabets as well as lots of other useful symbols like "pound" and "half". The unusual feature of it was that one of the columns of sixteen characters in the extended ASCII table contained "non-spacing diacritics". 
The idea was that "e-acute", for example, was represented by "non-spacing acute" followed by the usual "e". This has certain advantages, such as making efficient use of the eighth bit so that more characters could be accommodated, making it easy to strip a text of diacritics, and perhaps making alphabetic sorting algorithms simpler. But it has great disadvantages. It would need a fundamental rewrite of many editors and other programs if the number of characters were no longer equal to the number of bytes, whereas it would need only a minor change to most programs to allow them to cope with 8-bit text in the more recent IS 8859/1 standard. Many programs might cope without any change at all. So as far as I know IS 6937 never really caught on, whereas the newer IS 8859/1 standard looks like it is taking off. As to the relationship between them, IS 8859/1 certainly doesn't augment IS 6937. In fact IS 6937 has lots of characters which IS 8859 does not have, such as "1/8", "division (mathematical)", "capital OMEGA" and "ij ligature". There is some overlap. Seventeen out of the 94 extra character positions are common to the two standards. I don't really know much about all this. Can anyone else say anything more authoritative? Does anyone know of any printers implementing IS 8859/1? Kevin Donnelly % LISTSERV UTORONTO 3/14/88 % K.P.Donnelly@EDINBU HUMANIST@UTORONTO 3/14/88 Diacritics, 8-bits and IS 693 (7)--------------------------------------------------------------------- Date: Mon, 14 Mar 88 18:40 EST From: J. K. McDonald Subject: El uso de otras lenguas en HUMANIST (37 lines) Our colleague Faulhaber of Berkeley laments the very small number of Hispanists in the US who know how to use electronic mail (and/or who have become members of HUMANIST), saying that they could be counted on the fingers of one hand: I hope he has a monstrous hand! Perhaps he means Spanish speakers rather than Hispanists. In any case, the number will grow. 
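[A toy illustration, invented for this point rather than taken from either standard's tables, of the two encoding styles Donnelly contrasts. In the IS 6937 scheme a "non-spacing acute" code precedes the base letter, so "e-acute" is two codes; in IS 8859/1 it is one precomposed code. The single-character stand-ins below are assumptions, not the standards' real code values.]

```python
# Stand-in codes for the IS 6937 non-spacing diacritics column.
NON_SPACING = {"´": "acute", "`": "grave"}

def strip_diacritics_6937(codes):
    """Drop every non-spacing diacritic, keeping the base letters.

    With diacritics in their own column of the code table, stripping a
    text of accents is a simple filter, one advantage the posting notes.
    """
    return "".join(c for c in codes if c not in NON_SPACING)

def char_count_6937(codes):
    """Characters != codes: each diacritic+base pair counts as one character."""
    return sum(1 for c in codes if c not in NON_SPACING)

# "école" with an IS 6937-style leading acute on the first "e":
word = "´ecole"
```

Here `len(word)` is 6 but `char_count_6937(word)` is 5, which is exactly the "number of characters no longer equal to the number of bytes" problem that Donnelly says would force a rewrite of many editors.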
And not only in that part of North America called Canada. As Julien says of French, the lack of accents, of diacritical marks, of precision in the written language is unacceptable to the intellectuals of any era. Nevertheless, it is not for us humanists to supply the technological means that let us define and advance our vocation. What concerns us is to establish our objectives ever anew and to express them as constantly and lucidly as possible. The technicians will furnish the means we require. Goerwitz opens up the cultural dimension of this debate: the Franks learned Latin because it was the language of greatest civilizing reach in the world of those centuries of the early Middle Ages; what he left in his inkwell is that courtly Latin was a civilizing vehicle also for the intellectuals of all the western provinces, including in Rome, for all who strove to make themselves bilingual. That is, the clerics (and later the humanists of the European Renaissance) recognized that bilingualism, or multilingualism, humanized them. (The essential value of bilingualism among the ruling classes of Spanish America today reflects not the strength of the 600 hoped-for warships of the US, but the good fortune of being obliged to conduct an important part of life in conceptualizations achieved only through dedicated mental effort.) Of diacritical marks I shall perhaps speak another day; meanwhile, let us not say that Danish or Swedish has no place on HUMANIST; the distinct society of the Quebecois in Canada owes itself equally to the many French Canadians who know how to live with two languages. Some day the Hispanic giant in the US will awaken. % LISTSERV UTORONTO 3/14/88 % J. K. 
McDonald humanist@utoronto 3/14/88 El uso de otras lenguas en HU ========================================================================= Date: Mon, 14 Mar 88 21:05:23 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Digest: Volume of mail on HUMANIST (177) Table of Contents 1 Date: Sun, 13 Mar 88 13:29:57 CST From: D106GFS@UTARLVM1 Subject: Humanist bulk 2 Date: Sun, 13 Mar 1988 17:33 CST From: Robin C. Cover Subject: HUMANIST Digests 3 ---------------------------------------------------------- Date: Mon, 14 Mar 88 15:16 EST From: Subject: RE: Digest: Volume of mail on HUMANIST &c. (121) Digest (1)----------------------------------------------------------- Date: Sun, 13 Mar 88 13:29:57 CST From: D106GFS@UTARLVM1 Subject: Humanist bulk It seems to me the real problem is that the usual paradigm of mailing lists doesn't support the notion of *topics* of discussion. Even the ARPA mail header standard doesn't provide a 'category' field, but only 'subject' (of course, it also doesn't provide for persons' names very neatly, either...). If each user could easily throw away notes *by topic*, then the deluge could be stemmed. This would also work if you had software which could maintain information on people's interests, and mail them only what they wanted. But that is a mess in terms of their keeping your software updated as to their interests. Here's a proposal: Every 'Subject:' field must begin with a category, from a set of active categories known to HUMANIST at the time. There is also a reserved category "NewTopic", which is always active, and used to propose a new topic of discussion. The list of active categories can be distributed as a note from time to time. 
Given a category label on every note, users with mailers can easily skip past topics they do not care about, and simple software can be written for others (like VM users), to delete notes not of interest (I'll probably write it for myself anyway, and will distribute it if so). Examples: Subject: PostingLanguages: Why not Swahili? --- This would head a note which is part of the ubiquitous current debate. I personally prefer avoiding spaces in category names, to force them to be short; but it doesn't make any difference to the proposal. Subject: Markup: What's an em-dash? --- This would of course be part of a different discussion. Subject: NewTopic: Translation Theory --- This note would aim to start off a new discussion. I see these problems: First, category names must be standardized. One doesn't want people calling the same topic 'Language', 'Langage', 'Ling', etc. This can be handled by a simple exec you run before mailing out things, or by having an automatic posting machine return non-conforming mail to sender (which will get people to spell the categories right in a hurry, though they may not be happy). The XEDIT 'all' command (if you're on VM) will hide all but the 'Subject:' lines, so you could then review a day's notes on at most a few screens, and normalize any oddball categories, or add categories if they are omitted. Second, categories sometimes evolve, rather than springing forth. This is not a real problem; when any category grows too large or diverse, either the moderator or a participant can propose a NewTopic. Another advantage of this method is that it is easy to track discussions, and see what topics are hot. Also, it is easy to retrieve old mail on particular topics. To really do this right, one should allow dots or some other separators in category names, thus providing for hierarchical categories for the obvious cognitive reasons. For example, one might have "Bible.NT.Lexicology", or whatever. 
This approach should be familiar to most users, from library card catalogs, hierarchical filing systems, or other similar things. But it might be overkill to begin with, and it is easily addable. I have a CMS bulletin-board system which uses these principles; as soon as I hook it up to handle posting and retrieval requests from remote network nodes, I will be making it available to interested sites. Steve DeRose Brown Univ., SIL % MAILER UTARLVM1 3/13/88 % D106GFS@UTARLVM1 MCCARTY@UTOREPAS 3/13/88 Humanist bulk (2)--------------------------------------------------------------------- Date: Sun, 13 Mar 1988 17:33 CST From: Robin C. Cover Subject: HUMANIST Digests I appreciate the digest format of HUMANIST, despite the protests of a few readers who wanted nothing changed. I think the idea of "digesting" has several advantages: (a) It helps lend continuity to reading of HUMANIST mail if I can read several postings related to the same topic at the same time; (b) I think it will help contributors focus their comments on current topics (perhaps think a little harder about their contributions, or possible contributions) and thus cut down on inane chit-chat; (c) it reduces the HUMANIST mail traffic on the network, in terms of reducing the number of mail pieces; (d) it makes saving HUMANIST contributions much easier, if we want to save discussions on selected topics; (e) it makes ignoring selected topics much easier. I admit that I am on a VM/CMS machine where I have to peek the reader before I can do anything to the mail, and that "receive" (or even "readcard") gets expensive with so many mail pieces. But I suspect that most HUMANISTs are on VM...do you know? Finally, I had a discussion with Nick DeRose about the idea of introducing "TOPIC" as well as (more specific) "SUBJECT", where the taxonomy of "TOPIC" would be enforced, but the "subject" specification could be flexible. Of course, VM mailers would still use the "SUBJECT" line for what we mean by "TOPIC," but that isn't fatal. 
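[A sketch of DeRose's subject-line convention: every Subject: begins with a category, and dotted category names nest, so a reader following "Bible.NT" also sees "Bible.NT.Lexicology". Nothing here is existing HUMANIST software; the function names are invented for illustration.]

```python
def parse_subject(subject):
    """Split 'Category: rest' into (category, rest); None if non-conforming."""
    head, sep, rest = subject.partition(":")
    if not sep or not head.strip() or " " in head.strip():
        return None
    return head.strip(), rest.strip()

def matches(category, followed):
    """True if `category` equals `followed` or nests inside it by dots."""
    return category == followed or category.startswith(followed + ".")

def wanted(subject, followed_categories):
    """Decide whether a note's subject falls under any followed category."""
    parsed = parse_subject(subject)
    if parsed is None:
        return True  # non-conforming mail: deliver it rather than lose it
    category, _ = parsed
    return any(matches(category, f) for f in followed_categories)
```

A reader's mailer (or a simple exec, on VM) could run each incoming subject through `wanted` and discard what falls outside the followed categories, which is exactly the "throw away notes by topic" ability the proposal is after.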
This too would help focus HUMANIST discussions, I think...which sometimes seem a bit shallow otherwise. Nick contributed the HORDER exec, and I'll bet other HUMANISTs would be willing to contribute similar mail software for other systems. In any case, I like the greater organization afforded by the digests. Thanks!! Robin C. Cover ZRCC1001@SMUVM1.bitnet % MAILER SMUVM1 3/13/88 % Robin C. Cover Willard L. McCarty 3/13/88 HUMANIST Digests (3)--------------------------------------------------------------------- Date: Mon, 14 Mar 88 15:16 EST From: Nancy Ide Subject: RE: Digest: Volume of mail on HUMANIST &c. (121) It appears you are asking for a vote on the two methods: several digests, each on a topic, or one big digest with a table of contents. I emphatically vote for the former, which enables deletion of those digests that are not of interest and then lets me file the rest easily. I have spoken with other VAX users, and much of the perspective on mail comes from CMS users, who do not appreciate the problems of VAX mail. From the VAX perspective, one giant file would be much more difficult to handle. Also, VAX users are charged for the storage space for messages, even before they are read (in your jargon, "peeked" at). Several small digests will also help there. Above I meant to say that I have spoken with other VAX users who lament along with me that much of the perspective on mail... etc. Another feature of VAX mail is that one cannot back up and edit when typing mail messages. Now you know why my messages are full of typos. 
% IDE VASSAR 3/14/88 % Nancy Ide MCCARTY@UTOREPAS 3/14/88 *Digest: Volume of mail on HUM

=========================================================================
Date: Mon, 14 Mar 88 21:10:02 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Queries: Emblems & Arabic w-p (45)

Table of Contents
1 ---------------------------------------------------------
Date: 13 Mar 88 20:49 -0330
From:
Subject: Anyone in HUMANIST do emblems?
2 ---------------------------------------------------------
Date: Mon, 14 Mar 88 12:02:47 GMT
From: JLD1@PHX.CAM.AC.UK
Subject: Query: Arabic w-p

(1)-----------------------------------------------------------
Date: 13 Mar 88 20:49 -0330
From:
Subject: Anyone in HUMANIST do emblems?

If there are any HUMANISTs who are interested in emblem studies, and in particular in constructing an emblematic database using HyperCard, I would be happy to hear from them. In particular, if anyone already has a bunch of digitized emblem images, I'd be especially interested.

% LISTSERV UTORONTO 3/13/88 % dgraham@mun.bitnet humanist@utoronto.b 3/13/88 Anyone in HUMANIST do emblems

(2)---------------------------------------------------------------------
Date: Mon, 14 Mar 88 12:02:47 GMT
From: JLD1@PHX.CAM.AC.UK
Subject: Query: Arabic word-processing (8 lines)

Does anyone know of a word-processing system with full footnoting and font-changing facilities (such as would be provided by TROFF or TeX) which will work with mixed Arabic and English text? If so, for what machine(s), in what computer language, and under what operating system(s) does it run? Similar programs which do the job for mixed Hebrew and English would also be of interest.
John Dawson ( JLD1@uk.ac.cam.phx )

% LISTSERV UTORONTO 3/14/88 % JLD1@PHX.CAM.AC.UK humanist@UTORONTO 3/14/88 Query: Arabic w-p

=========================================================================
Date: Tue, 15 Mar 88 19:46:18 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Digest: programming languages (66)

1. Date: Mon, 14 Mar 88 23:18 CDT
   From: Wayne Tosh / English--SCSU / St Cloud, MN 56301
   Subject: alleged SNOBOL limitations (18 lines)
2. Date: Tue, 15 Mar 88 11:54:20 CST
   From: Eric Johnson
   Subject: SNOBOL, SPITBOL, and Icon

(1)-----------------------------------------------------------
Date: Mon, 14 Mar 88 23:18 CDT
From: Wayne Tosh / English--SCSU / St Cloud, MN 56301
Subject: alleged SNOBOL limitations (18 lines)

Rahtz alleges that SNOBOL4 is unsuitable for processing texts longer than 500 lines. Wherever did you come up with that limitation, Sebastian? I've been "putzing around" (as they say here in Minnesota) with a PC disk file of Marlowe's Dr. Faustus. Without having counted accurately, I would estimate it at about 2400 lines. I've been playing with about two-thirds of that (the two-thirds that would fit conveniently within PC-WRITE's 60K file limitation), using SNOBOL4+ on a 4.whatever MHz machine. For those unfamiliar with SNOBOL4+, Mark Emmer says his implementation is about 10 times slower than mainframe versions of SNOBOL4. And, for my part, SNOBOL4+ seems to (and I stress the subjective nature of my measure) perform fast enough when I'm cranking out word frequencies, displaying pronouns in context, etc.

WAYNE@MSUS1

(2)---------------------------------------------------------------------
Date: Tue, 15 Mar 88 11:54:20 CST
From: "Eric Johnson Liberal Arts DSC Madison, SD 57042"
Subject: SNOBOL, SPITBOL, and Icon

My original note about SNOBOL was meant to be read in the context of the current discussion of programs for various kinds of non-numeric computing: concordance and index generation programs, etc.
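[Editor's comment: for readers who have not seen such programs, the word-frequency counting Tosh mentions above really is only a few lines in any string-processing language. Here is a sketch in Python rather than SNOBOL4+, with two lines of Marlowe standing in for the full Faustus file; it is an illustration, not either contributor's actual program.]

```python
import re
from collections import Counter

def word_frequencies(text):
    """Count word forms, ignoring case and punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Two lines of Marlowe standing in for the full Dr. Faustus file.
sample = ("Was this the face that launch'd a thousand ships, "
          "And burnt the topless towers of Ilium?")
freqs = word_frequencies(sample)
for word, count in freqs.most_common(3):
    print(count, word)
```

The same tally driving a keyword-in-context display is essentially what the concordance programs under discussion package behind their proprietary commands.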
I am surprised that humanists are routinely willing to learn a great number of proprietary commands in order to use such programs, but rarely learn to write their own programs in a language like SNOBOL or Icon, which would not be much more difficult and would give them far, far more power and flexibility. I prefer SNOBOL to Icon because its syntax is closer to the way my mind works (and I'll bet that would be true for many humanists). Both SNOBOL and Icon code can be written with great economy: it may require 100 or more lines of COBOL or PL/I to do what ten lines of SNOBOL or twelve of Icon can. As an interpreted language, SNOBOL4 is uncommonly powerful and flexible (for example, new variables can be read in at run time), but it is slow. I am often willing to excuse slow execution in order to use an economical, powerful language. But, of course, there is also a speedy implementation of SNOBOL called SPITBOL. SPITBOL code is compiled and runs about seven times faster.

*****END*****

=========================================================================
Date: Tue, 15 Mar 88 19:52:57 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Digest: multilingual wordprocessing (56)

1. Date: Tue, 15 Mar 88 14:30:04 EST
   From: dow@husc6.BITNET (Dominik Wujastyk)
   Subject: Re: Arabic word-processing
2. Date: Tue, 15 Mar 88 13:42 EDT
   From:
   Subject: Arabic/Hebrew/Greek/Cyrillic/English Word Processors

(1)-----------------------------------------------------------
Date: Tue, 15 Mar 88 14:30:04 EST
From: dow@husc6.BITNET (Dominik Wujastyk)
Subject: Re: Arabic word-processing

There are now facilities (fonts, macros, the whole kaboodle) for using TeX to typeset both Arabic and Hebrew. These were recently announced in TeXhax by Goldberg, and are available from a listserver in Israel. Send the mail message GET IVRITEX PACKAGE to LISTSERV@TAUNIVM to get the whole package.
I can post more details and information here if anyone is interested.

Dominik
bitnet: user DOW on the bitnet node HARVUNXW
arpanet: dow@wjh12.harvard.edu
csnet: dow@wjh12.harvard.edu
uucp: ...!ihnp4!wjh12!dow

(2)---------------------------------------------------------------------
Date: Tue, 15 Mar 88 13:42 EDT
From:
Subject: Arabic/Hebrew/Greek/Cyrillic/English Word Processors

One word processor that allows you to mix English, Greek (with full diacritics), Hebrew, Arabic and Cyrillic on the same page, which includes a font generator (for screen and printers), and which supports 9-pin, 24-pin and LaserJet printers is:

Multilingual Scholar
Gamma Productions
710 Wilshire Blvd, Suite 609
Santa Monica, California 90401
tel 213-394-8622

Hardware requirements are modest: IBM PC/XT/compatible with two floppy drives (hard disk preferred), 640K RAM, Hercules Monographics OR Colour Graphics Adaptor (EGA real soon now!). Price: $350.00 (US). The latest version claims to support bottom-of-the-page footnotes; I haven't tried this version yet. If memory serves me correctly, I think it creates ASCII-readable files; you can import standard ASCII files and then add Arabic, Hebrew, etc. It is very easy to use, though printing on 9-pin and 24-pin printers is slow because text is printed in graphics mode.

Sam Cioran (McMaster University, Hamilton, ONTARIO, CIORAN@MCMASTER)

=========================================================================
Date: Tue, 15 Mar 88 20:15:24 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: 8 vs 5.25 Bernoulli box? (24)

-------------------------
Date: Mon, 14 Mar 88 18:51:43 MST
From: Mark Olsen

I am involved in a project that will require mailing large amounts of data on Bernoulli Box or other cartridge mass storage devices.
I have an 8 inch Bernoulli Box which works very well, but as we have to purchase equipment for the project in the near future, I am wondering whether switching to the new 5.25 inch (internal) drives might be a good idea. Has anyone heard anything, good or bad, concerning the new Bernoulli cartridge systems? Are there any other cartridge systems which can match or beat Bernoulli for reliability and cost? Mailing 5.25 inch cartridges is, in itself, a large advantage, since the 8 inch cartridges are a *pain* to package. All suggestions/comments are appreciated. Thanks in advance,

Mark -- if I screw this one up I'm dog meat -- Olsen

=========================================================================
Date: Tue, 15 Mar 88 20:46:06 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Test -- please ignore (20)

This is a test. Please ignore it.

=========================================================================
Date: Wed, 16 Mar 88 19:46:41 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Queries (115)

(1) Date: Mon, 14 Mar 88 15:46:40 PST (10 lines)
    From: tektronix!reed!johnh@uunet.UU.NET (John B. Haviland)
    Subject: ICON source (4 lines)
(2) Date: Tue 15 Mar 88 19:06:26-PST (50 lines)
    From: Pink Freud Psyche A
    Subject: Re: Extinct American Indian Languages of the Pacific Northwest
(3) Date: Wed, 16 Mar 88 07:36:57 -0800 (28 lines)
    From: mbb@jessica.Stanford.EDU
    Subject: Request for information: document comparison programs

(1)----------------------------------------------------------
Date: Mon, 14 Mar 88 15:46:40 PST
From: tektronix!reed!johnh@uunet.UU.NET (John B. Haviland)
Subject: ICON source (4 lines)

What is the best way to get hold of an Icon implementation (for Vax, Mac, MS-DOS)? I have assumed that one should simply write to the Griswolds, at the Icon Project at Univ. of Arizona? Is there a downloadable way? Could the source be posted to HUMANIST?
(2)------------------------------------------------------------------
Date: Tue 15 Mar 88 19:06:26-PST
From: Pink Freud Psyche A
Subject: Re: Extinct American Indian Languages of the Pacific Northwest

My name is Troy Anderson. I am currently doing work at Stanford University on the extinct language of my ancestors, the Lower Coquille Indians of the Central Oregon Coast. My method for reconstructing a dead language is as follows. I originally did a great deal of bibliographic work, trying to gather as much information on the language as I could, and I ended up with about 300 pages of texts and about 10 hours of tapes. The textual material is half published and half unpublished. The published material was gathered by Melville Jacobs in his 1932 University of Washington vol. 8, Coos Myth Texts and Narrative and Ethnologic Texts. The unpublished material is from J.P. Harrington's collection (I question its validity).

The computer has played a big part in my research. I sent the published texts to BYU to be optically scanned into DOS text files, which I have since moved into WordPerfect. I am now trying to reformat the texts so that they line up phrase by phrase. Let me back up: the texts I have from Mel are translated clause by clause, and I would like to make a dictionary and a list of morphemes. The way I am running through all these texts is by sending the formatted texts through a program called Word Cruncher, which used to be called the BYU concordance program (?). It will make a concordance, clause by clause, of the texts, picking out the words so that I can analyze the texts morpheme by morpheme. Once again I must back up... the texts are transcribed morpheme by morpheme, but the translation is free. I need to find out how the translation works literally.
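[Editor's comment: the clause-by-clause alignment Troy describes invites a purely mechanical treatment. The sketch below, in Python and with invented sample clauses, shows one way to interleave a transcription with its free translation once each has been reduced to one clause per line; it is an illustration under those assumptions, not a claim about his actual files.]

```python
def interleave(source_clauses, translation_clauses):
    """Pair a transcription with its translation, clause by clause."""
    if len(source_clauses) != len(translation_clauses):
        raise ValueError("the two texts are not clause-aligned")
    lines = []
    for src, trans in zip(source_clauses, translation_clauses):
        lines.append(src.rstrip())
        lines.append("    " + trans.rstrip())  # indent the free translation
    return lines

# Invented sample clauses standing in for the Miluk and English files.
miluk = ["first Miluk clause", "second Miluk clause"]
english = ["first English clause", "second English clause"]
print("\n".join(interleave(miluk, english)))
```

The key point is to pair by clause number rather than by page, so the differing page lengths of the two files never matter.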
From there all my problems should be solved; it will be just a matter of picking up a totally foreign language and then trying to make a grammar book out of it. I am currently at the stage of formatting the texts for Word Cruncher. My technical expertise prevents me from writing a program to merge the English and the Miluk files (two separate files), which are lined up perfectly by number of clauses but not by number of pages (the English ran longer in its translation than the transcription of the Miluk). So I am trying to take them apart and make them exactly the same length, but I am not having much luck. If you know of an easy way to fix this problem, please let me know.

Troy

(3)------------------------------------------------------------------
Date: Wed, 16 Mar 88 07:36:57 -0800
From: mbb@jessica.Stanford.EDU
Subject: Request for information: document comparison programs

I'd be quite grateful for any suggestions regarding good document comparison programs running under MS-DOS or PC-DOS. I'm most interested in the ability to enter corrections from the keyboard. For example, if file 1 and file 2 differ at some point and neither one is correct, can you enter the correction from the keyboard and continue without leaving the program? Please send responses to me, and I'll summarize them for HUMANIST. Thanks very much.

Malcolm Brown
Stanford University
BITNET: gx.mbb@stanford
ARPA: mbb@jessica.stanford.edu

=========================================================================
Date: Wed, 16 Mar 88 19:51:54 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Notices (53)

(1) Date: 16 March 1988 (7 lines)
    Subject: Call for CALL papers
(2) Date: 16 March 1988 (5 lines)
    Subject: More on Honorifics
(3) Date: 16 March 1988 (9 lines)
    Subject: New VM/CMS exec for HUMANIST mail

Date: 16 March 1988
Subject: Call for CALL papers

INSTRUCTIONAL SCIENCE, An International Journal
CALL FOR PAPERS.
SPECIAL ISSUE: Computer Assisted Language Learning
Guest Editors: Masoud Yazdani and Keith Cameron
[for more information see the file CALL PAPERS on the file-server]

Date: 16 March 1988
Subject: More on Honorifics

A tentative schedule for the conference on Honorifics has been posted to the file-server.

Date: 16 March 1988
Subject: New VM/CMS exec for HUMANIST mail

"HORDER EXEC" has just been posted to the HUMANIST file-server. It is a utility for VM/CMS users which will load all HUMANIST MAIL files from your virtual reader into "HUMANIST NOTEBOOK", with separator lines between them.

Steven J. DeRose
Brown University and the Summer Institute of Linguistics

=========================================================================
Date: Wed, 16 Mar 88 19:54:59 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Bernoulli boxes (38)

-------------------------
Date: Wed, 16 Mar 88 03:46:31 EST
From: "James H. Coombs"

I haven't heard of any problems with the new BBoxes. I'm not sure how many of them are out there, but Iomega has been pretty reliable. You might check the Iomega bulletin board to see if there have been any recent complaints; the number should be in your doc---I don't have it here. I know that there have been requests for assurances that the 8 inch will be supported in the future (and assurances in response).

--Jim

Dr. James H.
Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet

=========================================================================
Date: Wed, 16 Mar 88 21:00:11 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Word processor conversion (25)

-------------------------
Date: Sat, 12 Mar 88 16:50:34 EST
From: dow@husc6.BITNET (Dominik Wujastyk)

There is a shareware program called XWORD, written by Ronald Gans (350 West 55th Street #2E, New York, NY 10019; phone (212) 957-8361), which converts among ASCII, WordStar 3.3 or 4.0, XyWrite III or II, Nota Bene, MultiMate (or MM Advantage), WordStar 2000 (release 2), WordPerfect (4.1 or 4.2), and dBase III (comma-delimited). I have used it for XyWrite and ASCII with success. Registration is $20, and gives you a slightly better, newer version and the promise of notification and a cheaper deal on future releases. MS Word and DCA format are in the pipeline, I gather.

Dominik Wujastyk
bitnet: user DOW on the bitnet node HARVUNXW
arpanet: dow@wjh12.harvard.edu
csnet: dow@wjh12.harvard.edu
uucp: ...!ihnp4!wjh12!dow

=========================================================================
Date: Wed, 16 Mar 88 21:24:35 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Collation software; Interline processing (44)

------------------------------------------------------------------------
Date: Wed, 16 Mar 88 18:39:56 MST
From: Mark Olsen

I DISCARDED instead of RECEIVEd, so I can't send the following to the appropriate parties. On text comparison, have you looked at URICA? This is an interactive system by Robert Oakman and ?? which allows the user to create an apparatus for critical editions, etc. He is at the University of South Carolina, and I purchased the program for $35.00??. I tested it and found it a good-looking system. I have not, however, used it for a critical edition yet.
[Editor's comment: following is some information on URICA taken from an entry in our forthcoming Humanities Computing Yearbook (Oxford U.P.).
-------------------------------------------------------------
URICA has been developed and is distributed by Robert L. Cannon and Robert L. Oakman, Dept. of Computer Science, University of South Carolina, Columbia, SC 29208 U.S.A.; (803) 777-2840; e-mail: ucbvax!ihnp4!akgua!usceast!cannon [uucp]; cannon@usceast.uucp [Bitnet]. It costs $50, payable to the Carolina Research and Development Foundation. It requires an IBM PC/XT/AT/PS2 with one floppy disk drive, 128K RAM, PC-DOS 2.1 or later; not copy protected. 17 pp. users manual; updates for a nominal fee.]
------------------------------------------------------------

On the Indian-English problem, one might try IT, the Interlinear Text processing program published by the Summer Institute of Linguistics. I have the program and have fired it up once or twice, but have no comments on it save that it was quite reasonably priced, $50.00?? I can give more details on request.

Mark

=========================================================================
Date: Wed, 16 Mar 88 21:38:18 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: On digesting the digests (166)

(1) Date: Mon, 14 Mar 88 21:18 PST (9 lines)
    From: Sterling Bjorndahl - Claremont Grad.
    School
    Subject: RE: Digest: Miscellaneous notes (233)
(2) Date: 15-MAR-1988 12:58:50 GMT (24 lines)
    From: A_BODDINGTON@VAX.ACS.OPEN.AC.UK
    Subject: Digests
(3) Date: Tue, 15 Mar 88 07:14:50 EST (13 lines)
    From: Joanne
    Subject: Re: Digest: Miscellaneous notes (233)
(4) Date: 15 Mar 88 09:17:26 gmt (15 lines)
    From: R.J.Hare@EDINBURGH.AC.UK
    Subject: Digest
(5) Date: Mon, 14 Mar 88 22:53 CDT (8 lines)
    From: Wayne Tosh
    Subject: RE: Digest: Miscellaneous notes (233)
(6) Date: Tue, 15 Mar 88 19:27:02 GMT (29 lines)
    From: AYI004@IBM.SOUTHAMPTON.AC.UK
(7) From: Dr Abigail Ann Young (28 lines)
    Subject: Humanistic digestion

(1)----------------------------------------------------------
Date: Mon, 14 Mar 88 21:18 PST
From: Sterling Bjorndahl - Claremont Grad. School
Subject: RE: Digest: Miscellaneous notes (233)

You wanted feedback on this subject. I vote against it. My editor and mailer software aren't that well integrated. I like your digesting by topic, but I'd prefer the "solo" items to continue coming through solo.

Sterling

(2)------------------------------------------------------------------
Date: 15-MAR-1988 12:58:50 GMT
From: A_BODDINGTON@VAX.ACS.OPEN.AC.UK
Subject: Digests

I vote for one single digest with a table of contents. Like Nancy Ide I am using a VAX, but I find long messages (in or out) no problem; this may suggest that we are using different mailers (mine is VMS Mail). Incoming messages can simply be scanned with the editor using READ/EDIT. Outgoing messages are prepared as files or with SEND/EDIT (as this one was). A day's HUMANIST mailing can then simply be printed from mail with the PRINT command; finally, it can be MOVEd to a folder or DELETEd with a single command. On a good day I get only 100 messages; it is tiring to manipulate each individually, and I would be happier with half as many. I don't object to the bulk, just the tedium of manipulation. Where I do sympathise with Nancy is that we here are not charged for filespace etc.
Andy Boddington
Open University UK

(3)--------------------------------------------------------------
Date: Tue, 15 Mar 88 07:14:50 EST
From: Joanne
Subject: Re: Digest: Miscellaneous notes (233)

This is a great idea! Just to add to the mayhem, let me say that I rarely get the opportunity to reply to HUMANIST, although I am an avid reader. As the director of a computer center I get so much mail that I have to have someone intercept HUMANIST, sort it out for me, and leave me a hard copy. Having a digest would greatly enhance MY ability to handle HUMANIST myself, quickly, efficiently and effectively, and perhaps I may even be able to contribute something. Thanks as always for your wonderful assistance on this undertaking.

(4)------------------------------------------------------------------
Date: 15 Mar 88 09:17:26 gmt
From: R.J.Hare@EDINBURGH.AC.UK
Subject: Digest

I'm afraid that I have to say that for me the 'digest' is more of a hindrance than a help. I should say, though, that this is an entirely subjective viewpoint which has its roots in the fact that our MAIL system is here to provide the systems programmers with something to 'maintain' rather than to provide the users with a service - i.e., it is too difficult to get an editor working on the current message. I have to store it in a file, then exit from the mail system to use the editor on the file. A real pain. I prefer the messages to be left as individual entities; then I can scan the subject line and read or not read, as I choose.

Roger.

(5)------------------------------------------------------------------
Date: Mon, 14 Mar 88 22:53 CDT
From: Wayne Tosh / English--SCSU / St Cloud, MN 56301
Subject: RE: Digest: Miscellaneous notes (233)

Terrific idea! This answers very nicely to the concerns I had expressed to you earlier about a bulletin-board arrangement that would allow one to select the topics one was interested in. Huzzah!
(6)---------------------------------------------------------------------
Date: Tue, 15 Mar 88 19:27:02 GMT
From: AYI004@IBM.SOUTHAMPTON.AC.UK
Subject: Indigestion

Digesting information may not in itself be bad for the network - but it does create an opportunity for those presiding over the communication flow to fashion it in their own interests. I apologise in advance for appearing to make a naive point, but it is important to realise that such seemingly 'neutral' tasks as categorisation are highly subjective, and hence potentially damaging to the hopeful prospect that this network will achieve more than self-gratification by like-minded and highly privileged people. Some recent comments, referring to 'shallow' conversation and 'inane chit-chat', are quite threatening in this regard, particularly as they are accompanied by a strategy for controlling the structure and content of information. The pedestrian aggressiveness in such comments is distressing - not only does the pronouncement reflect upon all contributors, all of whom, I believe, would not waste their time if they did not believe in the value of this forum, but it also seems far from the values inherent in the literature to which many of us have devoted our lives - in which metaphor represents a way of freely expressing one's view of the human condition. We may use tools, such as computers, to tear apart the imaginative worlds created by our language(s), but we are not trying to destroy them. Mutability, chance, the protean nature of sensible things - unconstrained discourse - these are what keep us from just building monuments to ourselves.

Brian Molyneaux

(7)--------------------------------------------------------------
From: Dr Abigail Ann Young
Subject: Humanistic digestion

Today I really began to experience HUMANIST withdrawal. My constant supply of regular daily doses has been cut off. How can I survive with only one fix a day?...
What I dislike about HUMANIST sometimes are the long, esoteric, needlessly (at least in my opinion) technical discussions of front and back ends, mark-up languages, textbases, CD-ROM, etc., which some members indulge in at intervals. Don't misunderstand: I'm sure there are interesting and informative ways to discuss these issues; I just didn't notice much of that here, mostly the technological hierarchs talking.... But then to have some people talking in what seems to me a rather self-righteous way about the waste of their time, the difficulty of sorting wheat from chaff, and how they can't be bothered with undue chattiness (and, worse still, offering everyone good advice about how to do better in future!) seems a little much. I mean, most of us put up rather quietly with their hobby-horses in the fall and winter!!!! I don't like HUMANIST as much as I did before. I guess the days of happy anarchy had to pass. Probably it was the postcard request that did it in!

Abby

=========================================================================
Date: Wed, 16 Mar 88 21:50:05 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Misc. comments (24)

My apologies for the delay in sending out the digest on digestion. Some strange character in one of the messages was causing the whole digest to be rejected by ListServ. I downloaded and then uploaded the file, and that seems to have cured the ill. I want to thank Michael Sperberg-McQueen in public for writing me a macro to do the digesting: il miglior fabbro! Please bear with me for the next few days while I experiment with various ways of sorting the mail. I, too, regret the passing of HUMANIST's ebullient youth -- or am I just telling myself another myth of a Golden Age? -- but I'm discovering in other spheres that maturity has its deeper joys. Anyhow, several of us were facing imminent collapse, and something had to be done. Let us make allowances for each other, please!
Willard McCarty
mccarty@utorepas

=========================================================================
Date: Thu, 17 Mar 88 20:23:08 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Meta-talk (97)

(1) Date: Thu, 17 Mar 88 14:49:04 GMT (15 lines)
    From: CMI011@IBM.SOUTHAMPTON.AC.UK
    Subject: chiffchaff
(2) Date: Thu, 17 Mar 88 11:25 PST (64 lines)
    From: Sterling Bjorndahl - Claremont Grad. School
    Subject: On the need for aggressive mailer software.

(1) --------------------------------------------------------------------
Date: Thu, 17 Mar 88 14:49:04 GMT
From: CMI011@IBM.SOUTHAMPTON.AC.UK
Subject: chiffchaff

On Abigail Young's 'esoteric' complaint: I think she's going a bit over the top. If HUMANISTs can't cope with markup languages and database concepts (if they are interested), it's a pretty poor lookout! I would contend (and I KNOW that some of you out there agree with me) that the 'philosophical' issue of database design is crucial, and if I can't quarrel with Jim Coombs on whether or not a commercial database system can adequately handle text, then what CAN we discuss on HUMANIST?

Sebastian Rahtz

(2) --------------------------------------------------------------------
Date: Thu, 17 Mar 88 11:25 PST
From: Sterling Bjorndahl - Claremont Grad. School
Subject: On the need for aggressive mailer software.

I don't know about you, dear reader, but I find that my e-mail correspondents ignore my messages with unusual frequency. I find that e-mail is ignored much more than paper mail, and it is certainly ignored more than telephone calls. Perhaps 'ignored' is not the best word. Several confessions on HUMANIST have made the matter clear: e-mail messages can be quickly and easily filed somewhere to be dealt with "later" - all with good intentions.
The problem is that "later" never arrives, or it arrives only after the original sender, wondering if perhaps it is the network that is at fault, has sent three or four follow-up messages. Sending these follow-up messages is 'work.' Machines should work; people should think. Ergo: I think that machines should send the follow-up messages for us.

A new field must be added to the mail header. I propose that this field be named "Reply-by-or-else!". Every day the mailer software scans everyone's received mail. Messages which have been read at least once[1] will be checked for a "Reply-by-or-else!" field. If the date in this field has been exceeded, a message will be sent by the system to the recipient reminding him/her that the message has indeed not been replied to yet. This message will be re-sent every day, with consecutively harsher and more threatening expressions being used, perhaps including curses in a variety of exotic languages. If a reply has not been made within a week, the mailer will randomly delete one file from the recipient's disk allocation. The threats and random deletions continue for another week. At that point, the system manager is notified that one of his/her users is being impolite. The system manager can reply to the original sender, apologizing for the uncouth nature of the users at site X, especially including user Z, who has not replied to "your message of YYYY-MM-DD." If the system manager chooses not to take on this responsibility, a message is sent to every user on the system: "Attention all users: User Z is impolite and has not responded to a message sent on YYYY-MM-DD by colleague A@B. For this you all will suffer. Beginning tonight at midnight, one file will be randomly deleted from each of your disk allocations. This will continue until user Z is punished." Clearly this will result in tremendous pressure among the users to reply to, or at least to acknowledge, any incoming mail which includes the "Reply-by-or-else!" field.
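[Editor's comment: taken at face value, Bjorndahl's escalation schedule reduces to a small decision table. A Python sketch follows, purely for amusement; the function name and the week-long thresholds are only one reading of his (tongue-in-cheek) proposal.]

```python
from datetime import date

def nag_action(reply_by, today):
    """What the imagined mailer does about one unanswered message."""
    overdue = (today - reply_by).days
    if overdue <= 0:
        return "wait"
    if overdue <= 7:
        return "send an increasingly threatening reminder"
    if overdue <= 14:
        return "delete one random file from the recipient's disk"
    return "notify the system manager"

print(nag_action(date(1988, 3, 1), date(1988, 3, 5)))
```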
Can anyone think of a better plan?

Sterling Bjorndahl
BJORNDAS@CLARGRAD.BITNET
----------------------------------------------------------
[1] This is to allow for the situation when you go to another continent for two months and never read your e-mail. No fair that you should be punished for not replying to a letter that you never read. [Note on my English idiom in this last sentence, for those who may not be absolutely fluent in this language: 'read' is the simple past tense (I think better (albeit more serious) usage would be the present perfect tense). 'No fair' is short for "It is not fair." I know how tough it can be to deal with colloquialisms, which seldom appear in scholarly publications.]

=========================================================================
Date: Thu, 17 Mar 88 20:26:18 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Announcement of a book (30)

-------------------------
Date: Thu, 17 Mar 88 14:51 EST
From: RSTHC@CUNYVM (Bob Tannenbaum)
Subject: New Textbook

At the risk of being accused of using BITNET for a commercial purpose, I would like to announce that Volume I of my new text is now available from Computer Science Press. The book is: Computing in the Humanities and Social Sciences, Volume I: Fundamentals. I am not aware that the book has been reviewed or widely advertised yet, so you may not have heard of it. I am just completing the final revisions to Volume II: Applications, and should be shipping the diskettes to the typesetter before the end of March. I am hoping that Volume II will be available in plenty of time for use in the fall semester, 1988. If you have any questions, please do not hesitate to send me e-mail or to telephone (212) 772-4289. If you have seen the book and have any comments or suggestions, I would very much appreciate hearing them. Thanks.
Bob Tannenbaum (RSTHC@CUNYVM)

=========================================================================
Date: Thu, 17 Mar 88 20:29:12 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Snobol and Icon (69)

(1) Date: 17 Mar 88 10:02:34 gmt (29 lines)
    From: R.J.Hare@EDINBURGH.AC.UK
    Subject: Icon Source (23 lines)
(2) Date: Thu, 17 Mar 88 14:49:04 GMT (23 lines)
    From: CMI011@IBM.SOUTHAMPTON.AC.UK
    Subject: a) Snobol b) Icon sources

(1) --------------------------------------------------------------------
Date: 17 Mar 88 10:02:34 gmt
From: R.J.Hare@EDINBURGH.AC.UK
Subject: Icon Source (23 lines)

I'm not sure I'm smart enough to work out a return address for John Haviland (to be honest, I'm sure I'm not!), so here's my reply to his query about Icon sources - I'm sure you will get several in a similar vein. Icon is public domain: there is no registration fee, etc., and Griswold et al. positively encourage copying of the code between users. That is how I interpret all the literature I have, anyway. So, if you know someone 'locally' who has the system running, you could beg, borrow or steal a copy from them (this is how I got my first implementation, from Sebastian Rahtz). Here in the UK, I think I am correct in saying that the Microcomputer Software Distribution Service at Lancaster University has the executable files for at least one implementation of the language in their down-loadable library. This facility is not (from memory) available from the Icon Project, and they give some pretty convincing reasons for this being the case in one of the Icon newsletters they publish regularly. The Icon Project also makes the source code of Icon available for those enterprising enough to want to try to implement it on a new system. I'm sure a message to them would yield a price list, etc.

Roger Hare.
(2) -------------------------------------------------------------------- Date: Thu, 17 Mar 88 14:49:04 GMT From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: a) Snobol b) Icon sources Two small points: a) I didn't say SNOBOL couldn't handle 500 lines of text, but that programs over 500 lines very easily get unmaintainable. I once wrote a SPITBOL program called MONSTER, which ended up about 2000 lines, and I can honestly say that there were parts that worked entirely by the High Magic (i.e., luck). I'm not saying you CANNOT write good long programs, but that it would be much easier in a more modular language (like Icon). b) re: the request for Icon source; I think the questioner would blanch at how much material would be needed to download the source of Icon. I'd recommend people definitely to write to the Griswolds and send a small amount of cash. The documentation, newsletter, up-to-date material, program library, etc. are well worth a few paltry dollars and a few weeks' wait. Sebastian Rahtz ========================================================================= Date: Thu, 17 Mar 88 20:31:34 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Multilingual wordprocessing (45) ------------------------- Date: Thu, 17 Mar 88 12:50:22 IST From: David Sitman Right-to-left word processing (Hebrew, Arabic, Persian, etc.) is, of course, a hot topic here in Israel. I don't write Arabic, so the information I have on Arabic word-processing specifically is rather limited. As Dominik Wujastyk mentioned, there is a package for TeX users available from LISTSERV@TAUNIVM (as Dominik wrote, send the command: GET IVRITEX PACKAGE). While the procedures can be used for Arabic as well as Hebrew, only Hebrew fonts are available at this time. I was given a copy of version 2 of Multilingual Scholar (mentioned by Sam Cioran) to try out a few months ago. One could easily move from one language to another, even on the same line.
As I recall, I found it rather lacking in "advanced" word-processing options: indexing, footnoting, etc., and the fonts weren't very nice. The thing that bothered me most was the protection scheme: you have to plug something into the parallel port in order to use the software. Anyway, we decided not to buy it. There are a number of Hebrew-English word processors which are in widespread use here, but in general, they can't hold a candle to the monolingual mainstays. One which is available in North America is EinsteinWriter, which I am told was among the 50 word processors recently reviewed in PC Magazine. Einstein is my favorite among the simple bilingual word processors. It's relatively fast, easy to learn and bug-free. Microsoft is in the process of developing Arabic (and Hebrew) versions of its main products: DOS, WINDOWS, WORD. I don't know which of these are already on the market, but anyone who is interested can send me a note and I'll dig out the electronic mail address of the fellow at Microsoft who is in charge of this project. And then, there is Nota Bene. Version 3 has just come out. I'll make sure that a report on this new version gets to the list soon. 
David ========================================================================= Date: Fri, 18 Mar 88 19:26:16 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Wordprocessing, mostly multilingual (102) (1) Date: Fri, 18 Mar 88 08:36 EDT (18 lines) From: Subject: Multilingual Word Processing (2) Date: Fri, 18 Mar 88 08:14:58 EST (17 lines) From: Dr Abigail Ann Young Subject: Converting Word-processor files (3) Date: Fri, 18 Mar 88 07:55:12 EST (23 lines) From: lang@PRC.Unisys.COM Subject: Multilingual wordprocessing (45) (4) Date: Fri, 18 Mar 88 11:10 MST (17 lines) From: "David Owen, Philosophy, University of Arizona" Subject: Multi-Lingual Text-Processing (1) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 08:36 EDT From: Subject: Multilingual Word Processing David Sitman is correct in saying that Multilingual Scholar (Gamma Productions, California) is not a powerful word processor. His experience with version 2.0 is correct. However, there is a new version (3.x) which is much improved and does support basic footnoting (but not indexing). Few multilingual word processors (IBM PC based, at least) are going to give you the flexibility to mix Hebrew, Arabic, Greek, Cyrillic and Roman on the same page as well as provide a powerful font generator for rolling your own. Yes, there is a nasty "dongle", or piece of hardware that you have to stick in the parallel port to get print out. It's a clumsy attempt at discouraging software piracy and one I wish Gamma would abandon. However, they will allow you to buy a second "dongle" for use on a second computer, presuming that you might have one at home and one at the office. 
Sam Cioran (CIORAN@MCMASTER) (2) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 08:14:58 EST From: Dr Abigail Ann Young Subject: Converting Word-processor files It is true, as several people have mentioned, that WordPerfect supplies its own CONVERT programme to 'translate' files from its own format to others (including WordStar) and vice versa. But one should be aware that there is an 'undocumented feature' in the conversion of a WP file containing footnotes to WS: the notes, which WP embeds in the file surrounded by an escape sequence which causes them to be invisible when you scroll through the text but printable, are stripped out by the CONVERT programme and simply discarded, not written to another file. If, as I had to, you must have a WS version of a scholarly text originally written in WP, this can create some extra work! (3) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 07:55:12 EST From: lang@PRC.Unisys.COM Subject: Multilingual wordprocessing (45) Re: word processing for Hebrew, etc. A couple of years ago, I worked for Jack Abercrombie at the University of Pennsylvania's Center for Computer Analysis of Texts, and at that time, Jack and his group were developing an IBM-PC-based word processor for all sorts of ``exotic'' languages, including Greek, Hebrew, Arabic, and Devanagari. Since it's been a while since I was associated with that project, I can't say anything about its current status, but perhaps Bob Kraft (if he is one of the hardy souls who is still reading this newsgroup) could provide more details...
--Francois Francois-Michel Lang Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7469 Dept of Comp & Info Science, U of PA lang@cis.upenn.edu (215) 898-9511 (4) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 11:10 MST From: "David Owen, Philosophy, University of Arizona" Subject: Multi-Lingual Text-Processing A couple of HUMANISTS have mentioned a package called TurboFonts to me that runs in conjunction with standard word processors such as Word Perfect. It needs an EGA or Hercules+ card (which leads me to believe that it runs in text rather than graphics mode) and supports a number of printers, including the HP LaserJet+ or Series II. It is designed to display and print, within e.g. Word Perfect, non-Latin alphabets such as Greek, as well as many scientific symbols. Have any HUMANISTS used this package? With what hardware? If I receive sufficient replies, I will post the results in HUMANIST (or on its server). David Owen OWEN@ARIZRVAX.BITNET OWEN@RVAX.CCIT.ARIZONA.EDU ========================================================================= Date: Fri, 18 Mar 88 19:34:51 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: At play (67) (1) Date: Thu, 17 Mar 88 20:10:37 MST (23 lines) From: Mark Olsen Subject: Sterling -- break his kneecaps -- Bjorndahl (2) Date: Fri, 18 Mar 88 09:35:31 DNT (26 lines) From: Hans Joergen Marker Subject: The missing daily joke (1) -------------------------------------------------------------------- Date: Thu, 17 Mar 88 20:10:37 MST (23 lines) From: Mark Olsen Subject: Sterling -- break his kneecaps -- Bjorndahl This can be filed under IDLE CHATTER -- and please, no return mail. Now here is a man I never want to borrow 50 cents from. And -- don't doubt it -- people RETURN Sterling's phone calls. Or else. Seriously, I can't think of one thing I would detest more than an intrusion on my computer.
I have a modem link to the mainframe from home and office and only one phone at each location. I often, particularly in the morning, logon just to tie up the phone. At last a little peace and quiet. If I am ignoring something or have forgotten it -- like not returning phone calls -- I want someplace of refuge where some damned machine can't track me down. Now Sterling wants to equip someone else with an enforced phone answering machine that I have to respond to. If it's important, they'll phone back. I don't have call forwarding, an answering machine or an auto-dialer, and I certainly don't want that on MY computer. Please don't take my electronic refuge away. Mark (2) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 09:35:31 DNT From: Hans Joergen Marker Subject: The missing daily joke From: Hans Joergen Marker Subject: Too much discipline I would like to second the opinion of Dr. Abigail Ann Young where she laments the loss of playfulness in the Humanist discussion group. Whatever became of the one-line messages from Sebastian Rahtz that used to brighten up our lives? Is he exercising self-discipline under the impression that it all has to be serious business from now on? I admit that the number of messages was becoming overwhelming at one stage; on the other hand, I feel that we are missing something now. If the Humanist discussion group has to be devoted to businesslike messages, then it will probably soon be restricted to members from the main stream of the discussion. The term 'survival of the fittest' was brought up in the discussion before the changes were made. I think that the term is still relevant, only the environment has changed and so has the type of 'fitness' needed for survival. Hans Joergen Marker.
========================================================================= Date: Fri, 18 Mar 88 19:41:50 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Database/textbase software (175) (1) Date: Fri, 18 Mar 88 08:52:39 -0800 (24 lines) From: mbb@jessica.Stanford.EDU Subject: The Great Text Search Engine: why hasn't it been done? (2) Date: Fri, 18 Mar 88 11:03 EST (8 lines) From: Roberta Russell Subject: Query: VAX/VMS database software (3) Date: Fri, 18 Mar 88 10:13:29 -0800 (127 lines) From: mbb@jessica.Stanford.EDU Subject: Specifications for a useful text search engine (1) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 08:52:39 -0800 From: mbb@jessica.Stanford.EDU Subject: The Great Text Search Engine: why hasn't it been done? I've been going over the notes that appeared recently on HUMANIST regarding the text search engine. As I read through Jim Coombs' and Robin Cover's notes, I wondered if what stands in the way of the development of such a system is logistical rather than technological in nature. That is: if a programmer and a humanist -- both competent! -- worked together, is there any technological barrier to their developing a search engine? It seems to me that the algorithms and hardware already exist. If so, then what stands in the way of the development of the search engine includes items such as lack of funding, lack of recognition, etc. If this is indeed the case, then it seems rather silly, doesn't it? Malcolm Brown Stanford (2) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 11:03 EST From: Roberta Russell Subject: Query: VAX/VMS database software Can someone recommend a text-oriented database program for VAX/VMS machines? One that can generate bibliographic-style reports? We are presently running a dinosaur version of REF-11 (2.3) that eats up an inordinate amount of disk space. 
(Please respond to prussell@oberlin) (3) -------------------------------------------------------------------- Date: Fri, 18 Mar 88 10:13:29 -0800 From: mbb@jessica.Stanford.EDU Subject: Specifications for a useful text search engine It is with some "fear and trembling" that I offer this for commentary on HUMANIST. I was intrigued by Robin Cover's note that kicked off a brief discussion of a text search engine that would do the kind of things that humanist scholars would need. I have been asked here to submit just such a specification, a kind of manifesto of the needs of the humanist scholar. Accordingly, I offer the following draft and invite suggestions, criticisms, additions --- yea, even flames. Please take note of the following: > if some of this sounds like I'm plagiarizing Robin Cover, it's only because I am. I thought his ideas were good ones. This document, whatever its eventual form, will not be for publication, but rather for circulation as a memo. > the brief section on the "concept of the text" is not by any means meant to be theoretically adequate. In this document, it merely attempts to point out aspects of texts that might not be apparent to programmers. > This is an initial draft, and I do not expect it to be exhaustive by any means. thanks Malcolm Brown, Stanford - - - - - - - - - - - - These notes attempt to specify the capabilities of a text retrieval or "search engine" that is required for serious research by humanist scholars. The specification of such a search engine comprises half of what would be the ideal academic text research tool. The second half of such a tool would be a robust hypertext system. Before listing the characteristics of the engine itself, it is important to review just what a text is. It is in light of such a concept of the text that the requirements for a search engine make sense. The concept of the text A text has empirical and non-empirical aspects.
The former includes all the letters, spaces, punctuation and other marks that comprise the text. But there are also a great many non-empirical aspects of a text, the denotations and connotations. A text is an ordered hierarchy of objects. Texts are invariably subdivided into units such as chapters, pages, verses, paragraphs, sections, etc. Moreover, it is possible to apply more than one hierarchical scheme to the same text. One example would be chronology ("the writings of K from 1797-1799" and "the writings of K from 1800-1802"); another example would be the parsing of the text into grammatical units (subject, verb, object, singular, plural, inflected, etc.) Texts can be classified according to types, such as fiction, correspondence, essays, and so forth. As with its hierarchies, it is quite possible that a text can be labeled in more than one way. Different type classifications generally result in different hierarchical structures. The concepts of hierarchy and type are examples of non-empirical aspects of the text. Additional examples would include semantic contexts and etymological and historical references. In order to represent them (and hence make them "visible" to something as empirically-minded as a computer) a system of annotations can be employed. The capabilities of the search engine The search engine should support the full range of Boolean operators. The search engine must be capable of searching large bodies of texts. The search engine should be able to produce concordances and be able to chart the distribution of words over the text. The search argument should permit nesting of Boolean search expressions. Search requests can be saved to disk and later recalled, either for repetition or for inclusion in more complex search requests. Full pattern matching should be supported (as with the UNIX grep). The engine must be capable of viewing and searching its text in terms of its ordered hierarchy.
In addition to the capacity to search for arbitrary strings, the scope of such searches should be delimited by hierarchical units. The delimiters of "words" -- the basic text unit -- should be user-definable. The database structure must be capable of supporting annotations or assigned attributes at the word level and, ideally, at any higher textual level appropriate to the given document. Word-level annotations would account for things such as assignment of lemmas, gloss, syntactic function, etc. Such annotations must be accessible to the search engine. (this from Robin Cover) Examples of search requests that should be supported by the engine: retrieve all occurrences of "x" retrieve all occurrences of "x" and "y" but not "z" retrieve all verses in which "x" but not "y" occur retrieve all verses written in 1797 in which "x" but not "y" occur Malcolm Brown, AIR/SyD Stanford University mbb@jessica.stanford.edu ========================================================================= Date: Sun, 20 Mar 88 12:24:50 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Biographies (41) ------------------------- Date: Fri, 18 Mar 88 19:41:39 CST From: D106GFS@UTARLVM1 As you are all aware, we have an excellent resource in the compiled biographies available from the file server. However, so far we have not established a standard for format or content. I have written a proposal for a standard, which uses simple descriptive markup tags for the various parts of the biography. Specifically, I have tags for name, address, phone, institution, e-mail address, biography text proper, and new paragraph. This seems innocuous yet adequate to me; possible additions might be hardware used, and a list of special-interest keywords (the latter of course being more work).
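[A record in the tagged form just proposed can be pulled apart with very little code. The sketch below is a modern Python rendering of the idea only: the angle-bracket tag syntax, the sample record, and the function name are illustrative assumptions, not the actual proposal.]

```python
import re

# A hypothetical biography record using simple descriptive markup tags
# for some of the fields listed above (name, institution, e-mail, text).
# The concrete syntax here is an assumption for illustration.
RECORD = """<name>Jane Scholar</name>
<institution>Example University</institution>
<email>JSCHOLAR@EXAMPLE.BITNET</email>
<bio>Jane works on computer-assisted concordances.</bio>"""

def parse_biography(record):
    """Extract each tagged field into a dictionary, ready for import into
    a HyperCard-style stack or for re-export in a tagged format."""
    fields = {}
    # The backreference \1 requires each closing tag to match its opener.
    for tag, value in re.findall(r"<(\w+)>(.*?)</\1>", record, re.DOTALL):
        fields[tag] = value.strip()
    return fields

print(parse_biography(RECORD)["name"])  # Jane Scholar
```

Because each field is explicitly delimited, a converter like this can sort, browse, or re-export records without guessing where an address ends and the biography text begins, which is the practical point of the proposal.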
I have also written (a) a program which extracts several of these fields from the biographies on file and converts the files into the proposed format; and (b) a HyperCard stack which can import the tagged files, building a stack with the usual sorting, browsing, and retrieval functions. I plan to add an "Export" feature, so that the data can be maintained in HyperCard, but written out in SGML at will. I am willing to carry out the entire conversion, though I don't *promise* that addresses embedded in the midst of regular biography text will be noticed and moved to the 'address' field (such information would not be deleted, merely left in the text as is), etc. My questions are (please reply directly to D106GFS at UTARLVM): 1) Who wants a copy of the proposal to read and comment? 2) Who wants a (beta) copy of the HC stack to try? 3) How many people might find either the stack, or at least the tagged biographies, useful? Thanks for your interest, Steven J. DeRose ========================================================================= Date: Sun, 20 Mar 88 12:43:18 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Icon (55) (1) Date: Sat, 19 Mar 88 20:46:12 PST (9 lines) From: tektronix!reed!johnh@uunet.UU.NET (John B. Haviland) Subject: ICON source (2) Date: Sun, 20 Mar 1988 12:36:00 EST (30 lines) From: MCCARTY@UTOREPAS (Willard McCarty) Subject: Information on Icon (1) -------------------------------------------------------------------- Date: Sat, 19 Mar 88 20:46:12 PST From: tektronix!reed!johnh@uunet.UU.NET (John B. Haviland) Subject: ICON source Source for various versions of ICON--in answer to my own earlier query-- can be had via ftp from arizona.edu. The cost for disks and documentation on version 7 for MSDOS I am told is $20. E-mail queries should be directed to icon-project@arizona.edu. 
(2) -------------------------------------------------------------------- Date: Sunday, 20 Mar 1988 12:36:00 EST From: MCCARTY@UTOREPAS Subject: Information on Icon [The following I've extracted from my file on Icon, assembled for an entry in The Humanities Computing Yearbook (forthcoming, O.U.P.).] Developed by Ralph Griswold, Icon Project, Dept. of Computer Science, Gould-Simpson Building, The University of Arizona, Tucson, AZ 85721; 602-621-6613; Network addresses: icon-project@arizona.EDU, {ihnp4, noao, allegra}!arizona!icon-project. Public domain, distributed by vendor; cost: $15-$25, depending on computer system and format; source code available at slightly higher prices; registered owners receive a newsletter; technical support provided via electronic mail and an electronic bulletin board; all written correspondence answered. Implementations for most UNIX-based systems, VAX/VMS, MS-DOS, Amiga, Atari ST, and Macintosh; host system requirements vary with the target computer (e.g., MS-DOS systems require at least 256k bytes of RAM). Documentation: A language overview, installation instructions, and a user's manual provided with the implementation; a book providing a complete description and reference manual available separately for $30 and a book that describes the implementation for $40. ========================================================================= Date: Sun, 20 Mar 88 12:46:02 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Interlinear translation programs (48) ------------------------- Date: Sat, 19 Mar 88 20:42:37 PST From: tektronix!reed!johnh@uunet.UU.NET (John B. Haviland) Re: interlinear translations for Miluk-English, and in general. As some of you will know, I also have written some programs, in a package called TRANSC(ript), which are designed for formatting and glossing transcript material, especially conversational transcripts where synchrony is shown by overlap. 
The programs (originally written in SIMULA) are something of a hodgepodge, "maintained" in a diffident and sloppy way for my own needs, mostly now written in C, currently available from me for CP/M and MSDOS (the only version I actively hack at). Module 1, SCAN, produces the formatted conversational transcript from a relatively simple input format (which was meant to be easy to type); module (2), GLOSS, attaches morpheme-by-morpheme glosses (properly aligned, word-by-word) and free translations to each line, updating (and using where possible) a running dictionary as it goes along; module (3), MERGE, gloms all the resulting files together into various sorts of more-or-less readable text. I have recently emancipated the GLOSS module from its previous dependence on SCAN, so it can be used for ordinary texts as well as for conversational material. I have also made MERGE more forgiving. I don't think these programs are as flexible as the SIL IT programs, but then again I think they are the only programs around that handle the overlap/synchrony problem for conversational transcripts. The running dictionary is a useful side benefit, made even more useful if you are handy with your editor. (If there are any Humanists out there who work on them, I have fairly sizeable working dictionaries, produced by these programs, for the two languages Tzotzil and Guugu Yimidhirr.) I would be glad to offer more details, although I am reluctant to get into the disk-copying business unless people have urgent needs for this kind of software. ========================================================================= Date: Sun, 20 Mar 88 12:50:44 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: British anti-classical education? (27) ------------------------- Date: Sat, 19 Mar 88 17:20 EST From: "Tom Benson 814-238-5277" In his review of I. F. Stone's THE TRIAL OF SOCRATES (NEW YORK REVIEW OF BOOKS, 31 March 1988, p. 18), M. F. 
Burnyeat writes: As I write this review, legislation is passing through the British Parliament that gives to Her Majesty's secretary of state for education power to control the research and teaching of each individual university teacher in the country. An imposed curriculum will determine the content of the history taught in the state schools and will effectively exclude from their timetable all study of the classical languages. Is this true? To what legislation does Professor Burnyeat refer? Tom Benson Penn State University t3b@psuvm (bitnet) ========================================================================= Date: Mon, 21 Mar 88 17:45:53 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Survey; announcement; query (119) (1) Date: Mon, 21 Mar 88 22:02:28 GMT (33 lines) From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Survey of mailers (2) Date: Mon, 21 Mar 1988 15:17 MST (19 lines) From: Randall Jones Subject: MLA session on hypertext and literature (3) Date: Sun, 20-MAR-1988 15:53 EST (45 lines) From: Subject: TESTING OCR DEVICES (1) -------------------------------------------------------------------- Date: Mon, 21 Mar 88 22:02:28 GMT From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: chif chaff This man Marker is a real hero! Someone ASKS for my 'wit'? Actually our mail system (outgoing) was broken... Anyway, since all's quiet on the humanist front (rumblings of text engine guns in the distance), can I ask HUMANISTs to help me with a quick survey? I am interested in mailing software (it's a sort of research interest in this department), and I'd like some hard facts on a sort of 'homing pigeon' experiment. I am interested a) in what mailer (and thence operating system) people use, and b) in the approximate timings through nodes. I'd be very grateful if HUMANISTs could all send me a piece of mail which contains simply two lines with the name of their mail system (if known) and the name of their operating system.
I can extract the route from the mail header, and the date sent. No, this is not a silly 'postcard' request; it's a serious interest in communication. My address is NOT as shown above (as I said, the mailer's broken) but is spqr@uk.ac.soton.cm thanks sebastian PS two bulls at the bullfight, looking at bull #3 who's first into the ring. the crowd roar, bull #3 is confused and bewildered. "the cape, Larry, go for the cape!" bellows one of the watching bulls... but that's Gary Larson, not me (2) -------------------------------------------------------------------- Date: Mon, 21 Mar 1988 15:17 MST From: Randall Jones Subject: MLA session on hypertext and literature ACH (Association for Computers and the Humanities) is co-sponsoring a session at the 1988 Modern Language Association Annual Meeting in New Orleans in December on the topic of hypermedia and literature. If anyone is interested in participating in the session or can make a recommendation on behalf of someone else, please send a BITNET message to Randall Jones, JONES@BYUADMIN . Include in the note a brief description of the subject you (or the person you are recommending) would like to speak about. Participants in the session must be members of the MLA and must register for the conference. Randall Jones Humanities Research Center Brigham Young University (3) -------------------------------------------------------------------- Date: Sun, 20-MAR-1988 15:53 EST From: Subject: TESTING OCR DEVICES (41 LINES) From: Terrence Erdt ERDT@VUVAXCOM Standards for Testing Optical Character Recognition (OCR) Systems During the next few months, I shall be testing some new OCR equipment --the Kurzweil Discoverer 7320, the TransImage 1000, and the Saba Handscan (I've already examined the Palantir CDP). The TransImage is said to be capable of reading typeset matter as well as standard typewriter fonts, while the Handscan is confined to typewriter fonts and printer output.
The Discoverer, which sells for between ten and twelve thousand dollars, supposedly will scan typeset English, French, Italian, Swedish, and German. It seems to me that the capacity to convert typeset materials to machine readable form eventually will be an important element in a scholar's workstation, and it would make for efficiency if we could devise a standard test to evaluate new OCR products, particularly since commercial publications, such as PC Magazine, tend to focus only upon business applications, where the need to scan typeset matter isn't as critical. Some of the participants in the HUMANIST forum are quite experienced users of OCR devices, particularly Kurzweil products, and therefore I would like to pose the question of whether or not a standard test of OCR equipment is feasible: first, in terms of media submitted, such as microform, newsprint, and line printer output; secondly, in terms of the languages represented by the test. If a corpus of the kinds of test materials were agreed upon, would it be possible to standardize the administration and grading of the test? Would it be feasible to establish, for instance, a collection of test documents consisting of photocopies of the agreed-upon materials for use by anyone wanting to test the capabilities of an OCR device? Terrence Erdt, Ph.D.
Technical Review Editor Computers and the Humanities Graduate Department of Library Science Villanova University Villanova PA 19085 ERDT@VUVAXCOM.BITNET (215) 645-4670 ========================================================================= Date: Mon, 21 Mar 88 17:50:14 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Desiderata of retrieval software (285) ------------------------- Date: Sun, 20 Mar 88 17:48:12 PST From: "John J Hughes" SUBJECT: Text Retrieval Desiderata I would like to augment Robin Cover's and Malcolm Brown's thoughts on text retrieval desiderata by offering the following "Blue Sky Wish List" that outlines the sort of features I would like to see in an ``ideal'' text retrieval program. Some of these features were outlined on pages 247-49, 266-67, and 345-46 of _Bits, Bytes, & Biblical Studies_. Many of the features are taken from a discussion in a forthcoming review of some MS-DOS and Macintosh text retrieval programs that will appear in the next issue of the _Bits & Bytes Review_. Some of the features come from Robin's and Malcolm's lists. Although concording and hypertext programs are types of text retrieval programs (TRPs), I will use the latter term to refer to programs whose primary function is locating text, rather than concording it or linking sections together in a hypertext fashion. Generally speaking, I believe that an ideal TRP should be as fast and flexible as possible in its abilities to (1) locate highly defined (``disambiguated'') matches, (2) display them on-screen, (3) move from hits to the texts in which they occur, (4) sort the hits according to default or user-defined templates, (5) copy matches in user-defined contexts to a file or send them to a printer, and (6) format that output in user-defined ways. Specifically, an ideal TRP might support the following features. (Some of these features pertain to TRPs that work with indexed texts. I'm not going to discuss sorting options.) 
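[The six abilities just enumerated can be miniaturized in a few lines of code. The following is a hedged sketch in modern Python, not anything proposed in this discussion: the toy corpus, function names, and flat AND/NOT query shape are assumptions, and a real TRP would add indexing, nested Boolean expressions, and the sorting options set aside above.]

```python
import re

# A toy corpus of text units (e.g., verses); entirely invented for illustration.
CORPUS = [
    "Mr. Sherlock Holmes sat at the breakfast table.",
    "Watson examined the stick that Holmes had left behind.",
    "Mrs. Hudson brought in the morning post.",
    "Moran watched the window from the empty house.",
]

def words_of(unit):
    """Naive word delimiter: lowercase alphabetic runs."""
    return set(re.findall(r"[a-z]+", unit.lower()))

def search(corpus, all_of=(), none_of=()):
    """Ability (1): locate units containing every term in all_of and no
    term in none_of -- a flat AND/NOT query standing in for full Boolean logic."""
    return [i for i, unit in enumerate(corpus)
            if all(t in words_of(unit) for t in all_of)
            and not any(t in words_of(unit) for t in none_of)]

def copy_hits(corpus, hits):
    """Abilities (3) and (5): move from each hit back to its text unit and
    copy it out with the unit number as minimal context."""
    return ["[%d] %s" % (i, corpus[i]) for i in hits]

# "Holmes" but not "Watson": only the first unit qualifies.
print(copy_hits(CORPUS, search(CORPUS, all_of=("holmes",), none_of=("watson",))))
```

The feature lists that follow are, in effect, refinements of these two functions: proximity and positional operators constrain *where* in a unit the terms fall, hierarchical delimiters change what counts as a unit, and wild-card and regular-expression matching generalize the term test itself.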
SEARCHING/INDEXING FEATURES. An ``ideal'' TRP should support (1) the Boolean operators AND, OR, XOR, and NOT, (2) parentheses, e.g., "(Holmes AND Watson) NOT Moran" (3) nested Boolean operators, e.g., "((Holmes AND (Watson OR Hudson) OR Mycroft) NOT Moran" (4) user-specifiable default Boolean operators, e.g., OR, AND (5) positional operators (for positionally-dependent, or sequence-specific, searches), e.g., "Holmes W/5 Watson"--Holmes followed within 5 words by Watson-- (6) proximity operators (nearness), e.g., "Holmes N/5 Watson"--Holmes and Watson within 5 words of one another in either order-- (7) metric operators, e.g., "5" in the two previous examples is a metric operator (8) comparative operators, e.g.,"SHOW Dates=> 01/08/75" (9) range operators, e.g., "Holmes 1885:1887" (10) mixing Boolean, metric, proximity, positional, range, and comparative operators in a single search construction, (11) user-defined and user-specified text units on which operators operate (i.e., restricting searches by any user-defined or specified text unit, e.g., character, word, line, verse, sentence, paragraph), e.g., "(Holmes AND Watson)/S"--Holmes and Watson in the same sentence (S)--"(Holmes W/3 Watson)/S--Holmes followed within 3 sentences (S) by Watson-- (12) using positive and negative metrical operators when restricting searches by user-defined or user-specified text units, e.g., "(Holmes AND Watson)/2L"--Holmes and Watson within 2 lines of one another-- (13) long, multiple search-term constructions with expressed operators, e.g., "((Holmes AND (Watson OR Hudson) OR Mycroft) NOT Moran" (14) initial, medial, and terminal truncation of search terms, e.g., "*olmes", "H*es", "Hol*" (15) single and global wild-card characters, e.g., ? 
and * (16) mixed wild-card searches, e.g., "?olm* AND ?at*n" (17) regular-expression pattern matching for word and phrase fragments, (18) wild-card pattern matching for word and phrase fragments, e.g., "?olm* AND ?at*n" (19) exact-phrase searches, e.g., "Mr. Sherlock Holmes" (20) case-sensitive and case-insensitive searches, e.g., "Holmes" and "holmes" (21) punctuation-sensitive and punctuation-insensitive searches, (22) date-delimited searches, (23) user-specifiable, noncontiguous search ranges, e.g., "Matthew-John, Revelation" or "Romans 1:-4:10; Hebrews 2:3-10:2" (24) synonym-group searches, i.e., bilateral equivalence between a thesaurus entry and its synonyms, as Concept Finder allows (25) search macros, as ZyIndex allows (26) searching sets of retrieved records, i.e., searching the results of searches, as DIALOG allows (27) naming, saving, recalling, editing, and reusing search constructions, as Concept Finder allows (28) term weighting, i.e., user-assigned weighting of search terms, as SIRE and Personal Librarian allow (29) relevance ranking of hits, i.e., algorithm-ranking of hits from most probably relevant to least probably relevant, as SIRE and Personal Librarian allow (30) root matching, i.e., allows searches by root to locate all or user-specified inflected forms, as SIRE and Personal Librarian allow (31) automatic term associations, i.e., algorithm-suggested set of search terms based on associations between the original search terms and the data base, e.g., in response to a search for "food," the program suggests "vegetables, meat, dairy products, grains" as additional search terms, as SIRE and Personal Librarian allow (32) short-context disambiguation with output sorted by increasing or decreasing frequency or alphabetically, i.e., displaying hits in a context of 1 or 2 words so that users may specify contexts to be included and excluded in the search construction, as the IRCOL search software allows (33) counting as matches only those constructions 
that meet a user-specified number of conditions, some of which may be absolute conditions, e.g., 3 out of 5, as the IRCOL search software allows. Additionally, an ``ideal'' TRP that creates indices should support (34) multiple, simultaneous index creation, as DIALOG does (35) large indices, (36) numbered index sets that can be used in search constructions, e.g., "((S1 AND S2) OR S14) NOT Watson"--where S1, S2, and S14 represent sets of hits created by previous search constructions (DIALOG users will understand!) (37) statistical information about word usage as search terms are entered, i.e., display word-frequency information as search terms are entered in the construction before the search is run (38) search term selection from alphabetized word-frequency lists, i.e., as WordCruncher allows (39) search term selection from generic and from user-defined data base thesauri, i.e., as ZyIndex allows (40) automatic saving of search constructions with indices, i.e., as ZyIndex allows (41) user comments--header information--in indices, i.e., as ZyIndex allows (42) search-session histories, i.e., as DIALOG provides (43) appending new text to an existing index without having to reindex the entire file, as Concept Finder allows. For special scholarly needs, an ``ideal'' TRP should support (44) nonroman alphabets, (45) user-defined indexing sequences, as WordCruncher allows (46) texts that are tagged and searchable at the morphological, syntactical, and semantic levels, (47) searchable annotations on any element or tag in the data base, and (48) lemmatized data bases. DISPLAY/MOVE/PRINT FEATURES.
An ``ideal'' TRP should allow users to (1) display the search construction--terms, operators, and range--when searching and when displaying terms, (2) see the following information dynamically updated on-screen: name of term being searched for, logical operation being performed, name of file or record unit being searched, location of hits by file and line, cumulative number of hits per file or record unit, cumulative time required to perform function, and time remaining to complete entire search operation, (3) see matches highlighted in inverse video, (4) display the following information when outputting hits in any format to screen, printer, or disk file: the search strategy, the file name, file creation date, and line number in which the hit is located, the total number of hits found, the total number of hits in the file, record, or text unit being viewed, and the number of the hit being viewed, (5) display hits in the context of any specified number of user-defined or specified text units (e.g., 3 words, 4 lines, 2 sentences, 2 paragraphs, 5 verses) with either a balanced or an unbalanced number of units before and after the match, (6) jump directly from ``hits'' to a default or user-specified context--including the full-text context--and back, (7) jump directly from display format to display format, (8) jump from one hit to any other hit, (9) page bidirectionally through hits a user-specified number at a time, (10) page bidirectionally through the full text of a document, (11) go to any user-specified line in a document, (12) go immediately to the beginning or end of a document, (13) go immediately from any level marker or tag to any other level marker or tag, (14) page bidirectionally from level marker or tag to same-type of level marker or tag, (15) display and print ``hits'' in a user-specifiable context of N-number of characters, words, lines, sentences, paragraphs, or user-defined text unit, (16) display and print hits in default and user-defined formats, 
(17) display and print word-frequency information, (18) display and print distribution-frequency information, and (19) automatically and conditionally write matches in user-specified contexts and formats to disk files or send them to the printer. Additionally, TRPs that create indices should allow users to (20) display hits nonsequentially in a user-specified order. INDEX MANIPULATION. An ``ideal'' TRP that creates indices should allow users to (1) combine two or more indices (the equivalent of a Boolean OR operation), (2) intersect two or more indices (the equivalent of a Boolean AND operation), (3) show the difference between two indices (the equivalent of a Boolean NOT operation), (4) edit indices by adding or eliminating verses. ***** Of course, such a program does not exist! I believe that a program such as I have roughly outlined would best be designed by a group that included one or more of each of the following sorts of people: (1) humanists, (2) linguists (of course, they're humanists too), (3) computer scientists, (4) information retrieval specialists, (5) programmers with academic backgrounds, and (6) persons with experience in designing interfaces for commercial software. ========================================================================= Date: Mon, 21 Mar 88 17:57:06 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Nota Bene (56) (1) Date: Mon, 21 Mar 88 07:58:30 EST (25 lines) From: lang@PRC.Unisys.COM Subject: Wordprocessing, mostly multilingual (102) (2) Date: 17 March 1988 (13 lines) From: Itamar Even-Zohar Subject: Nota Bene 3.0 (1) -------------------------------------------------------------------- Date: Mon, 21 Mar 88 07:58:30 EST From: lang@PRC.Unisys.COM Subject: Wordprocessing, mostly multilingual (102) Re: word processing for Hebrew, etc. What is the current status of Nota Bene?
About three years ago, I reviewed the then-current version for Jack Abercrombie (I forget the version number, but it was a very early one, and my recollection was that it didn't then offer Hebrew, but did offer Greek. I don't recall if either Hebrew or Arabic was even planned.) While I'm on the subject of Nota Bene, I'd be interested in hearing from any HUMANISTS who currently use NB to find out about their opinions of the package. A friend of mine in the Classical Studies department at Penn is considering buying NB, and I told him I'd try to get some feedback from NB users. Since this might not be of sufficiently general interest, I'd suggest that any opinions about NB be mailed directly to ME (lang@prc.unisys.com or lang@linc.cis.upenn.edu) and if I get enough interesting opinions, I'll send the lot of them en masse to HUMANIST. Many thanks for any opinions. (2) -------------------------------------------------------------------- Date: 17 March 1988 From: Itamar Even-Zohar Subject: Nota Bene 3.0 I received Nota Bene 3.0 a few days ago and have dedicated some time now to studying, investigating and customizing it. In this version, Hebrew works all right the way I customized the Beta version before, but we still expect the more advanced Nota Bene version. So I will NOT refer to any specific problems with Hebrew in this document. [An extensive review of NB 3.0 follows in the version of this document currently available on the file-server as NOTABENE REVIEW. -- W.M.] ========================================================================= Date: Mon, 21 Mar 88 18:01:01 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Biographies (18) ------------------------- Date: Sun, 20 Mar 88 12:32:42 PST From: tektronix!reed!johnh@uunet.UU.NET (John B. Haviland) Subject: biographies I, for one, applaud Steven DeRose's initiative in making use of the biographies which are what caught my eye in the first place about HUMANIST.
Why not circulate the proposed standard, if only because it might inspire some of us to revise what was (for me at least) an uninformed shot-in-the-dark at the terminal keyboard when I first made application to Humanist? ========================================================================= Date: Tue, 22 Mar 88 23:17:09 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Queries, replies, & an announcement (93) (1) Date: Tue, 22 Mar 88 20:24:19 MST (18 lines) From: Mark Olsen Subject: Drivers for Hercules graphics card (2) Date: Tuesday, 22 March 1988 1816-EST (6 lines) From: JACKA@PENNDRLS Subject: LIST OF TEXTS AND SOFTWARE AVAILABLE FROM CCAT (3) Date: Tue, 22 Mar 88 13:58:55 CST (15 lines) From: D106GFS@UTARLVM1 Subject: Query on CAI progs for Biblical languages (4) Date: Tue, 22 Mar 88 09:38:20 EST (13 lines) From: Toby Paff Subject: Re: Multilingual wordprocessing (45) (5) Date: Mon, 21 Mar 88 12:02:03 PST (8 lines) From: cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject: Re: Interlinear translation programs (1) -------------------------------------------------------------------- Date: Tue, 22 Mar 88 20:24:19 MST From: Mark Olsen Subject: Drivers for Hercules graphics card I'm reviewing a program called RS/1 and it requires a couple of drivers for the Hercules monochrome graphics card that I've never heard of. The system requires the HGC FULL command (which I have) followed by INT10 (COM) and HARDCOPY (COM) which seem to act as extensions to DOS. Also involved in this is GRAPH_X which is a screen dump utility (I think, the documentation is vague here) which is called by HARDCOPY.COM. I'm stumped and wonder if anyone has run across these routines. Any information would be greatly appreciated. I am, obviously, looking for copies of these utilities which I am assuming were written by HERCULES for their card. All these years with my HERC card and I find out that I haven't even been using it right.
Maybe I'll have to convince my wife that we really NEED EGA and color. Thanks, Mark (2) -------------------------------------------------------------------- Date: Tuesday, 22 March 1988 1816-EST From: JACKA@PENNDRLS Subject: LIST OF TEXTS AND SOFTWARE AVAILABLE FROM CCAT LIST OF TEXTS AND SOFTWARE AVAILABLE FROM CCAT [now available on HUMANIST's file-server s.v. CCAT HOLDINGS] (3) -------------------------------------------------------------------- Date: Tue, 22 Mar 88 13:58:55 CST From: D106GFS@UTARLVM1 Subject: Query on CAI progs for Biblical languages I have been asked to inquire about CAI programs for Biblical languages for a colleague. Could people who are aware of such relieve my ignorance? I am aware of George Kong's "MemCards" system, which he presented at SBL, and the U Minn Basic Word Study programs, mentioned in Wheels for the Mind, but haven't looked into the area any further. Please reply direct to D106GFS @ UTARLVM1; I'll summarize to HUMANIST if there is sufficient interest. Thanks! Steve DeRose, Brown Univ. and SIL (4) -------------------------------------------------------------------- Date: Tue, 22 Mar 88 09:38:20 EST From: Toby Paff Subject: Re: Multilingual wordprocessing (45) I have seen the Nota Bene supplement for Hebrew... Mark Cohen at Princeton has a copy. It is strictly a beta test version, and we have developed a program to convert our ksemitic files into their version of Hebrew, the code points for which, alas, conflict with all the other standards. Anyone interested can contact me. Toby Paff Princeton University C.I.T. (5) -------------------------------------------------------------------- Date: Mon, 21 Mar 88 12:02:03 PST From: cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject: Re: Interlinear translation programs (48) Ken Whistler (Dept. of Linguistics, UC Berkeley) is selling a concordance/text analysis package which is specifically designed to handle linguistic corpora with interlinear translations.
========================================================================= Date: Tue, 22 Mar 88 23:21:21 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: On desiderata for text retrieval (116) ------------------------------------------------------------------------ Date: 22 March 1988 10:40:55 CST From: "Michael Sperberg-McQueen" The desiderata listed by Robin Cover, Malcolm Brown and John J. Hughes for text search and retrieval programs have all given me great pleasure (imagining programs with such features), some pain (reflecting that such programs don't exist), and some wistful daydreams (ah, if one only had the time to drop everything else and develop such a program, or set of programs . . . ). I hope the software developers among us are listening! And on the off chance that they are, here are my own two cents' worth: An ideal text retrieval program should understand and help examine textual variation, whether from successive authorial drafts, successive editions of a printed work, divergent manuscript transmission of a text, or revision of a text (e.g. in legal codes or commentaries). (These are technically and correctly known as 'text states', but I will refer to them as 'manuscripts' because that's probably easier to follow.) Understanding and handling manuscripts correctly means: 1 Each of the kinds of searches noted by Cover, Brown, or Hughes should be available - for specified manuscripts ("In ms C, find all occurrences of 'Holmes' within one sentence of 'Watson' ...") - for all manuscripts ("In *any* ms, find ...") - for some subset of the manuscripts ("find this, but only in mss A, B, or C ...") Note that the distance between two words is no longer absolute, but a function of the manuscript in which the distance is measured. 2 In browsing, the user should be able to select any manuscript as the basis for the display ("show me this passage in ms. A! ... now in ms. 
B!") 3 In displaying hits, the program must obviously display the version of the text that has the hit. (If ms. A has 'Holmes' and 'Watson' together in this sentence, but ms. B doesn't, the display must be based on A, not B.) Where more than one ms. has the same wording for a passage, that passage should be listed as a hit only once, and displayed according to the 'preferred' manuscript. The user, of course, gets to say the order in which mss are to be preferred. And should be able to switch, within the 'hit' window, to any other ms. 4 Both for browsing and for display of hits, the user should be able to request and see an apparatus criticus for the passage shown on the screen. When the base manuscript changes ("show me this passage in manuscript K!") the apparatus, of course, must change too. When the text window scrolls, of course the apparatus scrolls too. 5 The user should be able to specify dynamically which manuscripts should be included in the apparatus. Extra credit: the user can mark specific variants as interesting or not interesting (or with a variety of flags: lexical, orthographic, syntactic, metrical, ...) and have variants included or excluded from the visible apparatus on that basis. 6 In browsing, the user should be able to open a parallel window with a different base manuscript, so that ms. A can be seen in one window and ms. B in the other. Apparatus should be available in each window. When one scrolls, the other should scroll, too. 7 In addition to the searches specified by Cover, Brown, and Hughes, the user should be able to search for manuscript variations of a specific type: "find all the cases where ms. C reads 'recke' and ms. A reads 'helet'" or "find all the cases where some mss read 'damn' and others 'darn'". If you have a parsed text, it should be possible to say "find all the sentences where some mss. have a pluperfect but others a simple past tense" and so on. 
8 As a sideline, the program ought to be able to generate lists of variants and variations of the sort developed by Henri Quentin and used by W. W. Greg or Vinton Dearing, for analysis using other programs and eventually for the generation of stemmata. It would be nice if the program could also be given a stemma and from it generate the text determined by that stemma. (Indeterminate passages highlighted, with alternatives printed below each other, for the editor to choose between?) I should point out that some things of this kind can be done by the program TUSTEP, developed at Tuebingen by Wilhelm Ott; I don't know much in detail because I have not used the program or had the chance to read its documentation. Unless I am much mistaken, however, Tustep is principally batch-oriented, and I have been envisioning an interactive program. Several programs have been mentioned already in this discussion as able to do at least some of the kinds of searches desired. Not mentioned yet, though (I think) is ARRAS, developed by John B. Smith and now sold by Conceptual Tools, of Chapel Hill, NC. While not yet the system of our dreams, ARRAS (which runs on IBM System/370 machines, under TSO or VM/CMS) will handle points 1-3, 5-7, 11-16, 19, 24, 38, 42, of John J. Hughes's list. Not bad at all, on the whole, and cheap by comparison with most other mainframe software. Smith is working on a micro (RT-class machine) version that is even more elaborate and slick. But I notice that all of this discussion of text-retrieval software remains, as Willard observed some months ago, within the world of the text itself: if not 'New Critical', then at least still 'text-immanent'. And I wonder: what desiderata would others add to these lists? 
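Point 7's variation searches reduce, in the simplest case, to queries over witness readings at shared alignment positions. Purely as a speculative sketch (assuming the manuscripts have already been aligned word by word, and using the 'recke'/'helet' example from point 7; the surrounding readings and the alignment itself are invented for illustration):

```python
# Speculative sketch of point 7: manuscripts modeled as dicts mapping a
# shared alignment position to that witness's reading. Readings at
# positions 0 and 2 are invented filler; position 1 uses the example
# readings quoted above ('recke' in ms. C, 'helet' in ms. A).
mss = {
    "A": {0: "der", 1: "helet", 2: "guot"},
    "C": {0: "der", 1: "recke", 2: "guot"},
}

def variation_sites(mss):
    """Alignment positions where at least two witnesses disagree."""
    sites = {}
    for siglum, readings in mss.items():
        for pos, word in readings.items():
            sites.setdefault(pos, {})[siglum] = word
    return {p: r for p, r in sites.items() if len(set(r.values())) > 1}

def find_variant(mss, **wanted):
    """Positions matching a pattern of variation,
    e.g. find_variant(mss, C='recke', A='helet')."""
    return [p for p, r in variation_sites(mss).items()
            if all(r.get(s) == w for s, w in wanted.items())]

print(find_variant(mss, C="recke", A="helet"))
```

The same variation_sites table is essentially the raw material for an apparatus criticus (points 4-5) and for the Quentin-style variant lists of point 8; a real implementation would of course also have to compute the alignment itself, which is the hard part.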
Michael Sperberg-McQueen, University of Illinois at Chicago ========================================================================= Date: Tue, 22 Mar 88 23:27:02 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Computing for humanists at York, U.K. (94) ------------------------------------------------------------------------ Date: 22-MAR-1988 15:16:37 GMT From: SRRJ1@VAXA.YORK.AC.UK The following message was intended for Keith Whitelaw. I'm sorry to burden the whole of Humanist with my reply to his recent request [due to lack of adequate userid/postal address], but I hope it may contribute to the fitful debate about the allocation of resources for Humanities computing, which still seems a vital issue to me. In reply to your recent request to Humanist for information about computing for arts students, I thought you might be interested to know about the way we have begun to introduce computing in the History department at York. As yet we have no money for hardware, no money for special staff, no money for software (we're trying), so we're not really comparable to places like Glasgow, but things can still be achieved. We do have a responsive, roomy new VAX/VMS cluster on campus with terminal classrooms to which we have free access, and we do have some terminals in our offices now too. We have started this year by introducing a revamped computing ancillary course for history students taught by history lecturers. [ A former ancillary taught years ago by Computing staff and not aimed especially at History students had withered away and finally died during the confusion of changing over to a new mainframe. ] The new course was initiated by history lecturers and is offered as a voluntary extra to students and is not an integral part of their degree. So far we've been overwhelmed by its popularity. By the end of the first year we will have taught 60 (out of c. 270) of our students.
Demand was greater, but this is the maximum number it was possible to accommodate. 1. course content Part 1 (4 hours) Introduction to VAX/VMS Introduction to Edit and Mail, and some use of remote systems. Part 2 (4 hours) Introduction to Datatrieve (data entry and retrieval system) Part 3 (4 hours) Introduction to WPS plus 2. assignments Part 1 To edit a text on the childhood of Henry V and MAIL it to tutor. Part 2 To retrieve information from existing bibliographical database in the tutor's area. To set up a database of the population of 19th century English towns in the student's own area using a single domain. To set up a bibliography database linked to a keyword database (using two linked domains) in student's area. Part 3 To enter and format a text, containing a table, which is at least 3 pages long. 3. methods of assessment A voluntary exam at the beginning of the following term, undertaken independently in students' own time testing skills in all three areas, again using historical materials. The student will be given a certificate ratified by the Board of Studies listing satisfactory/unsatisfactory performance in each of the three areas. 4. It is taught by History staff. 5. any other information that you think appropriate. The first time we taught the course we also included introductions to the SORT/MERGE and SEARCH facilities of DCL, and to OCP. But this was definitely trying to squeeze a quart into a pint pot and has now been dropped. Students who have taken the course continue to use the VAX and teach their friends, but we haven't yet measured this continued use. Next year we intend to expand on this by incorporating the use of IT into mainstream history courses, but it has been useful to gain this experience of teaching in a terminal room, which was new for all of us. Staff involved Dr Sarah Rees Jones, Dr Ted Royle, Dr Edward James, Dr John Wolffe. Advice from John Illingworth and Dr Rob Fletcher in the Computing Service.
Hope this is of some help. Sarah Rees Jones. [srrj1@uk.ac.york.vaxa] ========================================================================= Date: Wed, 23 Mar 88 19:50:17 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Mail survey; another e-mail service (78) (1) Date: Wed, 23 Mar 88 10:03:59 GMT (16 lines) From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: mail survey (2) Date: Wed, 23 Mar 88 12:13 EDT (45 lines) From: Comserve@Rpicicge Subject: About Comserve (1) -------------------------------------------------------------------- Date: Wed, 23 Mar 88 10:03:59 GMT From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: mail survey a) To all those who asked me a personal question: my outgoing mail is dead, so I'll wait to reply for a week or so until it recovers b) For those who have never mailed to the UK, it should be said that some mailers require you to say spqr@cm.soton.ac.uk not uk.ac.soton.cm It depends how sophisticated your software is! Mine likes the highest level name (such as uk) first, others last. Sebastian Rahtz PS could the notes about biographies be put on the file server (2) -------------------------------------------------------------------- Date: Wed, 23 Mar 88 12:13 EDT From: Comserve@Rpicicge - - - C O M S E R V E - - - Comserve is a service for professionals and students interested in human communication studies. Comserve is supported through the cooperation of the Center for Interactive Computer Graphics and the Department of Language, Literature, and Communication at Rensselaer Polytechnic Institute. Comserve's Principal Functions 1. Comserve is a "file server;" i.e., Comserve can send you copies of files -- computer programs and documents including bibliographies, instructional materials, announcements, research instruments, etc. -- from its extensive collection. 2. Comserve is a news service. Announcements of interest to users are distributed periodically in issues of Comserve's electronic news bulletin. 3.
Comserve maintains a "white pages" or "user directory" service. 4. Comserve has a "Hotline" system that provides a method for communicating with others on topics of general interest in communication studies. 5. Comserve has a system for automatic distribution of announcements or survey forms in electronic format. If you have questions about Comserve or would like to submit information to be distributed by Comserve, contact Comserve's editorial staff at Bitnet address: SUPPORT@RPICICGE. A free hardcopy booklet named "Comserve User's Guide" can be obtained by sending a request to SUPPORT@RPICICGE. Be sure to include your "normal" (i.e., not your computer mail) address with your request. Comserve is supported by the Eastern Communication Association, the International Communication Association, and Rensselaer Polytechnic Institute ========================================================================= Date: Wed, 23 Mar 88 21:26:51 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Teaching computing to humanists (50) ------------------------- Date: Wed, 23 Mar 88 10:15 EST From: Bob Tannenbaum (RSTHC@CUNYVM) The question by Keith Whitelam regarding information on courses designed to teach computing to humanists and the responses by Joe Rudman and Sarah Rees Jones have opened a subject that I feel is most important. I hope that others will share their experiences and suggestions via HUMANIST. At the Vassar Workshop on this subject in the summer of 1986, over 100 faculty members from many institutions in North America and Europe gathered to discuss their experiences in developing and teaching courses in computing to humanists. All who were actually teaching such a course brought materials such as syllabi and assignments to share. I believe a collection of these materials still exists somewhere in Nancy Ide's closet, because our original intention was to begin a "clearinghouse" for materials related to such courses. 
Unfortunately, we could not obtain the funding for the clearinghouse, so it remains a dream. We have produced the issue of CHum 21(4) to which Joe Rudman made reference. That issue is a direct result of the Vassar Workshop. It contains Bob Oakman's Keynote Address, Joe Rudman's excellent survey and bibliography, and articles by Nancy Ide and me about "What" we should teach (Nancy) and "How" we should teach it (me). The conference scheduled for 16-18 June 1988 at Oberlin College in Oberlin, Ohio is also concerned exclusively with teaching computers and the humanities courses. We will have a Keynote Address by Joe Rudman, four panels devoted to different aspects of the subject, and over 30 contributed papers by scholars from the United States and Canada who are teaching computing to students in all different branches of the humanities. The papers include teaching students in music, languages and literature, translation, history, and philosophy, among other subjects. It is my hope that the papers will appear in a formal proceedings, together with materials gathered at the Vassar Workshop and at this conference. I am currently working on realizing that hope. I invite all of you who are teaching in this field to share your experiences with the rest of us via HUMANIST and to join us at Oberlin in June. To be put on the conference mailing list, send a message to Dr. Roberta Russell at Oberlin (PRUSSELL@OBERLIN). Bob Tannenbaum (RSTHC@CUNYVM) ========================================================================= Date: Thu, 24 Mar 88 16:29:16 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: E-mail survey & an e-mail service (79) [This mail was detained by ListServ for some completely obscure reason. My apologies on behalf of the software. --W.M.] 
(1) Date: Wed, 23 Mar 88 10:03:59 GMT (16 lines) From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: mail survey (2) Date: Wed, 23 Mar 88 12:13 EDT (45 lines) From: Comserve@Rpicicge Subject: About Comserve (1) -------------------------------------------------------------------- Date: Wed, 23 Mar 88 10:03:59 GMT From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: mail survey a) To all those who asked me a personal question: my outgoing mail is dead, so I'll wait to reply for a week or so until it recovers b) For those who have never mailed to the UK, it should be said that some mailers require you to say spqr@cm.soton.ac.uk not uk.ac.soton.cm It depends how sophisticated your software is! Mine likes the highest level name (such as uk) first, others last. Sebastian Rahtz PS could the notes about biographies be put on the file server (2) -------------------------------------------------------------------- Date: Wed, 23 Mar 88 12:13 EDT From: Comserve@Rpicicge - - - C O M S E R V E - - - Comserve is a service for professionals and students interested in human communication studies. Comserve is supported through the cooperation of the Center for Interactive Computer Graphics and the Department of Language, Literature, and Communication at Rensselaer Polytechnic Institute. Comserve's Principal Functions 1. Comserve is a "file server;" i.e., Comserve can send you copies of files -- computer programs and documents including bibliographies, instructional materials, announcements, research instruments, etc. -- from its extensive collection. 2. Comserve is a news service. Announcements of interest to users are distributed periodically in issues of Comserve's electronic news bulletin. 3. Comserve maintains a "white pages" or "user directory" service. 4. Comserve has a "Hotline" system that provides a method for communicating with others on topics of general interest in communication studies. 5.
Comserve has a system for automatic distribution of announcements or survey forms in electronic format. If you have questions about Comserve or would like to submit information to be distributed by Comserve, contact Comserve's editorial staff at Bitnet address: SUPPORT@RPICICGE. A free hardcopy booklet named "Comserve User's Guide" can be obtained by sending a request to SUPPORT@RPICICGE. Be sure to include your "normal" (i.e., not your computer mail) address with your request. Comserve is supported by the Eastern Communication Association, the International Communication Association, and Rensselaer Polytechnic Institute ========================================================================= Date: Thu, 24 Mar 88 16:33:35 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Teaching computing to humanists (52) [The following was also delayed by ListServ. -- W.M.] ------------------------- Date: Wed, 23 Mar 88 10:15 EST From: Bob Tannenbaum (RSTHC@CUNYVM) The question by Keith Whitelam regarding information on courses designed to teach computing to humanists and the responses by Joe Rudman and Sarah Rees Jones have opened a subject that I feel is most important. I hope that others will share their experiences and suggestions via HUMANIST. At the Vassar Workshop on this subject in the summer of 1986, over 100 faculty members from many institutions in North America and Europe gathered to discuss their experiences in developing and teaching courses in computing to humanists. All who were actually teaching such a course brought materials such as syllabi and assignments to share. I believe a collection of these materials still exists somewhere in Nancy Ide's closet, because our original intention was to begin a "clearinghouse" for materials related to such courses. Unfortunately, we could not obtain the funding for the clearinghouse, so it remains a dream. We have produced the issue of CHum 21(4) to which Joe Rudman made reference. 
That issue is a direct result of the Vassar Workshop. It contains Bob Oakman's Keynote Address, Joe Rudman's excellent survey and bibliography, and articles by Nancy Ide and me about "What" we should teach (Nancy) and "How" we should teach it (me). The conference scheduled for 16-18 June 1988 at Oberlin College in Oberlin, Ohio, is also concerned exclusively with the teaching of computers-and-the-humanities courses. We will have a Keynote Address by Joe Rudman, four panels devoted to different aspects of the subject, and over 30 contributed papers by scholars from the United States and Canada who are teaching computing to students in all the different branches of the humanities. The papers cover the teaching of students of music, languages and literature, translation, history, and philosophy, among other subjects. It is my hope that the papers will appear in a formal proceedings, together with materials gathered at the Vassar Workshop and at this conference. I am currently working on realizing that hope. I invite all of you who are teaching in this field to share your experiences with the rest of us via HUMANIST and to join us at Oberlin in June. To be put on the conference mailing list, send a message to Dr. Roberta Russell at Oberlin (PRUSSELL@OBERLIN). Bob Tannenbaum (RSTHC@CUNYVM) ========================================================================= Date: Thu, 24 Mar 88 16:49:48 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test -- please ignore! Testing 1,2,3. ========================================================================= Date: Thu, 24 Mar 88 16:53:34 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test - please ignore! Testing 4,5,6. ========================================================================= Date: Fri, 25 Mar 88 10:29:04 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test - please ignore!
Date: Fri, 25 Mar 88 10:23:22 EST From: Steve Younker Subject: test Ignore this test ========================================================================= Date: Fri, 25 Mar 88 10:35:37 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test - Please ignore! Date: Fri, 25 Mar 88 10:23:22 EST From: Steve Younker Subject: test Ignore this test ========================================================================= Date: Fri, 25 Mar 88 10:48:18 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test - Please ignore! Date: Fri, 25 Mar 88 10:23:22 EST From: Steve Younker Subject: test Ignore this test ========================================================================= Date: Sun, 27 Mar 88 18:46:39 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test - please ignore Please ignore this entirely. ========================================================================= Date: Tue, 29 Mar 88 10:13:40 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Test - please ignore! Ignore this except to celebrate the fact that it has arrived. ========================================================================= Date: Tue, 29 Mar 88 10:17:24 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Another test! Ignore this joyfully. ========================================================================= Date: Tue, 29 Mar 88 11:24:22 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Announcements (58) (1) Date: 26-MAR-1988 11:33:19 GMT (13 lines) From: Grace Logan Subject: Conference announcement (2) Date: Mon, 28 Mar 88 14:40 O (11 lines) From: John D. 
Hopkins Subject: CompTrans Report (3) Date: 29 March 1988 (12 lines) From: Willard McCarty Subject: Error in file-name (1) -------------------------------------------------------------------- Date: 26-MAR-1988 11:33:19 GMT From: Grace Logan Subject: Conference announcement Assoc. for Logic Programming: Fifth Int'l Logic Programming Conference; Fifth Symposium on Logic Programming. 15-19 August 1988, Seattle, Wash. Contact on the above conference is Kenneth A. Bowen, Syracuse University, Logic Programming Research Group, School of Computer and Information Science, Syracuse, NY 13210. (2) -------------------------------------------------------------------- Date: Mon, 28 Mar 88 14:40 O From: John D. Hopkins Subject: CompTrans Report COMPUTER APPLICATIONS IN TRANSLATOR TRAINING AND TRANSLATION WORK IN FINLAND A report by John D. Hopkins, University of Tampere (Finland) International Conference For Translators and Interpreters Vancouver Community College, Canada 22-24 May, 1987 [Now available on the file-server, s.v. COMPTRAN REPORT] (3) -------------------------------------------------------------------- Date: 29 March 1988 From: Willard McCarty Subject: Error in file-name I mistakenly announced that the listing of CCAT's texts and software had been stored on our file-server as CCAT HOLDINGS. The real name is CCAT COLLECTN. My apologies. Willard McCarty mccarty@utorepas ========================================================================= Date: Tue, 29 Mar 88 11:29:40 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Greetings (27) Dear Colleagues: You should already be receiving HUMANIST mail from our locally resurrected system. My thanks for the several encouraging messages, such as the following: -------------------------------------------------------------------------- Subject: HUMANIST Date: 25 Mar 88 08:47 -0330 AAARRGHHH! It's been 36 hours! HUMANIST! I need HUMANIST! Hope you are not ill. 
Best Wishes -------------------------------------------------------------------------- A question for you. I know that at least one person is having some trouble extracting the digested messages she wants to keep from those she doesn't. Who else? Perhaps if the people with this trouble were to identify themselves and their systems, others might have useful suggestions or solutions. Willard McCarty mccarty@utorepas ========================================================================= Date: Tue, 29 Mar 88 11:49:47 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Files on the server (75) (1) Date: 24 March 1988 (44 lines) From: Willard McCarty Subject: Availability of files on the server (2) Date: Wed, 23 Mar 88 08:47:57 PST (13 lines) From: "John J Hughes" Subject: Outline of Bits, Bytes, & Biblical Studies (1) -------------------------------------------------------------------- Date: 24 March 1988 (0 lines) From: Willard McCarty Subject: Availability of files on the server Please allow at least 24 hours from the time a file is announced as being available on the server until you attempt to fetch it. Soon, I am told, I'll be allowed to put things there myself, but until the file-serving software is granted its maturity by the local authorities, I must ask the assistance of our always helpful postmaster. Thus the delay. It is vastly more convenient for both of us to work in this way under the current dispensation than for me to wait until a file is actually there before I announce it. Your indulgence, please. I attach below the standard message that I send to HUMANISTs who ask me to get files for them and to those who make the easy mistake of asking HUMANIST, rather than ListServ, for the file(s) they want. W.M. mccarty@utorepas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Dear Colleague: Thanks for your query about fetching items from our file-server. The instructions for doing so are contained in the latest edition of "Guide to HUMANIST" in a file named HUMANIST GUIDE. If you joined our group recently, you will have received the proper edition of the Guide as part of the initial batch of files. If you've been a member for some time, you will have received this edition as a separate piece of mail. If you do not have the proper edition of the Guide, please let me know. Otherwise, please, attempt to get the file(s) you want by following the instructions. If all your attempts fail, then let me know, and I'll gladly get the file for you. I'm sorry to have to redirect your request in this way, but the volume of mail I receive daily does not allow me to give as much individual attention as I would like. +++++++++++++++++++++++>>Note well<<+++++++++++++++++++++++++++++++++ Send your requests for files, including HUMANIST FILELIST (the file of files), to LISTSERV@UTORONTO and *not* to HUMANIST@UTORONTO. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Willard McCarty mccarty@utorepas (2) -------------------------------------------------------------------- Date: Wed, 23 Mar 88 08:47:57 PST From: "John J Hughes" Subject: Outline of Bits, Bytes, & Biblical Studies Analytical Outline of _BITS, BYTES, & BIBLICAL STUDIES: A Resource Guide for the Use of Computers in Biblical and Classical Studies_. John J. Hughes (Grand Rapids: Zondervan Publishing House, 1987. 650 pp. Available from Zondervan or from Bits & Bytes Computer Resources, 623 Iowa Ave., Whitefish, MT 59937; (406) 862-7280; XB.J24@Stanford.BITNET). [This outline is available in full (more than 480 lines) on the file-server s.v. BITBYTES OUTLINE.] 
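To illustrate the "Note well" instruction above: a file request is an ordinary mail message addressed to LISTSERV@UTORONTO whose body contains GET commands, one per line. The example below uses file names announced in this digest; the exact command syntax can vary between ListServ installations, so treat it as a sketch and consult HUMANIST GUIDE for the authoritative form.

```
GET HUMANIST FILELIST
GET BITBYTES OUTLINE
```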
========================================================================= Date: Tue, 29 Mar 88 11:52:48 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Nota Bene for Hebrew (86) ------------------------- Date: Fri, 25 Mar 88 10:29:02 IST From: Itamar Even-Zohar Subject: Information about Nota Bene Hebrew version(s) I have seen some queries in HUMANIST about Nota Bene's Hebrew version. As we have had quite a lot of experience with it here at the Porter Institute, Tel Aviv University, here is some brief information: 1. There is a Beta version of Hebrew for Nota Bene's multilingual supplement. This version is made for American scholars who have no Hebrew chip in their printers (nor in their computers, but the latter can easily be solved with a Hercules Graphics Plus card). It can print vocalization plus other Masoretic material. It can probably provide everything a Biblical scholar in America might need for his/her scholarly work (especially since Greek is available too). 2. This version is NOT adequate for Israelis and other people using Hebrew as a *primary* language, because the Hebrew alphabet has been imposed on various ASCII positions that are not compatible with those used in Israel. Since IBM people have imposed Hebrew on ASCII 128-154, once you change that you cannot be compatible with any other local programs, and that is not acceptable to us. For this reason, Dragonfly have provided us with a so-called "chip version", intended for people who have access to hardware adapted to Hebrew computing and printing (the regular hardware sold in this country). The Beta version originally used print modes to switch from Hebrew to Latin (allowing both push mode and regular writing in both directions within one file). I have temporarily changed that to impose language (direction) change on PITCH (e.g., Latin is PT12 while Hebrew is PT112 etc.; LR0 and LR1 respectively control the primary/major direction).
I have avoided the SLS mode and returned completely to regular Nota Bene. The same solutions actually work very well with the new 3.0 version. You just need a different printer table (not provided by Dragonfly) and must impose the Hebrew letters on the CapsLock key. 3. However, we are still waiting for a new Hebrew version, since there are some more problems to solve: in my customized version, each time you change directions you must also press/release CapsLock (where the Hebrew letters are located). Perhaps a quicker solution can eventually be found (though I do not consider that any major issue). Moreover, I have heard that instead of the somewhat awkward modes with numbers (Hebrew=MD71, Hebrew underline=MD65 etc.) there will be more easily memorizable codes (MDHM or MDHU probably). 4. All versions carry out most of the regular functions of Nota Bene. Equal-width columns, however, are a bit shaky if you use more than 2; newspaper columns won't work. Textbase, database, and mailmerge will all work with certain minor extra steps (you can now get a full vocabulary of a Hebrew text since the new Textbase fully recognizes high-bit ASCII without transforming it to low-bit ASCII equivalents as it used to do before); accumulation of bibliographies works perfectly well; indexes and tables of contents work all right, but for the moment you must run a program I have written to transform all the *numbers* to "left-right" direction, since they come out "right-left" (21 instead of 12 etc.). If you are interested in more details, reports on problems, and other suggestions, I will shortly put on LISTSERV@TAUNIVM a document based on my previous memos to the Dragonfly people about the Hebrew version. No doubt this is the most powerful REAL software package for Hebrew and multilingual writing. But if all you need is a few words of Hebrew inserted in some text, or just quotations, I am not sure you should bother, unless you are using Nota Bene anyway.
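The number-direction repair described above -- page and index numbers emerging with their digit runs mirrored, "21" for "12" -- amounts to reversing every maximal run of digits in the text. A minimal generic sketch of that idea (hypothetical; not Even-Zohar's actual Nota Bene program, which operates on Nota Bene files directly):

```python
import re

def fix_number_direction(text):
    # Reverse each maximal run of digits, so numerals that came out
    # right-to-left (e.g. "21") are restored to left-to-right ("12").
    return re.sub(r"\d+", lambda m: m.group(0)[::-1], text)

print(fix_number_direction("pp. 21-43"))  # -> pp. 12-34
```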
[Editor's note: This file, NOTABENE REVIEW, is on our file-server.] Hebrew fonts designed by me and Nimrod Gil-Ad for the Hercules Graphics Card Plus are downloadable from LISTSERV@TAUNIVM. I have designed Russian and Arabic fonts as well, but I do not recall for the moment whether they are available right now on the listserv. (Writing Arabic with Nota Bene would work precisely like writing Hebrew, but you have no built-in solution for printing; only hardware can solve the issue, unless you are an expert in downloading printer fonts yourself.) ========================================================================= Date: Tue, 29 Mar 88 11:57:04 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Text retrieval software (44) ------------------------- Date: Wed, 23 Mar 88 20:04:21 -0800 From: mbb@jessica.Stanford.EDU Subject: a few more thoughts on the text retrieval (31 lines) Well, recent contributions by John Hughes and Michael Sperberg-McQueen have, I think, pretty much done the job of covering all that is wished for in a text search engine. It's been a most helpful exchange, helping me to completely revamp and extend the initial list I published on HUMANIST. Obviously we won't all agree as to what is "required," which is natural, reflecting different kinds of scholarship and approaches to it. For example, much of what Michael calls for would be nice to have, but to my mind would be "extra credit." But vive la difference! Michael's contribution recalled my submission of some months back that kicked off a conversation about hypertext and high-powered workstations. In my utopian computer environment, BOTH a search engine (as has been outlined) and a robust hypertext system are integrated. For example, Michael's point about the on-line availability of an "apparatus criticus" seems ideally suited to a hypertext system.
Only if both components are available and integrated can the computer really keep pace with the work of a humanist, who both examines and analyses texts (search engine) and documents his/her findings (hypertext). Finally, I think we are all waiting anxiously for MicroArras. The most promising approach I've heard is the one John Smith is taking: placing the "analytic engine" under UNIX on a workstation and the user interface on a DOS machine. That would free the engine from the constraints that DOS poses, such as the 640K memory space. Dream on.... Malcolm Brown Stanford (gx.mbb@stanford) ========================================================================= Date: Tue, 29 Mar 88 11:59:14 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: URICA? (24) --------------------- Date: Fri, 25 Mar 88 23:05:04 EST From: dow@husc6.BITNET (Dominik Wujastyk) A little while ago there was a message about URICA here on HUMANIST. It was billed as a program that would be useful in constructing a critical edition. I sent mail to the uucp address that was given, but my message has been greeted with a deafening silence. I shall phone and so on, but in the meantime, can anyone say anything more about what URICA can actually do? Have any of us used it? Dominik bitnet: user DOW on the bitnet node HARVUNXW arpanet: dow@wjh12.harvard.edu csnet: dow@wjh12.harvard.edu uucp: ...!ihnp4!wjh12!dow ========================================================================= Date: Tue, 29 Mar 88 21:15:43 EST Reply-To: Willard McCarty Sender: HUMANIST Discussion From: Willard McCarty Subject: Nota Bene and WordPerfect (353) [In the interests of vigorous and potentially enlightening argument, I am passing on a lengthy piece from a lively member in Israel. Nothing quite arouses the passions of a computing humanist like a debate about wordprocessing packages, and I am hoping that the following will be no exception.
I am particularly hoping that we can get beyond debate and attempt to answer the question, `What makes good software good?' My thanks to the NOTABENE list, from which the following has been lifted. W.M.] ------------------------------------- Date: Tue, 29 Mar 88 19:20:08 IST From: Itamar Even-Zohar Subject: Reply to some remarks about Nota Bene and WordPerfect (in a letter to a colleague) 1. From what I have learned about and experienced with WordPerfect, it seems to be a very good word processor. However, for a large range of features, Nota Bene is quite superior to it. No doubt, had there been no Nota Bene, or only its word processor, known on the market as XyWrite (version 3.1), WordPerfect would definitely have come first or second, respectively. 2. The programs cannot adequately be compared in toto, since Nota Bene is not just a word processor like WordPerfect but rather a package of programs. Besides its word processor, it also has various application programs (indexing, bibliographies, database, generating forms, etc.), a unique Textbase, and an extremely useful programming language. From the point of view of somebody in the social and human sciences, Nota Bene therefore has, a priori, a tremendous advantage, since even pricewise it is a better bargain than if you had to add all those extras to WordPerfect. The Textbase, however, even if you wished to buy it for WordPerfect, is simply not available on the market as a separate program. And the absence of a programming language in WordPerfect cannot be remedied by some extraneous programming language. I would like to stress that the Textbase is such a unique feature that I would have bought Nota Bene even if its word processor had been inferior to WordPerfect's (which is not the case, as you will see from the following). WordPerfect's official price is $495 (according to PC-Magazine), while Nota Bene's official price is also $495.
Both can be purchased for much more favorable prices, but in view of Nota Bene's package-deal nature it is a far better bargain, and in view of its superiority as a word processor, even had there been a price disadvantage, the quality and quantity of its features would have fully justified it. (Note: I know there is an indexing function in WordPerfect, as well as a table-of-contents utility, but they work far less elegantly than Nota Bene's, esp. as regards number of levels and formatting options.) 3. WordPerfect had, until very recently, two important features lacking in Nota Bene: a good speller and a thesaurus. This was a clear-cut advantage, but WordPerfect had to develop these because, not being a pure ASCII program, it would have difficulty interacting with external spellers and thesauri. Nota Bene, on the other hand, could afford to postpone developing its own speller/thesaurus because all spellers and thesauri can be used with it, including Turbo Lightning. Since you can load any program on top of Nota Bene (that is, without exiting Nota Bene), it is also very easy and practical to use non-memory-resident spellers and then return to Nota Bene in no time. Recently, version 3.0 of Nota Bene acquired a fabulous speller and thesaurus. Some time ago, PC-Magazine enthusiastically described XyWrite's thesaurus and spelling checker, which it considered far superior to anything then available on the market. These have now been made available to Nota Bene's users with some remarkable enhancements (like the auto-replace writing mode). So even this gap between the programs has been bridged, to the clear advantage of Nota Bene. 4. If we compare Nota Bene in toto to WordPerfect, the whole of WordPerfect would equal one third of Nota Bene. It would therefore be fair and adequate to compare only this portion, which is mainly the wordprocessing features of both. Let me therefore discuss some of the programs' respective wordprocessing features: 1.
Speed. I find it very strange to read your colleague's words about WordPerfect being a faster program than Nota Bene. Nota Bene's word processor is basically an enlarged XyWrite. And XyWrite, PC-Magazine's official word processor for the past two years (see PC-Magazine Vol. 6 No 10 [May 26 1987], p. 220), has won the recognition and acclaim of all experts in the field for being the fastest program on the market. (There are many references to the "blazing speed" of XyWrite/Nota Bene. See the quoted article, p. 219.) (To scroll from beginning to end of a long file takes considerably less than half the time in NB that it does in WP, and other functions are similar. Moreover, cursor movement, deletion, etc. in NB can be done by letter, word, phrase, sentence, or paragraph. In WP, as of version 4.1, only word-level functions were available, and even those only for cursor movement and forward deletion, if I am not mistaken.) 2. WYSIWYG and Desktop publishing. It is not true that WordPerfect is more WYSIWYG than Nota Bene, and Nota Bene is ahead of any other word processor for desktop publishing. In NB bold is shown as bold, underline as underline, and line breaks appear where they will print. In NB one sees the deltas, which indicate precisely where format commands have been placed, so that one can edit the format commands directly, a process which is extraordinarily difficult in WP. One must there enter a reveal-codes mode in which the cursor moves very sluggishly, and even then only one or two lines at a time can be seen. Moreover, one cannot edit the values of format commands directly, but must erase the existing ones and reenter new ones from the menu. Finally, if one wants in NB to see the document with true spacing and with the deltas suppressed, these modes are only a keystroke away; moreover, they can be made defaults as well. Toggling between these modes and the regular mode is a matter of a keystroke.
It is true that WP shows page breaks on screen, which one must enter review mode to see in NB, and this is a distinct advantage for WP; however, it comes at the cost of severely reduced scroll speed in WP. NB offers you a page-line mode, which tells you very quickly where there are page breaks as well. Further, it is simply not true that one requires a Hercules card to access extended characters in NB. These are fully accessible, and if you wish, you can also put them not where NB put them but where YOU would like them to be. The Hercules card or EGA is only required for special languages whose characters are unavailable on a chip, such as Hebrew with vocalization, Greek, or Cyrillic. WP cannot access such downloaded character sets at all at present. So NB not only has better access to the extended characters but also has access to downloaded characters, which are not accessible at all in WP. As for desktop publishing, "So many professional writers make use of XYWrite's ability to generate pure ASCII files, which can be handed directly to a typesetter, that staying ahead of the pack here virtually assures continuing preeminence for XyWrite" (PC-Magazine, Vol. 6 No 10:219). For more about Nota Bene's superiority as a desktop word processor see page 220 of the article quoted above. In a special review about "The Desktop-Publishing Phenomenon" by John W. Seybold (Byte, Vol. 12 No 5 [May 1987]:149-166), it is stated that: "As a microcomputer-based text generator for composition systems, XyWrite has no competitors. XyWrite's compatibility with almost every other system in the composition-systems market has made it the most popular text processing package for microcomputers in the publishing industry. It has become the program of choice for people in the composition business and is commonly used to emulate the editing functions of large editorial systems such as Atex..." (p. 164, 166).
On the whole, the layout features of Nota Bene are far superior to WordPerfect's, as you can infer from the description in PC-Magazine. Nota Bene/XyWrite's fantastic invention of commands hidden in deltas allows incredible flexibility in layout. You can not only shape the document as you like, but also easily see your hidden commands at any time (without getting a preview). 3. Sorting. Sorting is an extremely advanced feature in Nota Bene, and has been even further enhanced in version 3.0 (see my document on that version in NOTABENE@TAUNIVM). You can sort a file the way DOS does, but in contradistinction to DOS, you have far better control of the parameters. For instance, you can decide in what order sorting will take place by constructing various "sorting tables". This means that even non-English language material can be sorted correctly and that, if you wish, upper and lower case will be treated equally (unlike in DOS). Moreover, you can define, within a file, any number of lines and sort them in a split second. Among the most ingenious sorting options is the one allowing you to sort material without actually altering anything in the file. For instance, if you have a list of addresses where each address begins with "Mrs. and Mr....", you can put into a hidden delta the item according to which you want sorting to take place (like a family name). (And I have written a program in Nota Bene which carries out this operation automatically.) 4. Printer and character-set customization. There is nothing like Nota Bene's printer and character tables in WordPerfect, allowing you to customize complicated matters quickly and in an incredibly versatile way. Since printer tables are open to modification, you can control a lot of features involved in computer-printer communication. For instance, you can decide how a certain font or print mode will actually print (e.g., underline as italics, bold reverse as enlarged, or whatever).
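The hidden-delta sorting described under point 3 above -- ordering address lines by a family name stored in a concealed key, while the visible text is left untouched -- can be sketched generically in modern terms. This is a hypothetical illustration of the idea, not Even-Zohar's Nota Bene program:

```python
# Each record pairs the visible line with a hidden sort key,
# standing in for Nota Bene's key-in-a-delta trick.
addresses = [
    ("Mrs. and Mr. Smith, 12 Elm St.", "Smith"),
    ("Mrs. and Mr. Abbott, 3 Oak Ave.", "Abbott"),
    ("Mrs. and Mr. Jones, 7 Main St.", "Jones"),
]

# Sort by the hidden key; the visible lines are emitted unchanged.
for line, _key in sorted(addresses, key=lambda rec: rec[1]):
    print(line)
```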
Character tables allow high flexibility with printwheels, ASCII writing, and automatic transliterations. 5. Automatic caps. There isn't anything like Nota Bene's automatic capitalization at the beginning of sentences in WordPerfect. This simple yet ingenious feature saves you at least 25% of typing time and mistakes. 6. Automatic numbering. I am not aware of the existence in WordPerfect of 10 levels of automatic numbering, the shape of which can be controlled by a few deltas. (That is, you can change numbers to letters or Roman numerals in no time, even after having written the automatic numbers.) 7. Cross-referencing and multiple cross-referencing. I do not think WordPerfect has any cross-referencing features. In Nota Bene you can refer to a footnote number, page number, or section number. Anybody in our field knows very well how many tedious hours of hard drudgery are thus saved! And "Better still, if you like to create separate documents for different chapters that have subdivisions that are numbered, but you want to chain-print them, the number references can be made conditional, so that they will be ignored when included in a larger print chain but displayed if that document is handled alone" (PC-Magazine 6, 10:220). 8. Writing in columns. I am not fully aware of WordPerfect's column features, but as far as I have worked with it, I do not recall that it can write equal-width columns (far better than tabs for many documents, such as conference programs). As for newspaper columns, I don't think they work as easily in WordPerfect, since with Nota Bene you can either pre-decide to write that way or insert the necessary deltas post factum. (I am not sure of the number of columns either; in Nota Bene you can write 6 such snaking columns.) 9. "Foreign" characters and "foreign-language" versions. I doubt that WordPerfect can so freely accommodate "non-English" (or "foreign," as the Americans call them) characters.
Nota Bene simply allows you to put those anywhere you like on its keyboards, and you have full access to ASCII wherever you are. For multilingual writing this is a great relief. Moreover, Nota Bene has developed a Hebrew version, still experimental but already more advanced than any extant Hebrew word processor (I have been working with it for some time). Russian and Greek follow suit (these are not as complicated as Hebrew). Since you can access any downloaded characters and use character tables freely, other non-European languages can also be written. WordPerfect is not even interested, so it seems, in developing anything for Hebrew or other languages. 10. The open design of Nota Bene is on the whole far more sophisticated than anything else on the market. You may work either with or without menus/help screens, while with WordPerfect you are compelled to go through them for quite rudimentary matters (such as copying or moving). You can customize many defaults (some of which do not even exist in WordPerfect: turning the backup option on/off; having a prompt before erasing a file; changing the cursor from blinking to non-blinking; and a lot more) in a matter of seconds. 11. Various features that seem to be lacking in WordPerfect. The following features seem completely absent from WordPerfect (please ask your friend to correct me where I am wrong): 11.1. APPENDING (rather than merging) a file to another file (without actually entering it), or portions of a file to some other file. 11.2. Calling a file from any subdirectory and then saving/storing it to that directory without going to it. 11.3. Finding (through the "find" command) any file on the hard disk and then calling it to the screen without going to the actual directory. 11.4. Writing hidden prompts anywhere in a file and then accessing them quickly. (I believe WordPerfect 4.2 has now introduced something similar to that.) 11.5.
3 sets of footnotes plus endnotes, with call numbers in Arabic, Roman, or Latin numbering systems, plus any sign (an asterisk, or no sign at all). 11.6. 9 windows: full, split horizontally, split vertically, or all combined. Easy and fast movement between windows, copying/moving, etc. (See the praise of inter-window movement in the said article, p. 219.) 11.7. Automatic hyphenation, fully customizable, plus an open dictionary of exceptions for both English and any other language. 11.8. Better methods for print modes (underline, etc.) than any other word processor's, since these can be changed/abolished/searched in one command. 11.9. Much stronger (and of course quicker!) search/search back/change/change invisible/change for only upper or lower case/etc. operations. 11.10. Multiple search on whole diskettes/subdirectories. 11.11. Enlarged directory (with a desired number of lines from the beginning of each file). 11.12. Full control of print types and fonts from the file (with deltas), including mixtures of pitches and proportional vertical and horizontal spacings (Nota Bene calculates the screen in tenth-of-an-inch units rather than in numbers of characters). 11.13. Chain printing with sequential/non-sequential numbering. 12. Clumsy functions in WP. Many wordprocessing functions are very clumsy in WordPerfect. I admit I have not worked with 4.2, but as far as I have read, 4.2 has not dramatically changed 4.1. Take, for instance, moving a passage from one place to another. In WP you press Alt F4 to begin defining, then move the cursor word by word to the end of the paragraph. Then Ctrl F4 brings up a menu, where you choose the cut option; then you move the cursor to the insertion point, press Ctrl F4 again, and choose the insertion function from the menu. In NB, you define the paragraph with one stroke, move the cursor to the insertion point, and press the gray minus key. To change footnotes to endnotes in WP you must write a macro of about 8 steps and run it for each note.
Of course, you can automate this process, but it takes several seconds for each individual note. In NB you add two format commands (deltas).

13. WordPerfect's clumsy macros vs. Nota Bene's customizable keyboard and unique programming language

WordPerfect allows writing macros for adding necessary functions. If you read the lengthy article about this in PC-Magazine ( ) while already familiar with Nota Bene's customization and programming possibilities, you are definitely astounded by the clumsiness and rudimentary nature of these "macros". Such results, and far better ones, can be achieved most elegantly and easily in Nota Bene, either by customizing the keyboard, which is a smooth and painless operation, or by writing programs, small or large. If you take the file of programs I have written for Nota Bene, you will be able to appreciate the difference fully. Besides, some of the macros suggested in that article are already built-in features of Nota Bene.

An interesting exchange in PC-Magazine sheds some real light on the matter of macros in WP. Dave Tocus from Rockville, Maryland, writes:

   One drawback to WordPerfect macros is that the macro files must live in either the current directory or the same directory as WP.EXE. But since I have 130 macros, I would like to keep them in their own subdirectory.

The editor of this section, M. David Stone, reacts:

   ...My own preference with WordPerfect is to ignore the macro feature and use Superkey, Prokey, or some other keyboard redefinition utility instead. These programs eliminate the clutter of macro files by putting all WordPerfect macros in a single file. They also let you edit the macros, even without the WordPerfect Library. Prokey also permits named macros. (PC-Magazine, Vol. 16 No. 12, June 23, 1987:365. [Power User section])

14. Finally, it is NOT true that learning Nota Bene is a difficult matter.
On the contrary, with its clear and transparent philosophy, language-oriented (rather than arbitrary-key-oriented) commands, (optional) help screens and menus, a very good Tutorial, and an extremely laudable Manual, learning Nota Bene is a real enjoyment. Everybody can use it after a very short time very successfully, and those who wish to really make the most of it never reach a point of disappointment. In making version 2.0, the Dragonfly people have accommodated many incredible whims and dreams of its users, and it seems that this responsiveness has made the program what it has become.

*****END*****
=========================================================================
Date: Tue, 29 Mar 88 21:29:33 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Query & Reply (98)

(1) Date: Tue, 29 Mar 88 14:08:49 EST (21 lines)
    From: Rocco Capozzi
    Subject: Machine-assisted translation
(2) Date: Tue, 29 Mar 88 11:20:26 MST (60 lines)
    From: Mark Olsen
    Subject: URICA

(1) --------------------------------------------------------------------
Date: Tue, 29 Mar 88 14:08:49 EST
From: Rocco Capozzi
Subject: Machine-assisted translation

Does anyone know of reasonably inexpensive software for machine-assisted translation in an instructional setting? We in the Department of Italian Studies at the Univ. of Toronto would like to set up a course to train translators, and we are interested in using PC-type machines in a lab. We'd like software that would provide a good assortment of tools, e.g., online dictionaries and thesauri and user-constructed terminological dictionaries. The software could either provide the usual assortment of word-processing tools or work in conjunction with a word processor. The classroom should be kept in mind, but we do not need a network in which the students' machines are linked to the instructor's. Thanks very much.
Rocco Capozzi
ersatz@utorepas

(2) --------------------------------------------------------------------
Date: Tue, 29 Mar 88 11:20:26 MST
From: Mark Olsen
Subject: URICA

To respond to Dominik's query about URICA, I have tested it for use here and think it is a pretty good system. URICA stands for "User Response Interactive Collation Assistant" -- stress the interactive. It compares one text file to either another text file or keyboard entry, stopping at EVERY variant and allowing the user either to correct an error (in keyboard mode) or to write the variant to an apparatus file. It runs well and is fast enough to be useful. Text appears in two windows, and you can follow along as it compares the texts (assuming, of course, that you are working with two files rather than keyboard entry). The format of the apparatus is typically as follows:

URICA  : User Response Interactive Collation Assistant
TEXT 1 : C:GRIMM.TX2
TEXT 2 : C:GRIMM.TX2

INSERTION
P001L01W08 his << >> wife
P001L01W08 his << lovely >> wife

TYPOGRAPHICAL ERROR REPLACEMENT
P001L02W22 which << over- looked >> a
P001L02W22 which << overlooked >> a

DELETION
P001L03W07 of << lovely >> flowers
P001L03W06 of << >> flowers

REPLACEMENT
P001L04W01 and << nobody >> dared
P001L04W01 and << no one >> dared

INSERTION
P001L04W11 a << >> powerful
P001L04W12 a << very >> powerful

REPLACEMENT
P001L04W17 by << everybody. >> One
P001L04W19 by << everyone. >> One

DELETION
P001L06W13 eat << some of >> it.
P001L06W13 eat << >> it.

I had been playing with OCCULT before seeing URICA, and let me assure you that URICA is a far sight easier to use and more accurate than that old beast. I have not had the opportunity to use it "for real," but the people I have shown it to here think that it would be most useful.
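[Editor's note: the apparatus format above can be approximated with an ordinary word-level diff. The sketch below is an illustration only, not URICA's actual algorithm; the function name `collate`, the sample sentences (adapted from the Grimm example), and the simplified positions (bare 1-based word indices, WNNN, in place of URICA's PnnnLnnWnn page/line/word codes) are all my own assumptions.]

```python
import difflib

def collate(words1, words2):
    """Word-level collation in the spirit of URICA's apparatus:
    for each variant, report its category (INSERTION, DELETION, or
    REPLACEMENT), then the readings of both texts between << >>
    markers, flanked by the preceding and following word."""
    entries = []
    sm = difflib.SequenceMatcher(a=words1, b=words2)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == 'equal':
            continue
        kind = {'insert': 'INSERTION',
                'delete': 'DELETION',
                'replace': 'REPLACEMENT'}[tag]
        # Words immediately before and after the variant site in text 1.
        before = words1[i1 - 1] if i1 > 0 else ''
        after = words1[i2] if i2 < len(words1) else ''
        entries.append((kind,
            f"W{i1:03d} {before} << {' '.join(words1[i1:i2])} >> {after}",
            f"W{j1:03d} {before} << {' '.join(words2[j1:j2])} >> {after}"))
    return entries

text1 = "his wife tended a garden of lovely flowers and nobody dared eat some of it".split()
text2 = "his lovely wife tended a garden of flowers and no one dared eat it".split()
for kind, reading1, reading2 in collate(text1, text2):
    print(kind)
    print(reading1)
    print(reading2)
```

Run on the two sample sentences, this reports an insertion of "lovely", a deletion of "lovely", the "nobody"/"no one" replacement, and the deletion of "some of", much as in the listing above.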
Mark
=========================================================================
Date: Tue, 29 Mar 88 21:31:49 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Announcements (78)

(1) Date: Tue, 29 Mar 88 15:22:09 CST (25 lines)
    From: Richard Goerwitz
    Subject: Penn OT texts
(2) Date: Tue, 29 Mar 88 17:40:37 GMT (38 lines)
    From: CMI011@IBM.SOUTHAMPTON.AC.UK
    Subject: Job in multi-media databases

(1) --------------------------------------------------------------------
Date: Tue, 29 Mar 88 15:22:09 CST
From: Richard Goerwitz
Subject: Penn OT texts

The two programs that I had which (a) sliced up Penn OT texts and (b) printed them out are still available. I say this because a couple of people requested them two weeks ago, at a time when I happened to be about to take an Aramaic and Akkadian exam. I was able to send out only file (a) at the time; file (b) has a font that goes with it, and I asked people to send me a reminder. So please go ahead and send me those reminders, you people who still want them. Apologies to Bob Kraft, by the way: my notes on his texts seemed to imply that his texts were somehow defective, which they are not! The coding scheme is, as it stands, minimal; but he has programs that can expand them as needed (as do I). Note also: in more than a year of intense experiments with these texts I have yet to find an error. This has been very hard for me to grasp. How can this be?!

-Richard L. Goerwitz
goer@sophist.uchicago.edu
!ihnp4!gargoyle!sophist!goer

(2) --------------------------------------------------------------------
Date: Tue, 29 Mar 88 17:40:37 GMT
From: CMI011@IBM.SOUTHAMPTON.AC.UK
Subject: Job in multi-media databases

Here is an advert that went out a couple of weeks ago. Push it out on any bulletin board you can think of. Thanks.
(PS I don't know how to put the pounds sign in for the salary)

RESEARCH ASSISTANT IN COMPUTER SCIENCE

The Image and Video research group of the Department of Electronics and Computer Science has been awarded a research grant to employ a research assistant/programmer for at least one year to work on multi-media databases. Part of this work, at least one-third of the time, will be undertaken at the University of Essen in West Germany. Appropriate supplementation will be given to reflect extra living expenses whilst in Germany. The successful applicant would be eligible to register as a part-time Ph.D. student at the University of Southampton. A suitable honours degree and an ability to program in Pascal and C are necessary requirements for the job, but some knowledge of German would be an advantage.

For further information contact Dr Wendy Hall, Department of Electronics and Computer Science, University of Southampton, Southampton S09 5NH, UK. e-mail: wh@uk.ac.soton.cm

Salary: pounds 8,675 - 11,680

Applications, including a CV and the names and addresses of two referees, should be sent to Mr. H.F. Watson, Staffing Department, The University, Highfield, Southampton, SO9 5NH, as soon as possible, quoting reference 629/HFW/SMT.
=========================================================================
Date: Wed, 30 Mar 88 19:06:39 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Text comparison software (32)

From: Gene Boggess

Regarding Mark Olsen's March 16 () remarks on text comparison programs, one of my colleagues, Dr. Peter Shillingsburg, has something that may be of use. He has long been involved in The Thackeray Project, which compares various editions of Thackeray's works. To facilitate this process, he devised the CASE (Computer Assisted Scholarly Editing) program, which is designed to assist the production of critical editions, from text comparison through the preparation of textual apparatuses and typesetting.
One of my assistants has recently completed the conversion of this program from PL/1 to Pascal and has compiled the system to run as a series of menu-driven programs for the IBM-PC and compatibles. The text-comparison portion is not interactive, as Olsen requested, but it is integrated so that the output from one process serves as input for the next. It is best suited to large prose texts with multiple relevant manuscripts and editions. For further information, write:

Dr. Peter L. Shillingsburg
English Department, Mississippi State University
Mississippi State, MS 39762.
=========================================================================
Date: Wed, 30 Mar 88 19:09:54 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Translation software (25)

Date: Tue, 29 Mar 88 20:40:57 MST
From: Mark Olsen

I have reviewed a couple of translation tools that might be useful. Mercury, by Lingua-Tech (reviewed in a recent _Computers and the Humanities_), is a pretty decent memory-resident multi-lingual glossary manager. INK TextTools provides more sophisticated glossary management and memory-resident access (I reviewed it in the most recent number of _LT: Language Technology_). I can send you e-mail copies of either of these reviews. You may also be interested to know about the translation support that Dr. Ted Cachey and I are giving to the _Repertorium Columbiaum_ project under the direction of Friedi Chaippelli (UCLA). This is a twelve-volume body of texts dealing with the discovery of America. We are using WordPerfect, WordCruncher and Mercury to provide an interesting translation environment. A preliminary outline of that methodology by Cachey and me appeared in _Computers and Translation_ last year. The goal of the project is consistent translation across the volumes, and we hope that this approach will encourage it. Let me know if there is anything I can do for you.
Mark Olsen
=========================================================================
Date: Wed, 30 Mar 88 19:13:15 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Wordprocessing (79)

(1) Date: Tue, 29 Mar 88 23:29 MST (22 lines)
    From: "David Owen, Philosophy, University of Arizona"
    Subject: Turbofonts
(2) Date: 30 March 1988 09:18:21 CST (39 lines)
    From: "Michael Sperberg-McQueen"
    Subject: Nota Bene and Word Perfect (ca. 40 lines)

(1) --------------------------------------------------------------------
Date: Tue, 29 Mar 88 23:29 MST
From: "David Owen, Philosophy, University of Arizona"
Subject: Turbofonts

Only one person responded to my recent request for experiences with the multi-lingual add-on Turbofonts. He reports that a colleague has used it with some success, though with some difficulty. I quote:

   "She said that Turbo Fonts produces good enough output once it is configured, but that configuring it to work with WordPerfect 4.2, with the printer driver in use in the department, and with one or two other things had just about driven her crazy. For example, when she finally thought she had it installed and working, every printout had seemingly random dots and partial underlines scattered through it, which were apparently the devil to track down and eliminate. She does say, though, that TurboFonts has not caused anything else to crash, which is some blessing, I guess."

Sounds as if we should wait for WordPerfect ver 5, or switch to NotaBene.

David Owen
OWEN@ARIZRVAX.BITNET
OWEN@RVAX.CCIT.ARIZONA.EDU

(2) --------------------------------------------------------------------
Date: 30 March 1988 09:18:21 CST
From: "Michael Sperberg-McQueen"
Subject: Nota Bene and Word Perfect (ca. 40 lines)

While an admirer of both Nota Bene and Word Perfect, I have had more experience supporting the latter.
So I can offer these corrections to Itamar Even-Zohar's list of things Word Perfect cannot do. Word Perfect has no trouble at all displaying or accepting from the keyboard any character in your character set; I never had any problems with EGA or other user-loaded fonts. The key definition facility is, to be sure, less flexible than Nota Bene's (but also easier to use). For complex key redefinitions, one can and should use a memory-resident keyboard macro program. (These have always worked with Word Perfect; last time I tried, they did not work with XyWrite or N.B. -- has that changed?) Also, Word Perfect does have the abilities:
 - to add columns after the fact
 - to retrieve and save files from/to directories other than the current directory (11.2)
 - to search for formatting codes (11.8)
 - to search for a word or phrase in a whole set of files (11.10)
 - to control font switching, etc. in the printer (11.12)
(I don't know what 'full control' might include, so I won't claim it. Word Perfect's printer drivers are numerous and readily accessible for user customization. Their chief drawbacks vis-a-vis XyWrite / N.B. printer drivers are that they are not ASCII files and that they have some mysterious overall length limitation, which I ran into only with laser printers requiring extraordinarily long escape sequences.) I don't dispute the central claim that Nota Bene is a good program and more powerful than Word Perfect. But the record should be correct on the details.

Michael Sperberg-McQueen, University of Illinois at Chicago
=========================================================================
Date: Thu, 31 Mar 88 20:59:47 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Wordprocessing: NB & WP (251)

(1) Date: Thu, 31 Mar 88 16:54:08 IST (36 lines)
    From: Itamar Even-Zohar
    Subject: Re: Wordprocessing (79)
(2) Date: 31 March 1988 (72 lines)
    From: Willard McCarty
    Subject: What makes good software good?
(3) Date: Thu, 31 Mar 88 12:42 EST (121 lines)
    From: PROF NORM COOMBS
    Subject: Word Processing software

(1) --------------------------------------------------------------------
Date: Thu, 31 Mar 88 16:54:08 IST
From: Itamar Even-Zohar
Subject: Re: Wordprocessing (79)

I should like to thank Michael Sperberg-McQueen for his corrections to my comparative description of WP vs. NB. As more corrections arrive, as I hope they will, we will no doubt have a better comparison. I would, however, like to make two comments:

1. I tried to underline in my description that it is not enough for two different programs to have the same feature. A major question is how *accessible* and *implementable* that feature is. In view of that, I think WP's drivers do not come close to the flexibility of NB's drivers. This is not an easy task in NB either, basically because printers are complicated. But even with laser printers you have all the drivers open to you like an open book, and you have full control of all the details.

2. Although I was sad to discover that NB 3.0 has dropped the ESC keyboard, the overall power of the keyboard has not changed. I now use Right Shift as a toggle key instead, and on my AT I have customized the SysReq key. You can assign any macros as before, and powerfully combine phrase-library macros and ampersand macros with the keyboard. I normally run most of my most-needed programs (written in the NB language and downloadable from LISTSERV@TAUNIVM; to see what's available type: TELL LISTSERV@TAUNIVM INDEX NOTABENE) from the keyboard rather than imposing long macros on it.

Thanks again for all corrections. But I am afraid that even with the pluses added, WP remains less adequate software for us researchers in the human and social sciences.

Itamar Even-Zohar
Porter Institute for Poetics and Semiotics

(2) --------------------------------------------------------------------
Date: 31 March 1988
From: Willard McCarty
Subject: What makes good software good?
Itamar Even-Zohar has touched on an interesting point about wordprocessing packages that, I think, applies to software in general. He remarks that the fact that two packages have roughly (or even exactly) the same features does not tell the whole story; you must look at how these features have been implemented. I would like to carry this further. I would like to argue that features *as such* are epiphenomena, and that what matters more to the user of a program is the underlying `structure' manifested in these features and their manner of implementation. As in other matters, we are constrained to know invisible things through the visible, but we must take account of the invisible or we are at the mercy of devils. That is to say (using less religious language), computer programs are human artifacts and so are bound to incorporate a human mentality. It is this mentality, or whatever else you wish to call it, that must finally be considered, and *is* considered subliminally if not analytically by the user. Why else is it that people tend to feel so deeply about their wordprocessing packages and get so defensive if they (which is the antecedent of this pronoun?) are attacked?

I was driven to think in this way by spending considerable time reviewing software, mostly wordprocessing software. I had a minor epiphany one day when attempting to figure out a particular package (I mercifully forget which): I suddenly realized that the thing must have been designed by a raving lunatic. It had all the right features, but the way they were put together seemed to make no sense whatsoever. To use an analogy, the experience was not like talking to a man confused by drink, but rather like talking to someone in the preternatural clarity of some schizophrenic state. Less dramatic encounters have reinforced my conclusion that programs reveal mental states or conditions.
Thus I've also briefly lived with cloyingly obsequious programs, with neo-Stalinist systems, and with others that have all the arrogant helpfulness of a benign authoritarian towards slaves and children.

I am rather less sure of how to systematize this sort of analysis. Features are easily listed, and one can say whether or not a program does what its vendor claims it does. Perhaps one reason why people rightly continue to feel that programs must be tried out before they buy them, or that the opinion of a trustworthy advisor must be sought, is that the worth of a program cannot be determined from any list of attributes. Would any of us substitute the listing of a table of contents for a good book review by a dependable scholar?

I happen to like the features of Nota Bene, and I use many of them. Fundamentally, however, I use the package because I have considerable respect for the mind in the program. It is a very scholarly mind, and it indeed manifests some of the quirks of personality scholars often have, e.g., not suffering fools gladly but offering the initiate great rewards of intellectual joy. (Let it be noted, however, that my 11-year-old son, who is not an especially good student, uses NB for all his writing assignments and school projects. So, is NB difficult? Perhaps the academic with 4 languages and a Ph.D. should feel a bit reluctant about claiming that it is.)

I for one would be very interested to know what others think about these matters, i.e., what makes good software good. Since many of us are more or less directly involved with the designing of software if not the actual writing of it, and since most or all of us live with software daily, I'd think this a worthy subject for debate here.
Willard McCarty
mccarty@utorepas

(3) --------------------------------------------------------------------
Date: Thu, 31 Mar 88 12:42 EST
From: PROF NORM COOMBS
Subject: Word Processing software

What follows is a commentary on Word Perfect by one of our software specialists, who also teaches Word Perfect courses for users. The comments and opinions are his. His name is Vince Incardona, VXIACC@RITVAX.BITNET. I am Norman Coombs, NRCGSH@RITVAX.BITNET.

......... ........

I don't mean to imply that NB is no good (actually, I wish Dragonfly Software had chosen WordPerfect as a base to work from), or to compare WP and NB as this author has done, but I really feel that the guy who wrote the "comparison" in 46.0 should get his facts straight before putting a critique like this out on a network. For example:

> 6. Automatic numbering
>
> I am not aware of the existence of 10 levels of automatic numbering,
> the shape of which can be controlled by few deltas, in WordPerfect.
> (That is, you can change numbers to letters or Roman numbers in no
> time, even after having written the automatic numbers.)

WordPerfect has had this feature for at least 4 years that I know of. They call it "mark text".

> 8. Writing in columns
>
> I am not aware of WordPerfect's column features,

If you don't know anything about the feature, how can you compare it to the same thing in some other package?

> it, I do not recall it has the capacity of writing equal-width columns

Well, it does, and has had this ability since at least 1985.

> programs of conferences). As for newspaper columns, I don't think they
> work as easily in WordPerfect, since with Nota Bene you can either
> pre-decide to write that way or insert the necessary deltas post
> factum.

You can do this with WordPerfect, too, and in much the same way NB does it. The author couldn't even have tried it.

> (I am not sure of the number of columns either; in Nota Bene
> you can write 6 such snaking columns.)
WP allows as many as you can fit on the page; 6 or 8 is about as many as you can reasonably fit.

> You may work either with or without menus/help screens, while with
> WordPerfect you are compelled to go through them for quite
> rudimentary matters (such as copying or moving).

No, you're not "compelled" to go through _any_ sequence of keystrokes, menu or otherwise. That's what macros are for, but then this author says he doesn't like to use them because he prefers to write programs instead. To each his own, but I'd rather not program if I don't have to.

> The following features seem completely absent from WordPerfect (please
> ask your friend to correct me where I am wrong):

Consider yourself corrected:

> 11.2. Calling a file from any subdirectory then saving/storing it to
> that directory without going to it.
> 11.3. Finding (through "find" command) any file on hard disk then
> calling it to screen without going to actual directory

This ability has been part of WordPerfect ever since Version 3.4. Use the "retrieve" key.

> 11.8. Better methods for print modes (underline etc.) than any other
> wordprocessor's, since these can be changed/abolished/searched in one
> command.

You can do this in WordPerfect, too. Always could, as far as I know.

> 11.10. Multiple search on whole diskettes/subdirectories.

It's option number 9 on WordPerfect's "list files" menu.

> Many wordprocessing functions are very clumsy in WordPerfect.
> to move a passage from one place to another. In WP you press Alt F4 to
> begin defining, then move the cursor by word to the end of the
> paragraph.

You're right, that's clumsy. That's why the best way to move a paragraph in WP is to press CTRL-F4 and choose "move paragraph" -- something this person has obviously not even looked up in the manual.

> Everybody can use it (Nota-Bene) after a very short time very
> successfully..

That statement is more of a glittering generality than an actual fact.
I would challenge that notion with respect to ANY software, and the lack of objectivity in a statement like this really makes me question this person's agenda in comparing these two packages. There are other erroneous or misleading statements in here, but the point is that postings like this should be taken with a grain of salt. I've noticed that sometimes people tend to make up their minds that they are going to dislike some things before trying them, and I wonder if a little of that isn't going on in this person's mind. They apparently used a long-outdated version of WP and could not have spent more than an hour with it before deciding it wasn't as good as whatever they were already using.
=========================================================================
Date: Thu, 31 Mar 88 21:03:59 EST
Reply-To: Willard McCarty
Sender: HUMANIST Discussion
From: Willard McCarty
Subject: Readers of Madame Bovary? (25)

Date: 30-MAR-1988 22:41:40 GMT
From: GW2@VAXA.YORK.AC.UK
Subject: A New Translation of Madame Bovary

I am currently working on a new translation of MADAME BOVARY, to be published by Penguin Books in 1990. Anyone interested in reading, critically, a few chapters-in-progress? I don't expect any massive labour of erudition, or even a knowledge of the original. But it would be useful to have a couple of 'test-readers' scanning my version for the incomprehensible or the merely fatuous. Please get in touch if you think you might be interested.

Geoffrey Wall
=========================================================================